EP1972767A1 - A method for adapting a combustion engine control map - Google Patents

A method for adapting a combustion engine control map

Info

Publication number
EP1972767A1
Authority
EP
European Patent Office
Prior art keywords
map
algorithm
samples
local
updating
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
EP07104811A
Other languages
German (de)
French (fr)
Other versions
EP1972767B1 (en)
Inventor
Erik Larsson
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Volvo Car Corp
Original Assignee
Ford Global Technologies LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ford Global Technologies LLC filed Critical Ford Global Technologies LLC
Priority to EP20070104811 priority Critical patent/EP1972767B1/en
Priority to DE200760012825 priority patent/DE602007012825D1/en
Publication of EP1972767A1 publication Critical patent/EP1972767A1/en
Application granted granted Critical
Publication of EP1972767B1 publication Critical patent/EP1972767B1/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • F: MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F02: COMBUSTION ENGINES; HOT-GAS OR COMBUSTION-PRODUCT ENGINE PLANTS
    • F02D: CONTROLLING COMBUSTION ENGINES
    • F02D41/00: Electrical control of supply of combustible mixture or its constituents
    • F02D41/24: Electrical control of supply of combustible mixture or its constituents characterised by the use of digital means
    • F02D41/2406: using essentially read only memories
    • F02D41/2425: Particular ways of programming the data
    • F02D41/2429: Methods of calibrating or learning
    • F02D41/2477: Methods of calibrating or learning characterised by the method used for learning
    • F02D41/2451: Methods of calibrating or learning characterised by what is learned or calibrated
    • F02D41/2454: Learning of the air-fuel ratio control
    • F02D41/2461: Learning of the air-fuel ratio control by learning a value and then controlling another value

Definitions

  • Look-up tables consist of interpolation nodes which are distributed on a grid; each node is located by a coordinate c and associated with a height θ.
  • The output of a look-up table is given by interpolation between the nodes which spatially surround the input coordinate in each dimension and their associated heights.
  • Look-up tables can be written on the basis function framework.
  • The height parameters correspond to the linear functions and the spatial interpolation between the nodes corresponds to the basis functions. Only the basis functions Φ_{i,j} that are within the active interpolation area are non-zero, see equation (13), where uniform two-dimensional look-up tables are assumed.
  • The nodes are usually uniformly distributed in the input space, but it is also possible to have a non-uniform distribution [Nelles]. From here on, when referring to look-up tables, a uniform distribution is assumed. It is also assumed that all nodes are fixed a priori. The optimization of the location and number of nodes will not be addressed here.
  • Determining the initial heights θ of the map is a pure linear optimization task. It is done by minimizing the sum of squared errors in equation (15) over all measurement samples S and solved by the least squares algorithm of equation (16) [Nelles].
  • In [Nelles] several properties of look-up tables are given; some of the more relevant are stated here. Three important benefits of using look-up tables are that they have high evaluation speed, the parameters of the linear models can be optimized fast, and they are simple to implement. Two negative properties are non-smoothness and that they suffer severely from the curse of dimensionality, meaning that the memory requirements grow quickly with the dimension of the map.
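  • As a minimal illustration of this node-and-height structure, the following Python sketch (not part of the patent; the node coordinates and heights are invented for the example) evaluates a one-dimensional uniform look-up table by linear interpolation, which is exactly the weighting by hat-shaped basis functions described above.

```python
import numpy as np

def lut_eval(x, c, theta):
    """Evaluate a 1-D look-up table at input x.

    c     -- node coordinates (uniform, ascending)
    theta -- node heights (the linear parameters of the map)

    Only the two nodes surrounding x have non-zero basis
    functions; their hat-shaped weights sum to one.
    """
    i = np.clip(np.searchsorted(c, x) - 1, 0, len(c) - 2)
    w = (x - c[i]) / (c[i + 1] - c[i])   # basis function of node i+1
    return (1.0 - w) * theta[i] + w * theta[i + 1]

# Illustrative map: an engine-speed axis with made-up heights.
c = np.linspace(1000.0, 6000.0, 11)   # node coordinates [rpm]
theta = np.sin(c / 1000.0) + 1.0      # invented node heights
print(lut_eval(2750.0, c, theta))
```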
  • LLNFMs can also be written on the basis function framework given above.
  • The basis functions in LLNFMs are normalized Gaussian functions, see equation (19). These Gaussian functions are also referred to as validity functions, because they indicate how much their respective local models affect the output. By normalized is meant that the sum of all M basis functions always adds up to one, see equation (20). It is also assumed that the Gaussian functions are axis-orthogonal, i.e. the parameters which determine the width and position of the function in each dimension are independent of each other.
  • c_{i,j} is the center position of validity function i in dimension j and σ_{i,j} determines the width of validity function i in dimension j.
  • The linear functions in the basis function framework are here made up of local linear models (LLM) with respect to the input coordinate x, see equation (21).
  • The output of the LLNFM results in equation (22).
  • The Gaussian validity functions are always non-zero, thus all the LLMs always contribute to the output, although to a varying degree with respect to the input coordinate.
  • The model parameters can be divided into two categories: those in the basis function and those in the linear function.
  • The parameters in the linear function are the LLM parameters θ_{i,j}, while the parameters in the basis function are the center positions c_{i,j} of the validity functions and their widths σ_{i,j}. This is advantageous because if the validity functions are specified, which is done by c_{i,j} and σ_{i,j}, the linear model parameters are easily optimized with least squares.
  • The global approach is particularly unsuitable for online adaptation, which is the focus here, because of its high computational complexity and its less robust behavior (for more about online adaptation of LLNFM see Section 3.3).
  • The local estimation is derived from the loss function in equation (23) for LLM i over all S sample pairs (x_j, y_j). Note that the errors are weighted with the current validity functions.
  • This optimization problem is solved with weighted least squares, in equation (24), where the parameter vector, weight matrix, and regressor matrix are given by equation (25). Observe that the regressor matrices are independent of i, since all data samples are used in the estimation of every LLM.
  • From the general expression of the variance error of the model output given in equation (5), an expression for LLNFM follows in equation (26), where it is assumed that the estimation is done with the local method given above [Nelles].
  • The parameter variance error for each LLM is given in equation (27), where the two last equalities are based on the assumption of white noise.
  • The regressor and weight matrix are given by equation (25).
  • Some relevant beneficial properties of LLNFM are fast linear model parameter optimization, easily controlled smoothness of the map, and a low curse of dimensionality [Nelles]. Hence in applications with high dimensional maps the LLNFM is a better choice than a look-up table if memory requirements are crucial. One relevant negative property is that they only have medium evaluation speed [Nelles]. Look-up tables are by far more frequently used in applications, thus look-up tables will be the most considered map representation in the remaining chapters.
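  • The weighting of local linear models by normalized Gaussian validity functions can be sketched in a few lines. The following example is illustrative only; the centers, widths and LLM parameters are invented, and equations (19)-(22) in the text remain the authoritative formulation.

```python
import numpy as np

def llnfm_eval(x, c, sigma, theta):
    """Evaluate a 1-D local linear neuro-fuzzy model at input x.

    c, sigma -- centers and widths of the Gaussian validity functions
    theta    -- (M, 2) array of LLM parameters [bias, rake] per model

    All validity functions are non-zero, so every local linear
    model contributes to the output, weighted by its normalized
    Gaussian (the weights sum to one).
    """
    mu = np.exp(-0.5 * ((x - c) / sigma) ** 2)  # unnormalized Gaussians
    phi = mu / mu.sum()                         # normalized validity functions
    llm = theta[:, 0] + theta[:, 1] * x         # local linear model outputs
    return np.dot(phi, llm)

# Illustrative model with three local linear models.
c = np.array([0.0, 1.0, 2.0])
sigma = np.full(3, 0.5)
theta = np.array([[0.0, 1.0],   # bias, rake of LLM 1
                  [0.5, 0.8],
                  [1.5, 0.2]])
print(llnfm_eval(0.7, c, sigma, theta))
```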
  • The recursive least squares (RLS) algorithm is a linear second order optimization method [Nelles]. It is summarized in equations (30)-(32), where X and θ are general regressors and parameters respectively. Its time complexity is of the order O(#parameters²) and it is extended with a forgetting factor λ and a weight factor w [Nelles].
  • The forgetting factor λ enables the algorithm to follow time-variant processes.
  • Its value is determined by the stability/plasticity trade-off: good noise attenuation (large λ) versus fast learning (small λ). Usually the value is set between 0.9 and 1.
  • The algorithm also has a weight factor which determines how much a sample should influence the estimation.
  • Without excitation, equation (32) reduces to approximately equation (33). This is described in Åström, K.J. and Wittenmark, B. (1989). Adaptive Control. Addison-Wesley Publishing Company, hereinafter referred to as [Åström and Wittenmark, 1989].

    P_k ≈ P_{k−1} / λ     (33)

  • A simple way to overcome this problem is to have a restriction in the algorithm which stops it when the residual ε or PX becomes smaller than a dead-zone [Åström and Wittenmark, 1989].
  • The diagonal elements of the covariance matrix P are stored in a variance matrix V in memory.
  • The new diagonal elements in the P-matrix for the new interpolation area are given by the stored variance matrix, while the covariance elements (non-diagonal) are set to zero.
  • The variance elements can be given relatively large values (100-1000), which gives a fast initial convergence. High variance values indicate uncertain values.
  • The flow chart of the modified RLS algorithm is depicted in Figure 3.
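  • A minimal sketch of such a weighted RLS step with forgetting factor and dead-zone is given below. It is an assumption-laden illustration, not the patent's implementation: the variable names, gain form and constants are chosen for readability, and the re-initialization helper mirrors the variance-matrix bookkeeping described above.

```python
import numpy as np

def rls_update(theta, P, x, y, lam=0.95, w=1.0, dead_zone=1e-6):
    """One weighted RLS step with forgetting factor lam.

    theta -- parameter vector, P -- covariance matrix,
    x -- regressor vector, y -- measured output.
    The update is skipped inside the dead-zone to avoid
    estimator wind-up when there is no new information.
    """
    eps = y - x @ theta                      # residual
    Px = P @ x
    if abs(eps) < dead_zone or np.linalg.norm(Px) < dead_zone:
        return theta, P                      # dead-zone: no update
    k = w * Px / (lam + w * (x @ Px))        # gain
    theta = theta + k * eps
    P = (P - np.outer(k, x) @ P) / lam
    return theta, P

def reinit_P(V_diag):
    """Re-initialize P when entering a new interpolation area:
    diagonal elements come from the stored variance matrix V,
    off-diagonal covariances are set to zero (per the text above)."""
    return np.diag(V_diag)   # e.g. V_diag initially filled with 100-1000
```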
  • The optimization of the linear model parameters is done locally, i.e. the local linear models are adapted one at a time. But contrary to the offline optimization, which optimizes all LLMs to every sample, the online adaptation adapts only those LLMs whose validity function is larger than a threshold, Φ_i(x) > Φ_thr, in the operating point of each sample.
  • Section 4.1 describes the tent roof tensioning (TRT) algorithm, which was presented in Heiss, M. (1997). Online Learning or Tracking of Discrete Input-Output Maps. IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, Vol. 27, No. 5, September 1997, hereinafter referred to as [Heiss, 1997], and suggests a modified version which is applicable to the current case, while Section 4.2 introduces a new algorithm for spreading adjustments to larger portions of the map.
  • The value of a local model θ̂(i) is defined as the map value in the coordinate x(i) of the local model.
  • Artificial samples are always generated in the coordinates of the local models.
  • The information in the actualization is an estimate of the map value in the coordinates of the LMs, and no information is given of how the details of the map around the coordinates of the LMs should look. Therefore these details should be kept unchanged if possible. Thanks to this framework the actualization algorithms can be described in general terms, are applicable to both map representations presented above, and are compatible with the update algorithms given above.
  • For look-up tables the general LMs refer to the individual nodes in the map. Note that when DA is used, the update of a measurement sample will adjust the two surrounding LMs in each dimension, while artificial samples only affect the LM on which the sample is placed. This holds for the modified RLS as well. However, when an artificial sample in, say, LM i is updated with the RLS, the height parameter of the neighboring node i+1 (or i−1) will not change its value, but its associated variance value in V does change in the update of the P-matrix. Hence the variance value of i+1 (or i−1) should be kept unchanged in the update.
  • The LLNFM architecture is based on local linear models (LLM), thus the general LMs simply refer to these given by the architecture.
  • The artificial samples should only change the bias parameter of the actualized LLM, and the rake parameter should be unchanged. If the RLS is used as update algorithm, only the bias parameter and its variance value should be changed. The other values, i.e. the rake parameter, its variance value and the non-diagonal covariance values in the P-matrix, should be kept unchanged in the update.
  • The algorithm was given in [Heiss, 1997], where it is implemented on a look-up table with a discrete input space. Thus no interpolation is done between the nodes, each allowed input is associated with a single height, and the update is done with simple direct adjustment (DA). The algorithm is summarized in Algorithm 4.1.
  • The figure illustrates tent roof tensioning with DA on a look-up table, indicating artificial samples, a measurement sample, the estimated map before adaptation (dash-dotted line) and the estimated map after adaptation (solid line).
  • The algorithm starts by placing a base for the tent, which forms a square around the updated local models (i1, i2), (i1, i2+1), (i1+1, i2), (i1+1, i2+1).
  • The square is given by connecting nodes between the nodes (i1−r, i2−r), (i1−r, i2+1+r), (i1+1+r, i2−r), (i1+1+r, i2+1+r), which are the corners of the square.
  • The tent roof is subsequently formed by linear interpolation between the base square and the square formed by the nodes in the interpolation area of the operating point, (i1, i2), (i1, i2+1), (i1+1, i2), (i1+1, i2+1), and all the nodes within this roof are given artificial samples, with values given by the tent roof in the coordinates of the actualized LMs.
  • The tent roof is formed by linear interpolation between the value θ̂(i) of LM i and the values θ̂(i−r) and θ̂(i+r) of LMs i−r and i+r. Note that the tent roof is in general asymmetric with respect to θ̂(i) due to the non-uniform distribution of the LMs.
  • A simple solution is to form a tent base at distance r irrespective of the LMs, i.e. at the coordinates x(i1±r, i2+j) and x(i1+j, i2±r) for j ∈ [−r, +r], and actualize all LMs within the tent base.
  • The tent base is thereafter formed by drawing straight lines between the coordinates of all the identified LMs j which surround LM i.
  • The values on the tent base needed for the tent roof can be read directly from the value of the map in the needed coordinates. All local models l within this tent base, i.e. those with Φ_l(x(i)) ≥ Φ_tentBase, are subsequently actualized analogously to the case of two-dimensional look-up tables.
  • The radius of the tent base is determined by Φ_tentBase instead of by r.
  • The tent base does not have the form of a square but of a polygon, due to the non-uniform distribution of the local models.
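  • The one-dimensional case of the tent roof idea can be sketched as follows. This is an illustrative reading of Algorithm 4.1, not a reproduction of it; the radius, gain and map values are invented, and DA with gain 1 simply places each node on the roof.

```python
import numpy as np

def trt_actualize(theta, i, r, da_gain=1.0):
    """Tent roof tensioning around node i of a 1-D look-up table.

    A 'tent roof' is drawn from the freshly updated height theta[i]
    down to the unchanged heights theta[i-r] and theta[i+r]; every
    node under the roof receives an artificial sample on the roof
    and is updated here with direct adjustment (DA).
    """
    M = len(theta)
    lo, hi = max(i - r, 0), min(i + r, M - 1)
    for j in range(lo + 1, hi):
        if j == i:
            continue
        if j < i:   # interpolate between theta[lo] and theta[i]
            a = (j - lo) / (i - lo)
            sample = (1 - a) * theta[lo] + a * theta[i]
        else:       # interpolate between theta[i] and theta[hi]
            a = (j - i) / (hi - i)
            sample = (1 - a) * theta[i] + a * theta[hi]
        theta[j] += da_gain * (sample - theta[j])   # DA update
    return theta

theta = np.zeros(11)
theta[5] = 1.0   # node 5 was just raised by a measurement sample
print(trt_actualize(theta, i=5, r=3))
```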
  • The first subsection will give the basic version of the algorithm and the second will discuss various extensions to it.
  • There exists a local straight-line model in every transition between two adjacent LMs (equation (45)); these are referred to as regression models. They give an estimated relationship between the values of two neighboring LMs.
  • The values of the artificial samples are given by these local pattern regression models. If e.g. an artificial sample is to be generated in LM i+1, its value is estimated from the value of a neighboring LM, e.g. i, with the regression model between the two of them.
  • Equation (45) is independent of the input coordinate x of the map; merely the value of LM i determines the value of the artificial sample in LM i+1.
  • The basic version of the algorithm starts by updating the map at the operating point ξ, which lies between, say, local models i and i+1. Thereafter artificial samples are created in local models i−1 and i+2 and the update algorithm is applied on these samples. Subsequently artificial samples are created in local models i−2 and i+3 from the levels of local models i−1 and i+2 respectively. This continues until the whole map is adapted.
  • The basic version of the local pattern regression models (LPRM) algorithm, implemented on a one-dimensional look-up table, is summarized in an iterative form in Algorithm 4.3.
  • The individual regression models are defined by two parameters: rake θ1 and bias θ0. With these, different patterns between the two associated local models can be represented. A change of the height in LM i can for example result in a large or a small height change in LM i+1. It is even possible to have a negative rake parameter, which results in a height increase in i+1 when the height of i decreases. With this discussion in mind, and remembering that each transition has a unique regression model, one may conclude that these simple regression models can give complex patterns in the actualization of the map. This conclusion is verified by the simulations in Section 5.
  • Figures 5a-5e illustrate an example of LPRM, indicating artificial samples, a measurement sample, the estimated map before adaptation (dash-dotted line), and the estimated map after adaptation (solid line).
  • A simple example of the algorithm with the DA used as update algorithm is illustrated in Figures 5a-5e.
  • The example starts with the map receiving a measurement sample y between LM 1 and LM 2 in Fig. 5a, whereupon these local models are updated with the DA algorithm. Thereafter an artificial sample y_a(3) is created in LM 3, with the value given by a regression model that estimates the value in LM 3 from LM 2 (regression model 2-3), as shown in Fig. 5b.
  • LM 3 is accordingly updated with respect to the artificial sample, as shown in Fig. 5c.
  • The same procedure follows for the next transition: Fig. 5d shows the estimation of an artificial sample in LM 4 from the level of LM 3 and their intermediate regression model, and in Fig. 5e LM 4 receives the artificial sample.
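  • The following sketch illustrates the basic LPRM actualization on a one-dimensional look-up table. It is a simplified reading of Algorithm 4.3 under assumed names and values: each transition j to j+1 owns one regression model (bias, rake), the inverted model is used when propagating leftwards, and the DA update with gain 1 is used so each node is set directly to its artificial sample.

```python
import numpy as np

def lprm_actualize(theta, i, reg):
    """Basic LPRM actualization for a 1-D look-up table.

    theta -- node heights; nodes i and i+1 were just updated
             with a measurement sample.
    reg   -- reg[j] = (bias, rake) of the regression model for
             the transition j -> j+1 (inverted for j+1 -> j).

    Artificial samples are propagated outwards on both sides of
    the operating point until the whole map is actualized.
    """
    M = len(theta)
    for j in range(i + 1, M - 1):            # rightwards: j -> j+1
        b0, b1 = reg[j]
        theta[j + 1] = b0 + b1 * theta[j]    # artificial sample in j+1
    for j in range(i, 0, -1):                # leftwards: j -> j-1
        b0, b1 = reg[j - 1]
        theta[j - 1] = (theta[j] - b0) / b1  # inverted regression model
    return theta

# Illustrative map; identical regression models (rake 1, bias 0.1).
theta = np.array([0.0, 0.0, 1.0, 1.1, 0.0, 0.0])  # nodes 2,3 just updated
reg = [(0.1, 1.0)] * 5
print(lprm_actualize(theta, i=2, reg=reg))
```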
  • Three extensions are considered. The first is a method which weighs the artificial samples according to their uncertainty; the second is a way to limit the actualization to areas which have not received real samples within a predetermined time; and the third is a method to adapt the regression models online.
  • Some extensions and associated problems assume that RLS is used as update algorithm, thus the RLS will be the standard update algorithm in this subsection. At the end of the subsection, different modifications due to map representation and map dimension will be discussed.
  • Equations (46)-(48) [Milton and Arnold, 1995] give the confidence interval conf_{i,i+1} when an artificial sample is generated in LM i+1 and estimated from the level of LM i.
  • The parameter t_{α/2} is given by the Student-t distribution with respect to the number of samples N and the degree of confidence 100(1−α)%.
  • The sizes of the weights are based on the sum of the confidence intervals of the used regression models between the artificial sample and the operating point.
  • The values of the weights w used in the update must have the relationship to the sum of confidence intervals confSum given in conditions (49) below.
  • The sum of confidence intervals confSum is formed by summing all confidence intervals associated with every estimated artificial sample, from the operating point in LM 1 to the latest artificial sample in LM i+1, according to equation (50). The transformation from confSum to a weight is done with a decreasing exponential function according to equation (51).
  • The choice of the exponential function is based on the following arguments.
  • The function is well known and easily predictable; it will quickly converge to zero when confSum begins to grow, and this convergence rate is easily determined by a constant.
  • Other choices of transformations are naturally possible.
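  • A decreasing exponential weight of the kind referred to in equation (51) can be sketched as follows; the decay constant and its name are illustrative assumptions, not the patent's parameterization.

```python
import math

def artificial_sample_weight(conf_sum, gamma=2.0):
    """Map a sum of regression confidence intervals to an update
    weight: certain samples (small conf_sum) get weight near one,
    uncertain ones decay quickly towards zero."""
    return math.exp(-gamma * conf_sum)

for cs in (0.0, 0.5, 2.0):
    print(cs, artificial_sample_weight(cs))
```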
  • The choice of T_act is complex and depends on many uncertain parameters. Summarizing the trade-off: if samples on average arrive frequently, T_act should be small, and if the structural change rate is believed to be high or the regression models are uncertain in any way, T_act should be large.
  • Regression model adaptation: it is possible to adapt the regression models online, so that they can keep track of structural changes in the map. This is done by storing N pairs of heights of adjacent local models, where the levels are measured at the same time. How these height pairs are collected is described below. By replacing older stored sample pairs with new ones, the local pattern regression model can be re-estimated. The theory of estimating linear regression models is based on the Gauss-Markov assumptions, which are given below.
  • Outliers have a large leverage on the values of the model parameters, because the estimation is based on squared errors. This could have the effect that one or a few less accurate measurements severely distort the estimation. But the outliers should not be discarded completely, because they could be the result of structural changes. This problem is solved by the introduction of a limit on the distance an outlier is permitted to be from the expected value given by the regression model. If an outlier is measured outside this limit, it is discarded and an artificial measurement is placed on the current limit. Furthermore, maximum and minimum values must be set on the parameters of the regression models, or formal inequality constraints formed on the output of the regression models, to ensure stability.
  • The number of saved samples N used for the estimation of the regression models is determined by the following pros and cons.
  • The major benefits with many samples are robustness against noise and more accurate models, while the benefits with few samples are smaller memory requirements and faster adaptation to structural changes.
  • Based on the error tolerance, the time limit T_collect between the two samples of a pair can be determined.
  • The error tolerance refers here to the error in the collected sample pair due to variation in the map between the two measurements, which is defined by equation (53).
  • A third problem that can lead to incorrect estimation of the regression models, due to how the samples are collected, is if the measurement error is high. This is especially a problem when estimating models where measurements occur seldom.
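  • The re-estimation with outlier limiting described above can be sketched as follows. This is illustrative only: clamping an outlier to the limit stands in for the "artificial measurement placed on the current limit", and the stored pairs and limit value are invented.

```python
import numpy as np

def reestimate_regression(pairs, b0, b1, limit):
    """Re-estimate one local pattern regression model from N stored
    height pairs (theta_i, theta_ip1) of adjacent local models.

    Before the least-squares fit, each pair is checked against the
    current model: an outlier further than `limit` from the expected
    value is clamped to the limit, so structural changes can still
    pull the model while single bad measurements cannot distort it.
    """
    x = np.array([p[0] for p in pairs])
    y = np.array([p[1] for p in pairs])
    expected = b0 + b1 * x
    y = np.clip(y, expected - limit, expected + limit)  # clamp outliers
    A = np.column_stack([np.ones_like(x), x])
    (b0_new, b1_new), *_ = np.linalg.lstsq(A, y, rcond=None)
    return b0_new, b1_new

pairs = [(1.0, 1.1), (1.2, 1.35), (1.4, 1.5), (1.1, 3.0)]  # last is an outlier
print(reestimate_regression(pairs, b0=0.1, b1=1.0, limit=0.5))
```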
  • Figures 6a-6b show examples of possible boundaries for permitted actualization in two-dimensional look-up tables, where empty nodes represent the start point of actualization, filled nodes represent K(j1, j2) ≤ T_act, and striped nodes represent K(j1, j2) > T_act.
  • In Fig. 6a the boundary is formed by stopping actualization past the cross section of the map with respect to the newly updated LM.
  • Other geometrical boundaries are possible, e.g. forming a triangular shape from (j1, j2) with respect to (i1, i2), as shown in Figure 6b.
  • The actualization is done by initializing actualization cycles from the local models (i1, i2), (i1, i2+1), (i1+1, i2), (i1+1, i2+1), which are associated with the interpolation area of the operating point.
  • Algorithm 4.5 provides the actualization cycles in an iterative form.
  • Figure 7 depicts an example of the actualization procedure in a two-dimensional look-up table. Otherwise the algorithm is straightforwardly derived from the one-dimensional case.
  • Figure 7 shows an actualization cycle of LPRM on 2-D look-up tables, with LMs within the interpolation area of the operating point indicated by circles (o), an outer loop indicated by solid arrows, and an inner loop indicated by dashed arrows.
  • The memory requirements for the basic version of the algorithm are two parameters θ0, θ1 for each transition between neighboring LMs. Note that there is one regression model in each direction in every transition, but because the regression models are easily invertible it is sufficient to save one of them. How they are inverted is shown in Algorithm 4.6 below.
  • When the algorithm is applied to one-dimensional look-up tables, the number of transitions adds up to (M−1) and thus 2(M−1) parameters are stored in memory.
  • In two dimensions, the number of transitions in the first dimension is (M1−1)M2 and in the other dimension M1(M2−1); thus the total number of stored parameters is 2(M1(M2−1) + (M1−1)M2).
  • The extended version needs some additional stored parameters. First, every LM needs to keep track of the time K(i) since it last received a measurement sample, thus M additional variables in one-dimensional look-up tables and M1·M2 in two-dimensional tables. When online adaptation of the regression models is incorporated in the algorithm, it needs to store 2N samples for each transition, which sums up to 2N(M−1) in one-dimensional maps and 2N(M1(M2−1) + (M1−1)M2) in the two-dimensional case. The total number of stored variables and constants is thus 2(M−1) + M + 2N(M−1) in one dimension and 2T + M1M2 + 2NT, with T = M1(M2−1) + (M1−1)M2, in two dimensions. In the extended version the regression model parameters can alternatively be re-calculated each time they are needed, thereby saving memory at the expense of computation time; the expressions above give the minimum memory requirement.
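  • The stated memory counts can be collected in a small helper; the function below simply restates the expressions above for illustrative map sizes.

```python
def lprm_memory(M=None, M1=None, M2=None, N=0):
    """Minimum number of stored values for extended LPRM: regression
    parameters + per-node timers K + 2N stored samples per transition."""
    if M is not None:                    # one-dimensional map
        T = M - 1
        return 2 * T + M + 2 * N * T
    T = M1 * (M2 - 1) + (M1 - 1) * M2    # transitions in a 2-D map
    return 2 * T + M1 * M2 + 2 * N * T

print(lprm_memory(M=20, N=10))          # 1-D example: 438 stored values
print(lprm_memory(M1=10, M2=8, N=10))   # 2-D example
```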
  • The initial optimization of the regression models is done by generating samples from two maps, which constitute the a priori information for the regression model parameters. These are referred to as the first and second boundary maps, y_boundary1(x) and y_boundary2(x). They are supposed to enclose the maximum probable variation interval (min y(x,t), max y(x,t)) that the map will have during operation. These maps might be given by the start map and a probable future map measured from e.g. a used engine, or just a qualified guess. It is important that the boundary maps approximately enclose the variation interval; otherwise the accuracy of the artificial samples decreases, which is reflected by larger confidence intervals.
  • It is also preferable that the boundary maps closely demarcate the interval of the bias variations of the map. If one of the boundary maps is known to be near the center of the probable variation interval of the map, it should be moved to the probable boundary of the interval. In that case the details in the map should be linearly extrapolated with respect to the other map.
  • The offline optimization algorithm below generates samples by linear interpolation between the two boundary maps.
  • Each local model is given N samples, and these are naturally placed in the centers x(i) of the local models.
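  • The offline generation of samples by linear interpolation between the two boundary maps can be sketched as follows; the boundary maps and the pairing per transition are illustrative assumptions, not the patent's exact algorithm.

```python
import numpy as np

def boundary_samples(y1, y2, N):
    """Generate N artificial height pairs per transition by linear
    interpolation between two boundary maps (the a priori
    information), evaluated at the node coordinates.

    y1, y2 -- node heights of the first and second boundary map.
    Returns, for every transition i -> i+1, a list of N pairs used
    for the initial least-squares fit of the regression models.
    """
    alphas = np.linspace(0.0, 1.0, N)
    pairs = []
    for i in range(len(y1) - 1):
        maps = [(1 - a) * np.array([y1[i], y1[i + 1]])
                + a * np.array([y2[i], y2[i + 1]]) for a in alphas]
        pairs.append([(m[0], m[1]) for m in maps])
    return pairs

y_boundary1 = [0.0, 0.2, 0.1]   # invented boundary maps
y_boundary2 = [0.5, 0.9, 0.6]
print(boundary_samples(y_boundary1, y_boundary2, N=3))
```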
  • Example 1 will demonstrate some fundamentals of the nature of LPRMs,
  • Example 2 will evaluate and demonstrate the nature of different adaptive strategies on real world engine maps, and
  • Example 3 evaluates adaptive strategies on drive cycles on real engine maps with different levels of noise.
  • Figure 8b illustrates the end of the first simulation, where the second boundary map differs from the first boundary map by having a larger bias term.
  • The result of the second simulation is given in Figure 8c, where the sine function in the second boundary map was multiplied with a larger factor.
  • The last simulation, in Figure 8d, shows a combination of the two preceding simulations, with both a larger bias term and a larger factor multiplied on the sine function.
  • Figures 8b-d contain two boundary maps, where the map with relatively lower amplitude is referred to as boundary map 1 and the map with higher amplitude is referred to as boundary map 2.
  • The simulations in this example are done on engine maps from the control system of an IC engine.
  • The original maps were two-dimensional and gave the control system an estimated value of volumetric efficiency from a given engine speed and intake manifold pressure. Here they are projected to one-dimensional maps by giving the manifold pressure a fixed value.
  • The goal of this simulation is to evaluate how different actualization algorithms perform and how they work.
  • The simulations need a drive cycle, which should be a model of how the values of the measurement samples of engine speed could vary in a real world drive cycle.
  • The drive cycle should fulfil some requirements. Firstly, the speed has inertia, thus the driving cycle should incorporate some dynamics. Secondly, the excitation should hover around an engine speed which is common during normal driving conditions and during a short time excite a higher engine speed, which could correspond to an acceleration phase. Thirdly, more than one driving cycle should be generated, so that possible flaws can be detected.
  • The drive cycle model is realized by a moving average (MA) time series model, given in equation (55), and the engine speed is accordingly given in equation (56).
  • The MA-process is driven by white normally distributed noise with variance 1.
  • The model is designed to be simple while fulfilling the above mentioned requirements. The process reaches the higher engine speed by adding a constant to the input noise during a short time in the drive cycle.
  • The drive cycle ends by disengaging the MA-process and giving the input a stable and low engine speed, so that all cycles end in the same position, for better visual comparison.
  • The driving cycle consists of 500 measurement samples, and between samples 50 and 70 the MA-process will be fed with an additional constant in its input noise, resulting in a higher engine speed.
  • An example of a realization of the driving cycle is given in Figure 9, and Figure 10 shows the histogram of this realization, with 40 bins, which gives an approximate density function of the drive cycle.
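  • A drive cycle of the described kind can be sketched as below. The MA coefficients, base speed, scaling and end-of-cycle handling are illustrative assumptions, since equations (55)-(56) are not reproduced here; only the qualitative behavior (unit-variance white noise input, an extra constant between samples 50 and 70, a stable low end speed) follows the text.

```python
import numpy as np

def drive_cycle(S=500, burst=(50, 70), seed=0):
    """Generate a synthetic engine-speed drive cycle from a moving
    average (MA) time series driven by unit-variance white noise."""
    rng = np.random.default_rng(seed)
    b = np.array([1.0, 0.8, 0.6, 0.4, 0.2])   # assumed MA coefficients
    e = rng.standard_normal(S + len(b))
    e[burst[0]:burst[1]] += 3.0               # acceleration phase
    ma = np.convolve(e, b, mode="valid")[:S]  # MA-process output
    speed = 2000.0 + 300.0 * ma               # assumed mapping to rpm
    speed[-20:] = 1200.0                      # stable low end speed
    return speed

cycle = drive_cycle()
print(cycle[:5], cycle.min(), cycle.max())
```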
  • The following simulations compare the TRT with the LPRM (with re-estimation of the regression models). No measurement noise will be added and the DA will be used as update algorithm.
  • Two different maps will be tested in the simulations.
  • The first map, "Map 1", is a perfect match with the boundary maps used as a priori information, as indicated in Figures 11a-d and 12a-d.
  • The other map, "Map 2", is an alteration of Map 1 which does not fully agree with the boundary maps, as indicated in Figures 13a-d and 14a-d.
  • The real map, from which the samples are generated, will rise from a level close to the lower boundary map to a level close to the higher boundary map.
  • The map will then decrease to its initial position during the remaining 200 samples.
  • The movement of Map 1 is done by linear interpolation between the initial maps using 300 equidistant steps up and 200 equidistant steps down.
  • The movement of Map 2 is done with the same method, except that it is altered from Map 1 by the addition of a sine function.
  • The initial estimated map will have the same values as the lower boundary map.
  • The parameter settings of the simulation are given in the Appendix.
  • Figures 14a-d give a good illustration of the effects of re-estimation of the regression models.
  • A sum of the errors between the real map and the estimated map, in 200 equidistant points over the map's total range, measured after each received sample, will be used as a performance measure (Table 5.1).
  • The numbers in Table 5.1 show the average of this error sum over five simulations in each given example. In addition to the examples depicted in Figures 11a-d to 14a-d, Table 5.1 also gives the average error sum for the case of no actualization and for the LPRM with no re-estimation allowed.
  • The performance measure used here is somewhat blunt, because it does not weight the errors by the frequency with which they are read.
  • The map is more frequently read at normal cruising speeds than at e.g. extremely high engine speeds.
  • A larger radius in the TRT than used in the simulations above gave a slightly smaller error sum, but it also resulted in more significant destructive learning in the range of normal cruising speed. Consequently the radius of the tent was kept smaller than its optimum with respect to the sum of errors.
  • This simulation example will compare adaptive strategies at different levels of measurement noise on the drive cycle from the previous example, and this will be done on both maps from the same example. All the actualization methods in Table 5.1 will be evaluated with both RLS and DA as update algorithms; each combination is simulated five times, and from that an average error sum is formed. No spatial filtering was done in the simulations.
  • The noise level should also be compared with the change rate of the map and the variation interval of the map.
  • The change rate in the simulations is probably much higher than in real world applications with slow variations.
  • The map sweeps the interval between the two boundary maps in just 500 steps. This situation should give benefits to DA, with its high plasticity, which is verified in the simulations where DA performs surprisingly well compared to RLS, for which the forgetting factor was set to the relatively low value 0.9. Merely at very high noise levels does RLS typically perform better than DA.
  • The simulations also showed that high spatial frequencies can appear with the RLS if the initial variance elements are large, thus they should be given relatively small values.
  • Figure 15 shows a schematic illustration of a vehicle comprising a combustion engine (E), a driveline (D) and an electronic control unit (ECU) for controlling said combustion engine (E) and said driveline (D).
  • Sensors (10, 11, 12) for measuring at least one engine or driveline parameter are connected to the electronic control unit (ECU) in order to provide measured samples.
  • The electronic control unit (ECU) is provided with maps of measured or estimated samples for at least one of the said engine or driveline parameters. The maps are adapted using the method described above.

Landscapes

  • Engineering & Computer Science (AREA)
  • Chemical & Material Sciences (AREA)
  • Combustion & Propulsion (AREA)
  • Mechanical Engineering (AREA)
  • General Engineering & Computer Science (AREA)
  • Combined Controls Of Internal Combustion Engines (AREA)

Abstract

The invention relates to a method for adapting a combustion engine control map, which map comprises a set of nodes where each node is represented by a local model. The method involves the steps of receiving a measured sample of an engine parameter at an operating point, generating artificial samples in the coordinates of local models located adjacent the operating point, and updating the local models on both sides of the operating point using an update algorithm. This allows an engine control map to be updated over a relatively large area or region.

Description

    TECHNICAL FIELD
  • According to a preferred embodiment, the invention relates to a method for adapting a combustion engine control map, which map comprises a set of nodes where each node is represented by a local model. Algorithms may be used for generating artificial samples and updating the engine control map based on a measured sample of an engine parameter.
  • BACKGROUND ART
  • The automotive industry faces ever increasing demands to lower emissions and fuel consumption of the vehicles it produces. As these demands continue to grow, it becomes both cost effective and necessary to improve common engineering solutions. One important means to lower emissions and fuel consumption is to improve the control system of the engines.
  • Contemporary engine control systems contain a considerable number of static maps. The maps are linear or nonlinear functions of one or several variables, which often describe a physical phenomenon, or a function with no apparent physical interpretation used by the control system. The static maps are used by e.g. nonlinear controllers and static feed forward controllers. Some static maps are adapted online with some optimization algorithm acting on an incoming sample. These samples are referred to as measurement samples, although they might not be generated by a physical measurement. The reasons for the need of online adaptation are manifold. Three important reasons are aging of engines, mechanical differences between engines, and that the map can be dependent on many variables which it is not practically possible to include as input variables. The inclusion of many variables is not practical, partly due to an undesired necessity for high dimensional maps and partly due to uncertainty of how the variables influence the map.
  • The object of the invention is to improve the adaptation process to achieve more accurate maps. Improvements of adaptive static maps in accordance with the invention will in turn result in an improvement of the control system and consequently lowered emissions and fuel consumption.
  • DISCLOSURE OF INVENTION
  • The above problems are solved by a method according to claim 1 and a vehicle comprising an electronic control unit for implementing said method, according to claim 19.
  • According to a preferred embodiment, the invention relates to a method for adapting a combustion engine or driveline control map, which map is expressed by a basis function equation. The method involves the steps of:
    • receiving a measured sample of an engine parameter at an operating point,
    • updating local models adjacent to the operating point using an update algorithm,
    • generating artificial samples in coordinates of local models located remote from the operating point and the said adjacent local models, and
    • updating the said remote local models using an update algorithm.
  • Artificial samples are generated in coordinates of local models in a map expressed by a basis function equation, which is described in connection with Equation 1 below.
  • In a preferred embodiment, generating artificial samples can be done using an actualizing algorithm, such as a local pattern regression model (LPRM).
  • According to a preferred example of this embodiment, the method involves updating the local models in a map represented by a look-up table. The local models can be updated using a recursive least squares (RLS) algorithm, a direct adjustment (DA) algorithm, a normalized least mean squares (NLMS) algorithm or a least mean squares (LMS) algorithm.
  • According to a further preferred example of the embodiment, the method involves updating the local models in a map represented by a local linear neuro-fuzzy model (LLNFM). In this case the coordinate of a local model i (LM i) is simply given by the center of the validity function of such a local linear model. The local models can be updated using a recursive least squares (RLS) algorithm, a normalized least mean squares (NLMS) algorithm or a least mean squares (LMS) algorithm.
  • In an alternative embodiment, generating artificial samples can be done using a tent roof tensioning (TRT) algorithm.
  • According to a first example of this embodiment, the method involves updating the local models in a map represented by a local linear neuro-fuzzy model (LLNFM). The local models can be updated using a recursive least squares (RLS) algorithm, a normalized least mean squares (NLMS) algorithm or a least mean squares (LMS) algorithm.
  • According to a further example of this embodiment, the method involves updating the local models in a map represented by a look-up table. The local models can be updated using a recursive least squares (RLS) algorithm, a normalized least mean squares (NLMS) algorithm or a least mean squares (LMS) algorithm.
  • In addition, the invention relates to a vehicle comprising an electronic control unit for controlling a combustion engine or a vehicle driveline and sensors for measuring at least one engine or driveline related parameter, where the electronic control unit is provided with a map of measured or estimated samples for the said at least one parameter. The map provided in the electronic control unit is adapted using the above method.
  • The invention also relates to a combustion engine comprising an electronic control unit for controlling said combustion engine and sensors for measuring at least one engine parameter. The electronic control unit may be provided with a map of measured or estimated samples for at least one of the said engine parameters. The map or maps may be adapted using the method described above.
  • Static maps with online adaptation will be viewed in a comprehensive perspective and thus referred to as adaptive static maps, where the different components of such maps are reviewed with the aim of designing better adaptive maps.
  • 1 GENERAL BACKGROUND
  • Three major components can be distinguished in the design of adaptive static maps, these are given below.
    • Map representation
    • Update algorithm (local adaptation)
    • Actualization (spreading)
  • By static maps are meant nonlinear memoryless functions that perform a mapping from input space X to output space Y, i.e. ŷ(x): X → Y, where X ⊂ R^D, Y ⊂ R and D denotes the dimension of the input space and the map. Moreover, no extrapolation outside the defined input space is considered. The output space is always one-dimensional whereas the dimension of the input space is arbitrary, though only one and two dimensions will be considered here.
  • Some general theory on which nonlinear static maps rely is introduced below. As mentioned above, the map will receive measurement samples during operation. The measurement samples consist of input-output pairs (x_i, y_i) and the learning algorithm has no control over the operating point of these; the learning is thus passive. The input coordinate of the latest sample is also referred to as the operating point ξ = x_i. The process which adapts the map with regard to these samples is referred to as online adaptation. Initial parameter optimization and online adaptation are distinguished by the terms optimization and adaptation, respectively.
  • The measurement samples can be received in three different ways with respect to time:
    1. Periodically incoming samples
    2. Periodically incoming samples except for non-steady state situations
    3. Sporadically incoming samples
  • Some of the algorithms must be adjusted depending on which of these sampling situations is given by the application. Here the second type of sampling situation will be considered.
  • The online adaptation with respect to these samples is done by an update algorithm. In most map architectures the update algorithm acts locally on the map. Hence update algorithms applied on measurement samples will only adjust the map in a small region around the sample. This problem is aggravated by the fact that in many applications the measurement samples are rarely or never received in large areas of the input space. Hence one problem to be solved by the invention relates to development of adaptive static maps which adapt larger areas of the map than the known methods which only update the map locally around the measurement samples.
  • The primary method that will be considered here for solving this problem is by employing actualization algorithms, which spread the adjustment to larger regions of the map. Hence the main purpose of the invention is to provide improved actualization algorithms. The actualization algorithms should fulfil the following requirements:
    • Spread adjustments to larger areas of the map
    • To integrate some form of a priori-information of possible variations in the map
    • The algorithm should not depend explicitly on the form of the map
    • It should not spread information to undesired regions of the map
    • Reasonable memory and computation time requirements
    • It should be scalable
    • It must remain stable, i.e. the map must remain bounded
  • In the description below, measured and estimated maps are distinguished by denoting the measured map with y(x) and the estimated map with ŷ(x). It is also assumed that the measured map y(x) includes additive noise, i.e. y = y_u + n, where the noise free measured map is denoted y_u(x). Hence the goal of a map is to be as close to y_u(x) as possible.
  • Two major time variables are used; t represents a continuous time variable while k represents a discrete time variable that counts the order of the incoming samples. Variables and functions that are indexed or followed by brackets with k or t always refer to time, discrete and continuous respectively. Indexation with i or j refer to distinct variables, functions or values.
  • A list of abbreviated terms, constants, functions, sets and variables used in this text are given under the section "Notations" below.
  • 2 - MAP REPRESENTATION
  • Static non-linear functions can be represented with many different model architectures. This chapter describes two different architectures which are suitable for adaptive static maps. The first is classic look-up tables and the other is called local linear neuro-fuzzy models (LLNFM) which is a modern architecture based on fuzzy logic.
  • Many map representations can be written in the basis function framework indicated in equation (1), where the output is a sum of basis functions Φi(x, θi(nl)) weighted with functions Li(x, θi) which are linear in their parameters θ; these are referred to as linear functions. The values of the basis functions are determined by the input x and the parameters θi(nl), in which the basis functions are in general nonlinear. Various map representations differ from each other by having different types of linear models and basis functions. The basis function is often written in an abbreviated form without its parameters. With this framework it is clear that the parameters of nonlinear map representations can be separated into two categories: the basis function parameters θ(nl) and the linear model parameters θ. This convenient fact is made use of in the optimization and adaptation of the map. The map is expressed by the basis function equation

    $$\hat{y}(x) = \sum_{i=1}^{M} L_i(x,\theta_i)\,\Phi_i\!\big(x,\theta_i^{(nl)}\big) \tag{1}$$

    where
    θi  is a height parameter of node i,
    Li(x, θi)  is a linear function in the basis function framework,
    Φi(x, θi(nl))  is a basis function in the basis function framework, where (nl) indicates that the function is non-linear, and
    M  is the number of nodes in a one-dimensional map or in a LLNFM of arbitrary dimension.
  • Perhaps the most important performance measure of a map representation is how well it approximates the measured map. This measure is defined as the model error and is given in equation (2), where yu represents the noise-free measured map. It can be estimated from the training data used for the optimization of the map, which will be discussed next. The measurement samples used for estimation of the map are assumed to have additive noise n with variance σn², i.e. y = yu + n. The model error can further be decomposed into bias error and variance error according to equation (3). This is described in Nelles, O. (2001). Nonlinear System Identification. Springer-Verlag, Berlin Heidelberg, hereinafter referred to as [Nelles].

    $$e_{model} = y_u(x) - \hat{y}(x) \tag{2}$$
    $$e_{model}^2 = e_{bias}^2 + e_{var} \tag{3}$$
  • The bias error is due to the inflexibility of the model. The flexibility of the model is determined by how well the structure of the map representation describes the measured map, and it grows with the number of parameters in the model. The variance error arises from having parameter values which deviate from their optimal values. The variance error is reduced by having a large number S of training data and by minimizing the variance σn² of the noise in the training samples; furthermore, the variance error increases with the number of parameters in the model. Equation (4) gives a general approximate relation of how these variables influence the variance error; it holds generally regardless of map representation, as described in [Nelles].

    $$e_{var} \approx \sigma_n^2\,\frac{\#parameters}{S} \tag{4}$$
  • Thus determining the number of parameters in a model is based on a trade-off between the bias and the variance error. This trade-off will not be addressed in further detail here. It is, however, important to realize what can be accomplished with online adaptation. Every adaptation algorithm considered here will only change the values of the linear model parameters in the map representation of equation (1). Hence the minimization of the variance error with respect to the parameters θ of the linear function is the conceivable overall goal of the adaptation algorithms. Variance error is from here on always considered with respect to the estimation of the linear model parameters.
  • Deriving an analytical expression for the variance error of the model output ŷ(x) due to the error in the estimation of the linear model parameters θ is done by taking the covariance of the output, equation (5), where each diagonal element gives the variance error of the model output at every sample (xi, yi) used in the estimation of the parameters θ [Nelles]. Note that X contains the regressors used for the estimation of the parameters θ; these are general regressors and are unique to the specific map architecture. The size of the diagonal elements of cov{θ} gives the variance error of the parameters of the linear function.

    $$e_{var} = \operatorname{cov}\{\hat{y}\} = E\Big\{\big(\hat{y}-E\{\hat{y}\}\big)\big(\hat{y}-E\{\hat{y}\}\big)^T\Big\} = E\Big\{X\big(\theta-E\{\theta\}\big)\big(\theta-E\{\theta\}\big)^T X^T\Big\} = X \operatorname{cov}\{\theta\}\, X^T \tag{5}$$
  • 2.1 Look-Up Tables
  • The most widely used map representation in industrial applications is the look-up table [Nelles]. Look-up tables consist of interpolation nodes which are distributed on a grid; each node is located by a coordinate c and is associated with a height θ. The output of a look-up table is given by interpolation between the nodes which spatially surround the input coordinate in each dimension and their associated heights. Linear interpolation is used in the one-dimensional case, see equation (6) and Figures 1a-1b, whereas area (bilinear) interpolation is used in two-dimensional maps, see equation (7) and Figure 2, where the areas are given by equations (8)-(12).

    $$\hat{y}(x) = \theta_i\,\frac{c_{i+1}-x}{c_{i+1}-c_i} + \theta_{i+1}\,\frac{x-c_i}{c_{i+1}-c_i} \tag{6}$$
    $$\hat{y}(x) = \theta_{i,j}\frac{A_{i+1,j+1}}{A} + \theta_{i+1,j}\frac{A_{i,j+1}}{A} + \theta_{i,j+1}\frac{A_{i+1,j}}{A} + \theta_{i+1,j+1}\frac{A_{i,j}}{A} \tag{7}$$
    $$A_{i+1,j+1} = (c_{1,i+1}-x_1)(c_{2,j+1}-x_2) \tag{8}$$
    $$A_{i,j+1} = (x_1-c_{1,i})(c_{2,j+1}-x_2) \tag{9}$$
    $$A_{i+1,j} = (c_{1,i+1}-x_1)(x_2-c_{2,j}) \tag{10}$$
    $$A_{i,j} = (x_1-c_{1,i})(x_2-c_{2,j}) \tag{11}$$
    $$A = (c_{1,i+1}-c_{1,i})(c_{2,j+1}-c_{2,j}) \tag{12}$$
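  • As an illustration of equations (7)-(12), the following minimal Python sketch evaluates a two-dimensional look-up table with area interpolation on a uniform grid; the function and variable names are hypothetical and not part of the patent text:

    import numpy as np

    def lut2d_output(theta, c1, c2, x1, x2):
        # Bilinear (area) interpolation in a 2-D look-up table, eq. (7)-(12).
        # theta: node heights theta[i, j]; c1, c2: node coordinates per dimension;
        # (x1, x2): operating point, assumed to lie inside the grid.
        i = min(max(np.searchsorted(c1, x1) - 1, 0), len(c1) - 2)
        j = min(max(np.searchsorted(c2, x2) - 1, 0), len(c2) - 2)
        # Partial areas, eq. (8)-(11), and total interpolation area, eq. (12)
        A_ip1_jp1 = (c1[i + 1] - x1) * (c2[j + 1] - x2)
        A_i_jp1 = (x1 - c1[i]) * (c2[j + 1] - x2)
        A_ip1_j = (c1[i + 1] - x1) * (x2 - c2[j])
        A_i_j = (x1 - c1[i]) * (x2 - c2[j])
        A = (c1[i + 1] - c1[i]) * (c2[j + 1] - c2[j])
        # Output as the area-weighted sum of the four surrounding heights, eq. (7)
        return (theta[i, j] * A_ip1_jp1 + theta[i + 1, j] * A_i_jp1
                + theta[i, j + 1] * A_ip1_j + theta[i + 1, j + 1] * A_i_j) / A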
  • Look-up tables can be written in the basis function framework. The height parameters correspond to the linear functions, and the spatial interpolation between the nodes corresponds to the basis functions. Only the basis functions Φi,j that are within the active interpolation area are non-zero, see equation (13), where a uniform two-dimensional look-up table is assumed. Compare equation (7) above and the formal basis function form given by equation (14), which both describe the output of a two-dimensional look-up table.

    $$\Phi_{i,j}(x) = \left(1-\frac{|c_{1,i}-x_1|}{c_{1,2}-c_{1,1}}\right)\left(1-\frac{|c_{2,j}-x_2|}{c_{2,2}-c_{2,1}}\right),\quad \forall (i,j):\ |c_{1,i}-x_1| \le c_{1,2}-c_{1,1}\ \wedge\ |c_{2,j}-x_2| \le c_{2,2}-c_{2,1};\qquad \Phi_{i,j}(x)=0,\ \text{else} \tag{13}$$
    $$\hat{y}(x) = \sum_{i=1}^{M_1}\sum_{j=1}^{M_2} \theta_{i,j}\,\Phi_{i,j}(x,c_{1,i},c_{2,j}) \tag{14}$$
  • The nodes are usually uniformly distributed in the input space, but it is also possible to have a non-uniform distribution [Nelles]. From here on, when referring to look-up tables, a uniform distribution is assumed. It is also assumed that all nodes are fixed a priori. The optimization of the location and number of nodes will not be addressed here.
  • Determining the initial heights θ of the map is a purely linear optimization task. It is done by minimizing the sum of squared errors in equation (15) over all S measurement samples and is solved by the least squares algorithm of equation (16) [Nelles], with the regressor matrix, output vector and parameter vector given in equation (17).

    $$\sum_{i=1}^{S}\big(y_i-\hat{y}_i\big)^2 \tag{15}$$
    $$\theta = \big(X^TX\big)^{-1}X^Ty \tag{16}$$
    $$X=\begin{bmatrix}\Phi_1(x_1) & \Phi_2(x_1) & \cdots & \Phi_M(x_1)\\ \Phi_1(x_2) & \Phi_2(x_2) & & \vdots\\ \vdots & & \ddots & \\ \Phi_1(x_S) & \Phi_2(x_S) & \cdots & \Phi_M(x_S)\end{bmatrix},\qquad y=\begin{bmatrix}y_1\\ y_2\\ \vdots\\ y_S\end{bmatrix},\qquad \theta=\begin{bmatrix}\theta_1\\ \theta_2\\ \vdots\\ \theta_M\end{bmatrix} \tag{17}$$
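  • A minimal Python sketch of this initial optimization, assuming a uniform one-dimensional look-up table with triangular interpolation basis functions (all names are hypothetical illustrations, not part of the patent text):

    import numpy as np

    def basis_matrix(c, xs):
        # Regressor matrix X of eq. (17): row s holds the linear-interpolation
        # basis values of all M nodes, evaluated at training input xs[s].
        delta = c[1] - c[0]  # uniform node spacing assumed
        return np.maximum(0.0, 1.0 - np.abs(xs[:, None] - c[None, :]) / delta)

    def fit_heights(c, xs, ys):
        # Least squares solution theta = (X^T X)^-1 X^T y, eq. (15)-(16)
        X = basis_matrix(c, np.asarray(xs, dtype=float))
        theta, *_ = np.linalg.lstsq(X, np.asarray(ys, dtype=float), rcond=None)
        return theta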
  • At the beginning of the current section, a general expression for the variance error of the output due to the estimation of the parameters θ in the linear function was given in equation (5). Applying this expression to the output of a look-up table whose linear parameters θ are estimated by equation (16) yields equation (18).

    $$e_{var} = X\operatorname{cov}\{\theta\}X^T = X\big(X^TX\big)^{-1}X^T\,E\{nn^T\}\,X\big(X^TX\big)^{-1}X^T = \sigma_n^2\,X\big(X^TX\big)^{-1}X^T \tag{18}$$

    Notice that y − E{yu + n} = y − yu = n, and in the last equality it is assumed that the noise is white, which implies E{nn^T} = σn²I. The diagonal of σn²(X^TX)^{-1} in the last expression of equation (18) gives the variance error of the estimated parameters θ.
  • In [Nelles] several properties of look-up tables are given; some of the more relevant are stated here. Three important benefits of using look-up tables are their high evaluation speed, the fast optimization of the linear model parameters, and their simple implementation. Two negative properties are non-smoothness and that they suffer severely from the curse of dimensionality. By curse of dimensionality is meant that the memory requirements grow quickly with the dimension of the map.
  • 2.2 Local Linear Neuro-Fuzzy Models
  • This map representation has been developed in parallel in various scientific fields [Nelles] and is generally less well known than look-up tables. Readers who are familiar with fuzzy logic will recognize that local linear neuro-fuzzy models (LLNFM) are equivalent to first-order Takagi-Sugeno models with axis-orthogonal Gaussian membership functions and the product operator used for conjunction [Nelles]. In order to clarify some concepts relating to fuzzy logic, the LLNFMs are described in detail below.
  • LLNFMs can also be written in the basis function framework given above. The basis functions in LLNFMs are normalized Gaussian functions, see equation (19). These Gaussian functions are also referred to as validity functions, because they give the degree to which their respective local models affect the output. By normalized is meant that the sum of all M basis functions always adds up to one, see equation (20). It is also assumed that the Gaussian functions are axis-orthogonal, i.e. the parameters which determine the width and position of the function in each dimension are independent of each other.

    $$\Phi_i(x) = \frac{\mu_i(x)}{\sum_{j=1}^{M}\mu_j(x)},\qquad \mu_i(x) = \exp\!\left(-\frac{1}{2}\left[\left(\frac{x_1-c_{i1}}{\sigma_{i1}}\right)^2+\cdots+\left(\frac{x_D-c_{iD}}{\sigma_{iD}}\right)^2\right]\right) \tag{19}$$
    $$\sum_{i=1}^{M}\Phi_i(x) = 1,\qquad \forall x \tag{20}$$
  • Here ci,j is the center position of the validity function i in dimension j and σ i,j determines the width of validity function i in dimension j.
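  • A minimal Python sketch of the normalized, axis-orthogonal Gaussian validity functions of equations (19)-(20) (names hypothetical, not part of the patent text):

    import numpy as np

    def validity_functions(x, centers, widths):
        # x: input coordinate, shape (D,); centers, widths: shape (M, D).
        # Returns Phi of shape (M,), which sums to one, eq. (20).
        z = (x[None, :] - centers) / widths          # standardized distances
        mu = np.exp(-0.5 * np.sum(z ** 2, axis=1))   # unnormalized Gaussians, eq. (19)
        return mu / np.sum(mu)                       # normalization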
  • The linear functions in the basis function framework are here made up of local linear models (LLM) in the input coordinate x, see equation (21). Forming the basis function representation, the output of the LLNFM results in equation (22). The Gaussian validity functions are always non-zero; thus all the LLMs always contribute to the output, although to a varying degree with respect to the input coordinate.

    $$L_i(x,\theta_i) = \theta_{i0} + \theta_{i1}x_1 + \cdots + \theta_{iD}x_D \tag{21}$$
    $$\hat{y}(x) = \sum_{i=1}^{M}\big(\theta_{i0} + \theta_{i1}x_1 + \cdots + \theta_{iD}x_D\big)\,\Phi_i(x,c_i,\sigma_i) \tag{22}$$
  • In the general basis function framework, the model parameters can be divided into two categories: those in the basis function and those in the linear function. The parameters in the linear function are clearly the LLM parameters θi,j, while the parameters in the basis function are the center positions ci,j of the validity functions and their widths σi,j. This is advantageous because once the validity functions are specified, which is done by ci,j and σi,j, the linear model parameters are easily optimized with least squares.
  • There are two different approaches for the optimization of the linear model parameters: a global and a local one. The global approach optimizes all linear model parameters with respect to all the measurement samples simultaneously. The local approach neglects the overlapping of the validity functions and optimizes the local linear models separately. In [Nelles] the two approaches are compared: the global approach has a smaller bias error, while the local one has a smaller variance error, has lower computational complexity, O(MD³) compared to O(M³D³), and is more robust against noise. The same source states that the local approach is superior to the global variant in most applications; consequently local optimization will be considered from here on. The global approach is particularly unsuitable for online adaptation, which is the focus here, because of its high computational complexity and its less robust behavior (for more about online adaptation of LLNFM, see Section 3.3). The local estimation is derived from the loss function in equation (23) for LLM i over all S sample pairs (xj, yj). Note that the errors are weighted with the current validity functions. This optimization problem is solved with weighted least squares, equation (24), where the weight matrix, regressor matrix, output vector and parameter vector are given by equation (25). Observe that the regressor matrices are independent of i, since all data samples are used in the estimation of every LLM.

    $$J_i = \sum_{j=1}^{S}\Phi_i(x_j)\Big(y_j - \begin{bmatrix}1 & x_{j1} & \cdots & x_{jD}\end{bmatrix}\begin{bmatrix}\theta_{i0}\\ \theta_{i1}\\ \vdots\\ \theta_{iD}\end{bmatrix}\Big)^2 \tag{23}$$
    $$\theta_i = \big(X^TQ_iX\big)^{-1}X^TQ_iy \tag{24}$$
    $$Q_i=\begin{bmatrix}\Phi_i(x_1) & 0 & \cdots & 0\\ 0 & \Phi_i(x_2) & & \vdots\\ \vdots & & \ddots & 0\\ 0 & \cdots & 0 & \Phi_i(x_S)\end{bmatrix},\quad X=\begin{bmatrix}1 & x_{11} & \cdots & x_{1D}\\ 1 & x_{21} & \cdots & x_{2D}\\ \vdots & & & \vdots\\ 1 & x_{S1} & \cdots & x_{SD}\end{bmatrix},\quad y=\begin{bmatrix}y_1\\ y_2\\ \vdots\\ y_S\end{bmatrix},\quad \theta_i=\begin{bmatrix}\theta_{i0}\\ \theta_{i1}\\ \vdots\\ \theta_{iD}\end{bmatrix} \tag{25}$$
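  • A minimal Python sketch of the local weighted least squares estimation of equations (23)-(25), reusing validity_functions from the sketch above (names hypothetical, not part of the patent text):

    import numpy as np

    def fit_llm_local(xs, ys, centers, widths):
        # Fit each LLM separately; every sample is weighted with the LLM's
        # validity value at that sample, eq. (23)-(25).
        xs = np.asarray(xs, dtype=float)   # shape (S, D)
        ys = np.asarray(ys, dtype=float)   # shape (S,)
        S, D = xs.shape
        M = centers.shape[0]
        X = np.hstack([np.ones((S, 1)), xs])  # regressors [1, x_1 .. x_D]
        Phi = np.array([validity_functions(x, centers, widths) for x in xs])
        theta = np.zeros((M, D + 1))
        for i in range(M):
            Xw = X * Phi[:, i][:, None]    # Q_i X, with Q_i diagonal
            # theta_i = (X^T Q_i X)^-1 X^T Q_i y, eq. (24)
            theta[i] = np.linalg.solve(X.T @ Xw, Xw.T @ ys)
        return theta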
  • If the widths σi,j of the validity functions are small, the map will have short transition phases between the LLMs and thus less smooth steps occur in the map. On the other hand, if they are large, the function will lose local accuracy. The center coordinates must be distributed by a structure optimization algorithm. A fast algorithm for this task, called the Local Linear Model Tree (LOLIMOT), is presented in [Nelles]. It optimizes the structure of the map by clustering the input space incrementally. Because the emphasis here is placed on online adaptation, where only the linear model parameters are adapted, the structure optimization will not be pursued in further detail.
  • From the general expression of the variance error of the model output given in equation (5), an expression for the LLNFM follows in equation (26), where it is assumed that the estimation is done with the local method given above [Nelles]. The parameter variance error for each LLM is given in equation (27), where the last equality is based on the assumption of white noise. The regressor and weight matrices are given by equation (25).

    $$e_{var} = \operatorname{cov}\{\hat{y}\} = \sum_{i=1}^{M}Q_i\operatorname{cov}\{\hat{y}_i\} = \sum_{i=1}^{M}Q_iX\operatorname{cov}\{\theta_i\}X^T \tag{26}$$
    $$\operatorname{cov}\{\theta_i\} = \big(X^TQ_iX\big)^{-1}X^TQ_i\,E\{nn^T\}\,Q_iX\big(X^TQ_iX\big)^{-1} = \sigma_n^2\,\big(X^TQ_iX\big)^{-1}X^TQ_iQ_iX\big(X^TQ_iX\big)^{-1} \tag{27}$$
  • Some relevant beneficial properties of the LLNFM are fast linear model parameter optimization, easily controlled smoothness of the map, and a low curse of dimensionality [Nelles]. Hence in applications with high-dimensional maps the LLNFM is a better choice than a look-up table if memory requirements are crucial. One relevant negative property is that LLNFMs only have medium evaluation speed [Nelles]. Look-up tables are by far the more frequently used in applications; thus look-up tables will be the most considered map representation in the remaining chapters.
  • 3 - UPDATE ALGORITHMS
  • In Section 2 above it was explained that the parameters in the map representations can be separated into two categories: basis function parameters and linear model parameters. Due to this fact, and the fact that linear optimization methods are robust, fast and easy to implement, only the linear model parameters will be adapted during online operation. This further implies that only the variance error of the map with respect to the linear model parameters is reduced by the online adaptation.
  • When look-up tables are updated, the heights of the two nearest nodes in each dimension change their values, see also Figures 1a, 1b and 2. When LLNFMs are updated, only the linear model parameters of those LLMs whose validity function is larger than a given threshold in the current operating point of the sample are updated, i.e. Φi(x) > Φthr.
  • 3.1 Direct Adjustment
  • The most straightforward way to update a look-up table is to change the values of the height parameters so that the map has the same value in the operating point x = ξ as the new measurement, according to equation (28).
  • This update algorithm is the one most often used in applications with look-up tables [Nelles]. Here the height parameters of the current interpolation area are given the same value as the new sample. Note, however, that when a sample is received precisely on the coordinate of a height parameter, only that parameter is updated. This feature is used for the actualization algorithms presented in Section 4.

    $$\hat{y}_{k+1}(x_i) = \begin{cases} y_k(\xi), & \text{for } x_i = \xi\\ \hat{y}_k(x_i), & \text{for } x_i \ne \xi \end{cases} \tag{28}$$
  • According to the stability/plasticity dilemma there is a trade-off in learning systems between the speed of the adaptation ("plasticity") and the ability of good noise attenuation ("stability") [Nelles]. In the direct adjustment (DA) algorithm the plasticity is maximized and the stability is minimized, because all earlier measurements are discarded while the newest sample alone determines the current estimate. The estimated variance error of a look-up table at the coordinate of the latest measurement sample (xi, yi), immediately after the update with DA, is simply the variance of the measurement noise of the sample, i.e. cov{ŷk+1(xi)} = σn². Hence the algorithm is unsuitable in applications where the noise level is significant. Furthermore, when actualization methods are applied, the error in the measurement will be spread to large areas of the map. However, its evident advantages are extremely low computational requirements and simplicity.
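  • A minimal Python sketch of the DA update on a one-dimensional look-up table (names hypothetical; the operating point is assumed to lie within the grid):

    import bisect

    def direct_adjustment(theta, c, xi, y):
        # Set the heights of the current interpolation area to the new
        # measurement y; a sample exactly on a node updates only that node,
        # eq. (28). xi is assumed to lie within [c[0], c[-1]].
        i = bisect.bisect_right(c, xi) - 1
        theta[i] = y
        if c[i] != xi and i + 1 < len(c):  # sample between nodes i and i+1
            theta[i + 1] = y
        return theta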
  • 3.2 Normalized Least Mean Squares
  • The most commonly used update algorithm for online adaptation in general is the normalized least mean squares (NLMS) algorithm. This is described in Vogt, M., Müller, N., and Isermann, R. (2004). On-Line Adaptation of Grid-Based Look-up Tables Using a Fast Linear Regression Technique. Journal of Dynamic Systems, Measurement, and Control, December 2004, Vol. 126, hereinafter referred to as [Vogt et al., 2004]. It is a linear first-order optimization method and it has very low computational requirements [Nelles]. The update algorithm given in equation (29) is applied to the height parameter of node i in a one-dimensional look-up table, where the learning rate η must be set within 0 < η < 2 [Vogt et al., 2004]. If the measurement sample is located between nodes i and i+1 in a one-dimensional look-up table, both nodes i and i+1 are updated according to equation (29). This is done analogously in two-dimensional look-up tables, where the four nodes which belong to the current interpolation area are updated. If the denominator is omitted, the algorithm is simply called least mean squares.

    $$\theta_i(k+1) = \theta_i(k) + \eta\,\varepsilon(k)\,\frac{\Phi_i(x(k))}{\sum_{j=1}^{M}\Phi_j^2(x(k))},\qquad \varepsilon(k) = y_k(\xi) - \hat{y}_k(\xi),\quad x(k)=\xi \tag{29}$$
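  • A minimal Python sketch of the NLMS update of equation (29) (names hypothetical; theta is assumed to be a NumPy array of node heights):

    import numpy as np

    def nlms_update(theta, phi, y, y_hat, eta=0.5):
        # phi: basis function values of all nodes at the operating point
        # (only the current interpolation area is non-zero);
        # eta: learning rate, 0 < eta < 2.
        eps = y - y_hat                              # residual at the operating point
        theta += eta * eps * phi / np.dot(phi, phi)  # normalized gradient step, eq. (29)
        return theta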
  • 3.3 Recursive Least Squares
  • 3.3.1 Description
  • The recursive least squares (RLS) algorithm is a linear second-order optimization method [Nelles]. It is summarized in equations (30)-(32), where X and θ are general regressors and parameters, respectively. Its time complexity is of the order O(#parameters²), and it is here extended with a forgetting factor λ and a weight factor w [Nelles].

    $$\theta(k) = \theta(k-1) + \gamma(k)\,\varepsilon(k),\qquad \varepsilon(k) = y_k(\xi) - X^T(k)\,\theta(k-1) \tag{30}$$
    $$\gamma(k) = \frac{P(k-1)X(k)}{X^T(k)P(k-1)X(k) + \lambda/w(k)} \tag{31}$$
    $$P(k) = \frac{1}{\lambda}\big(I - \gamma(k)X^T(k)\big)P(k-1) \tag{32}$$
  • The forgetting factor λ enables the algorithm to follow time-variant processes. Its value is determined by the stability/plasticity trade-off: good noise attenuation (large λ) versus fast learning (small λ). Usually the value is set between 0.9 and 1. As mentioned above, the algorithm also has a weight factor which determines how much a sample should influence the estimation.
  • If the algorithm is given many samples in the same operating point, equation (32) will reduce to approximately equation (33). This is described in Åström, K.J. and Wittenmark, B. (1989). Adaptive Control. Addison-Wesley Publishing Company, hereinafter referred to as [Åström and Wittenmark, 1989]. This makes the covariance matrix P grow at an exponential rate. A simple way to overcome this problem is to include a restriction in the algorithm which stops the update when the residual ε or the product PX becomes smaller than a dead-zone [Åström and Wittenmark, 1989].

    $$P(k) = P(k-1)/\lambda \tag{33}$$
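  • A minimal Python sketch of the RLS update of equations (30)-(32), including the weight factor, the forgetting factor, and a simple dead-zone against the covariance growth described above (names and default values are hypothetical illustrations):

    import numpy as np

    def rls_update(theta, P, x_reg, y, lam=0.98, w=1.0, dead_zone=1e-6):
        # x_reg: regressor vector X(k); lam: forgetting factor; w: sample weight.
        eps = y - x_reg @ theta                     # residual, eq. (30)
        if abs(eps) < dead_zone:                    # dead-zone: stop the update
            return theta, P
        Px = P @ x_reg
        gamma = Px / (x_reg @ Px + lam / w)         # gain, eq. (31)
        theta = theta + gamma * eps                 # parameter update, eq. (30)
        P = (P - np.outer(gamma, x_reg) @ P) / lam  # covariance update, eq. (32)
        return theta, P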
  • 3.3.2 Implementation
  • In [Vogt et al., 2004] it is shown that the RLS (the modified version presented below) converges faster than the NLMS, while the memory requirements are only twice as high when applied to look-up tables. Moreover, the RLS has a convenient way of weighing the leverage of the samples in the estimation, which can be used in actualization algorithms. Consequently RLS and DA will be the standard update algorithms from here on. Next, two ways in which the RLS can be implemented in look-up tables and LLNFMs will be described.
  • An approach for implementing the RLS on look-up tables is given in [Vogt et al., 2004], called modified RLS, where a two-dimensional map is considered. It is assumed that a sample only affects the four surrounding nodes within the current interpolation area, see equations (34)-(38) and Figure 2.

    $$\tilde{\theta} = \begin{bmatrix}\theta_{i,j} & \theta_{i,j+1} & \theta_{i+1,j} & \theta_{i+1,j+1}\end{bmatrix}^T \tag{34}$$
    $$\tilde{\theta}(k) = \tilde{\theta}(k-1) + \gamma(k)\,\varepsilon(k),\qquad \varepsilon(k) = y_k(x) - X^T(k)\,\tilde{\theta}(k-1) \tag{35}$$
    $$\gamma(k) = \frac{P(k-1)X(k)}{X^T(k)P(k-1)X(k) + \lambda/w(k)} \tag{36}$$
    $$P(k) = \frac{1}{\lambda}\big(I-\gamma(k)X^T(k)\big)P(k-1) \tag{37}$$
    $$X(k) = \begin{bmatrix}\Phi_{i,j}(x(k)) & \Phi_{i,j+1}(x(k)) & \Phi_{i+1,j}(x(k)) & \Phi_{i+1,j+1}(x(k))\end{bmatrix}^T \tag{38}$$
  • This continues until measurements are received in another interpolation area.
  • Then the diagonal elements of the covariance matrix P are stored in a variance matrix V in memory. The new diagonal elements of the P-matrix for the new interpolation area are taken from the stored variance matrix, while the covariance (non-diagonal) elements are set to zero. Initially the variance elements can be given relatively large values (100-1000), which gives fast initial convergence. High variance values indicate uncertain values. The flow chart of the modified RLS algorithm is depicted in Figure 3.
  • LLNFM. As mentioned in Section 2.2, the optimization of the linear model parameters is done locally, i.e. the local linear models are adapted one at a time. But contrary to the offline optimization, which optimizes all LLMs with respect to every sample, the online adaptation adapts only those LLMs whose validity function is larger than a threshold, Φi(x) > Φthr, in the operating point of each sample.
  • The choice of this strategy is based on two arguments. The first is obviously a lower computational demand, and the other is to ensure robustness against insufficient excitation of the map. Otherwise, if the samples are not uniformly distributed over all LLMs, the heights of the non-excited LLMs will converge to the heights of the incoming samples despite the small size of their validity functions in the operating point, thereby causing destructive learning of the non-excited LLMs. Non-uniform distribution of the samples is an assumption in the problem description here (see the sampling situations listed above). Therefore Φthr should be large enough so that besides the most active LLM, at most its neighboring LLMs are updated. If no separate actualization algorithm is used, it may be advantageous to have a Φthr small enough so that the neighboring LLMs are updated, which gives a local actualization; this is referred to as broad updating. But when separate actualization algorithms are used, only the most active LLM is updated, to avoid a mixture of two different actualizations.
  • The RLS algorithm applied to a one-dimensional LLNFM with the local adaptation strategy described above is given in equations (39)-(43). Note that the regressor has a one in its first entry due to the bias parameter; also note that the samples are weighted with their validity functions. The covariance matrix of every LLM is always kept in memory.

    $$\theta(k) = \theta(k-1) + \gamma(k)\,\varepsilon(k),\qquad \varepsilon(k) = y_k(\xi) - X^T(k)\,\theta(k-1) \tag{39}$$
    $$\gamma(k) = \frac{P(k-1)X(k)}{X^T(k)P(k-1)X(k) + \lambda/\Phi_i(x(k))} \tag{40}$$
    $$P(k) = \frac{1}{\lambda}\big(I-\gamma(k)X^T(k)\big)P(k-1) \tag{41}$$
    $$\theta = \begin{bmatrix}\theta_{i0}\\ \theta_{i1}\end{bmatrix} \tag{42}$$
    $$X(k) = \begin{bmatrix}1\\ x(k)\end{bmatrix} \tag{43}$$
  • BRIEF DESCRIPTION OF DRAWINGS
  • In the following text, the invention will be described in detail, in some cases with reference to the attached drawings. These schematic drawings are used for illustration only and do not in any way limit the scope of the invention. In the drawings:
  • Figure 1a
    shows a linear interpolation between two nodes in a one dimensional look-up table;
    Figure 1b
    shows the basis functions of the two current nodes;
    Figure 2.
    shows an interpolation area of a two dimensional look-up table;
    Figure 3.
    shows a flow chart of the modified RLS algorithm;
    Figure 4.
    shows tent roof tensioning with DA on a look-up table;
    Figure 5a-5e
    show examples of LPRM, artificial samples;
    Figure 6a-6b
    show example of possible boundaries for permitted actualization in two dimensional look-up tables;
    Figure 7
    shows an actualization cycle of LPRM on 2-D look-up tables;
    Figure 8a-d
    shows a common start map (Fig.8a) and the result of three simulations using this start map;
    Figure 9
    shows an example of a realization of the driving cycle;
    Figure 10
    shows a histogram of the drive cycle realization in Fig. 9;
    Figure 11a-d
    show snapshots of simulations of a first map with TRT, which are taken at the first sample k=1; during the short period of higher engine speed k=60; the highest point of the map k=300 and the last sample k=500;
    Figure 12a-d
    show snapshots of simulations of a first map with LPRM, which are taken at the first sample k=1; during the short period of higher engine speed k=60; the highest point of the map k=300 and the last sample k=500;
    Figure 13a-d
    show snapshots of simulations of a first map with TRT, which are taken at the first sample k=1; during the short period of higher engine speed k=60; the highest point of the map k=300 and the last sample k=500;
    Figure 14a-d
    show snapshots of simulations of a first map with LPRM, which are taken at the first sample k=1; during the short period of higher engine speed k=60; the highest point of the map k=300 and the last sample k=500; and
    Figure 15
    shows a schematic illustration of a combustion engine comprising a control unit provided with a map on which the method can be implemented.
    EMBODIMENTS OF THE INVENTION
  • The online adaptation with respect to periodically incoming samples is done by an update algorithm. In most map architectures the update algorithm acts locally on the map. As stated above, update algorithms applied to measurement samples will only adjust the map in a small region around the sample. This problem is aggravated by the fact that in many applications the measurement samples are rarely or never received in large areas of the input space. Hence one problem to be solved by the invention relates to the development of adaptive static maps which adapt larger areas of the map than the known methods, which only update the map locally around the measurement samples.
  • The primary method considered here for solving this problem is to employ actualization algorithms, which spread the adjustment to larger regions of the map. Hence the purpose of the invention is to provide improved actualization algorithms. The actualization algorithms should fulfil the following requirements:
    • Spread adjustments to larger areas of the map
    • Integrate some form of a priori information about possible variations in the map
    • Not depend explicitly on the form of the map
    • Not spread information to undesired regions of the map
    • Have reasonable memory and computation time requirements
    • Be scalable
    • Remain stable, i.e. the map must remain bounded
    4 - ACTUALIZATION ALGORITHMS
  • If merely an update algorithm is used in online adaptation, the map will only be adapted locally where the new measurement samples are received. Therefore actualization algorithms should be employed. Section 4.1 describes the tent roof tensioning (TRT) algorithm which was presented in Heiss, M. (1997). Online Learning or Tracking of Discrete Input-Output Maps. IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans, Vol. 27, No. 5, September 1997, hereinafter referred to as [Heiss, 1997], and suggests a modified version which is applicable to the current case, while Section 4.2 will introduce a new algorithm for spreading adjustments to larger portions of the map. For simplicity the algorithms are presented in their one-dimensional versions (D=1), while the two-dimensional cases are commented on at the end.
    As mentioned in Section 1.2, a framework has been developed here that gives a clear demarcation between the map representation, the update algorithm, and the actualization algorithm. This is made possible by having the actualization algorithms realize the actualization over the map by placing artificial samples ᵃy(i) in the coordinates of general local models (LM), independent of map representation. The LMs are thereafter updated with the preferred update algorithm. When referring to a specific local model explicitly, the index is written at the upper right of the variable or function in brackets, e.g. ŷ(i) or x(i). The value of a local model i is defined as the map value at the coordinate x(i) of the local model. Hence artificial samples are always generated in the coordinates of the local models. This further implies that the information in the actualization is an estimate of the map value in the coordinates of the LMs, and no information is given of how the details of the map around the coordinates of the LMs should look. Therefore these details should be kept unchanged if possible. Thanks to this framework the actualization algorithms can be described in general terms, are applicable to both map representations presented above, and are compatible with the update algorithms given in Section 3. To simplify notation, the update of a general update algorithm is abbreviated with the operator of equation (44), where the operator acts on the sample yk(ξ) and on ŷk, which signifies all the information of the estimated map, that is, both the map and the variance/covariance matrices stored in memory.

    $$\hat{y}_{k+1} = U\big\{\hat{y}_k,\,y_k(\xi)\big\} \tag{44}$$
  • If look-up tables are used as map representation, the general LMs refer to the individual nodes in the map. Note that when DA is used, the update of a measurement sample will adjust the two surrounding LMs in each dimension, while artificial samples only affect the LM on which the sample is placed. This holds for the modified RLS as well. However, when an artificial sample in, say, LM i is updated with the RLS, the height parameter of the neighboring node i+1 (or i-1) will not change its value, but its variance value in V does change in the update of the P-matrix. Hence the variance value of node i+1 (or i-1) should be kept unchanged in the update.
  • The LLNFM architecture is based on local linear models (LLM); thus the general LMs simply refer to those given by the architecture. The coordinate of LM i is simply given by the center of its validity function, i.e. x(i) = ci.
  • Following the same line of thought as for look-up tables, the artificial samples should only change the bias parameter of the actualized LLM and the rake parameter should be unchanged. If the RLS is used as update algorithm, only the bias parameter and its variance value should be changed. The other values; the rake parameter and its variance value and also the non-diagonal covariance values in the P-matrix should be kept unchanged in the update.
  • With this framework established, where the actualization algorithms generate artificial samples, a central question arises: how should the actualization algorithms estimate the values of the artificial samples? The remainder of this chapter will show how the two described actualization algorithms solve this problem quite differently.
  • 4.1 Tent Roof Tensioning
  • 4.1.1 Original Version
  • The algorithm was given in [Heiss, 1997], where it is implemented on a look-up table with a discrete input space, i.e. X ⊂ ℤ^D. Thus no interpolation is done between the nodes; each allowed input is associated with a single height and the update is done with simple direct adjustment (DA). The algorithm begins by updating the current operating point x = ξ of the map. Thereafter the surrounding points within the distance r from the updated point are adapted. The adaptation is based on a linear interpolation between the recently updated point ŷk+1(ξ) and the points located at the distance r from it. The linear interpolation gives the appearance of a tent roof with its center in the operating point ŷ(ξ), which has given the algorithm its name. The algorithm is summarized in Algorithm 4.1.
  • Algorithm 4.1: Tent Roof Tensioning
    $$\begin{aligned} \hat{y}_{k+1}(\xi) &= y(\xi)\\ \hat{y}_{k+1}(x_i) &= y(\xi) + \frac{\hat{y}_k(\xi+r)-y(\xi)}{r}\,(x_i-\xi), && \text{if } 0 < x_i-\xi < r\\ \hat{y}_{k+1}(x_i) &= y(\xi) + \frac{\hat{y}_k(\xi-r)-y(\xi)}{r}\,(\xi-x_i), && \text{if } 0 < \xi-x_i < r\\ \hat{y}_{k+1}(x_i) &= \hat{y}_k(x_i), && \text{else} \end{aligned}$$
  • A problem can occur when the operating point x = ξ is located near a boundary of the map and the point ξ ± r ∉ X lies outside the map. This can be solved by horizontal extrapolation of the end points, i.e. if ξ + r > M then ŷ(ξ+r) = ŷ(M), and analogously when x < 1 [Heiss, 1997]. Additionally, [Heiss, 1997] gave an analytical proof which guarantees that the algorithm is stable and that it converges to a small error band around the measured map.
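  • A minimal Python sketch of the original TRT on a one-dimensional look-up table with a discrete input grid, including the horizontal boundary extrapolation (names hypothetical, not part of the patent text):

    def tent_roof_tensioning(y_map, xi, y_meas, r):
        # y_map: list of heights over the discrete grid 0..M-1; xi: integer
        # operating point; y_meas: new measurement; r: tent radius.
        M = len(y_map)
        right = y_map[min(xi + r, M - 1)]  # boundary handling by extrapolation
        left = y_map[max(xi - r, 0)]
        y_map[xi] = y_meas                 # direct adjustment in the operating point
        for d in range(1, r):              # points strictly inside the tent roof
            if xi + d < M:
                y_map[xi + d] = y_meas + (right - y_meas) * d / r
            if xi - d >= 0:
                y_map[xi - d] = y_meas + (left - y_meas) * d / r
        return y_map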
  • 4.1.2 Modified Version
  • As mentioned above, the algorithm was in its original version given with DA as its update algorithm and with a discrete input space. This section suggests how the algorithm can be modified to the framework of generating artificial samples in the coordinates of LMs, which was presented above. This makes it applicable to both LLNFMs and look-up tables with a continuous input space, and compatible with the preferred update algorithm.
  • Look-up table. When a sample is received in the operating point x = ξ between local models i and i+1, the preferred update algorithm updates the two LMs as described in Section 3. Thereafter artificial samples ᵃy(j) are created in the center of every LM located between LMs i and i-r, and between i+1 and i+r+1. Subsequently the LMs within the "tent roof" are updated with the update algorithm. Note that the tent roof is formed by linear interpolation from the values of local models i and i-r in one direction, and from local models i+1 and i+r+1 in the other direction. Figure 4 gives an example of the TRT applied on a look-up table with DA, using Algorithm 4.2. The radius of the tent roof is set to r = 4, and consequently 3 artificial samples are created in each direction of the operating point.
  • Algorithm 4.2: Modified Tent Roof Tensioning
    • Update the operating point:
      $$\hat{y}_{k+1} = U\big\{\hat{y}_k,\,y_k(\xi)\big\}$$
    • Create artificial samples:
      $${}^{a}y_{k+1}^{(j)} = \hat{y}_{k+1}^{(i+1)} + \frac{\hat{y}_k^{(i+1+r)}-\hat{y}_{k+1}^{(i+1)}}{x^{(i+1+r)}-x^{(i+1)}}\,\big(x^{(j)}-x^{(i+1)}\big),\qquad j \in \{i+2,\,i+3,\,\ldots,\,i+r\}$$
      $${}^{a}y_{k+1}^{(j)} = \hat{y}_{k+1}^{(i)} + \frac{\hat{y}_k^{(i-r)}-\hat{y}_{k+1}^{(i)}}{x^{(i)}-x^{(i-r)}}\,\big(x^{(i)}-x^{(j)}\big),\qquad j \in \{i-1,\,i-2,\,\ldots,\,i+1-r\}$$
    • Update the actualized LMs:
      $$\hat{y}_{k+1} = U\big\{\hat{y}_k,\,{}^{a}y_{k+1}^{(j)}\big\},\qquad j \in \{i+2,\,\ldots,\,i+r\}\cup\{i-1,\,\ldots,\,i+1-r\}$$
  • Figure 4 illustrates tent roof tensioning with DA on a look-up table, wherein artificial samples are indicated by the marker shown in the figure, a measurement sample by (∗), the estimated map before adaptation by (-·-), and the estimated map after adaptation by (__).
  • When the algorithm is applied on two-dimensional look-up tables, it starts by placing a base for the tent, which forms a square around the updated local models (i1, i2), (i1, i2+1), (i1+1, i2), (i1+1, i2+1). The square is given by connecting the nodes between the corner nodes (i1-r, i2-r), (i1-r, i2+1+r), (i1+1+r, i2-r), (i1+1+r, i2+1+r). The tent roof is subsequently formed by linear interpolation between the base square and the square formed by the nodes in the interpolation area of the operating point, (i1, i2), (i1, i2+1), (i1+1, i2), (i1+1, i2+1), and all the nodes within this roof are given artificial samples, with values given by the tent roof at the coordinates of the actualized LMs.
  • LLNFM. Implementing the algorithm on a one-dimensional LLNFM is very similar to the look-up table case given above. The tent roof is formed by linear interpolation between LM (i) and LMs (i-r) and (i+r). Note that the tent roof is in general asymmetric with respect to (i) due to the non-uniform distribution of the LMs, i.e. |x(i) - x(i-r)| ≠ |x(i) - x(i+r)|.
  • If the LLNFM is of two dimensions, a simple solution is to form a tent base at the distance r irrespective of the LMs, i.e. x(i1±r, i2+j), x(i1+j, i2±r), j ∈ {-r, …, +r}, and to actualize all LMs within the tent base.
  • A more sophisticated solution is to form the tent base around the operating point by first identifying all LLMs with a validity function equal to a threshold at the coordinate x(i) of the LM where the operating point lies, i.e. all j where Φj(x(i)) = ΦtentBase. The tent base is thereafter formed by drawing straight lines between the coordinates of all the identified LMs j which surround LM i. The values on the tent base needed for the tent roof can be read directly from the value of the map at the needed coordinate. All local models l ≠ i within this tent base, i.e. all l where Φl(x(i)) > ΦtentBase, are subsequently actualized analogously to the case of two-dimensional look-up tables. Hence the radius of the tent base is determined by ΦtentBase instead of by r. Moreover, the tent base does not have the form of a square but of a polygon, due to the non-uniform distribution of the local models.
  • 4.2 Local Pattern Regression Models
  • The first subsection will give the basic version of the algorithm and the second will discuss various extensions to it.
  • 4.2.1 Basic Version
  • In this algorithm there exists a local straight-line model in every transition between two adjacent LMs, equation (45); these are referred to as regression models. They give an estimated relationship between the values of two neighboring LMs. In this actualization algorithm the values of the artificial samples are given by these local pattern regression models. If, for example, an artificial sample is to be generated in LM i+1, its value is estimated from the value of a neighboring LM, e.g. i, with the regression model between the two of them. Note that equation (45) is independent of the input coordinate x of the map; merely the value of LM i determines the value of the artificial sample in LM i+1. Moreover, there is a unique regression model in each direction between every pair of adjacent LMs, i.e. (β0(i,i+1), β1(i,i+1)) and (β0(i+1,i), β1(i+1,i)). The estimated value of ŷ(i+1) in equation (45) is denoted with a capital letter to indicate that it is an estimate.

    $$Y^{(i+1)} = \beta_0^{(i,i+1)} + \beta_1^{(i,i+1)}\,\hat{y}^{(i)} \tag{45}$$
  • The basic version of the algorithm starts by updating the operating point ŷk+1(ξ), which lies between, say, local models i and i+1. Thereafter artificial samples are created in local models i-1 and i+2, and the update algorithm is applied to these samples. Subsequently artificial samples are created in local models i-2 and i+3 from the levels of local models i-1 and i+2, respectively. This continues until the whole map is adapted. The basic version of the local pattern regression models (LPRM) algorithm, implemented on a one-dimensional look-up table, is summarized in an iterative form in Algorithm 4.3.
  • The individual regression models are defined by two parameters: rake β1 and bias β0. With these, different patterns between the two belonging local models can be represented. A change of the height in LM i can, for example, result in a large or a small height change in LM i+1. It is even possible to have a negative rake parameter, which results in a height increase in i+1 when the height of i decreases. With this discussion in mind, and remembering that each transition has a unique regression model, one may conclude that these simple regression models can give complex patterns in the actualization of the map. This conclusion is verified by the simulations in Section 5.
  • Algorithm 4.3: Basic Local Pattern Regression Models
    1. Update the operating point: ŷk+1 = U{ŷk, yk(ξ)}, affecting LMs i and i+1.
    2. Create artificial samples in the adjacent LMs from the regression models, eq. (45): ᵃyk+1(i+2) = β0(i+1,i+2) + β1(i+1,i+2) ŷk+1(i+1), and analogously for LM i-1 from LM i.
    3. Update the actualized LMs: ŷk+1 = U{ŷk, ᵃyk+1(j)}.
    4. Repeat steps 2-3 outwards (LMs i+3, i-2, and so on) until the boundaries of the map are reached.
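  • A minimal Python sketch of this basic actualization cycle on a one-dimensional look-up table, with DA as the update operator; storing the directed regression parameters as matrices is a hypothetical choice for illustration only:

    def lprm_actualize(theta, beta0, beta1, i):
        # theta: node heights; beta0[j, k], beta1[j, k]: parameters of the
        # directed regression model from LM j to adjacent LM k.
        M = len(theta)
        for j in range(i + 2, M):          # spread upwards from LM i+1
            theta[j] = beta0[j - 1, j] + beta1[j - 1, j] * theta[j - 1]  # eq. (45) + DA
        for j in range(i - 1, -1, -1):     # spread downwards from LM i
            theta[j] = beta0[j + 1, j] + beta1[j + 1, j] * theta[j + 1]
        return theta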
  • Figures 5a-5e illustrate an example of LPRM, with artificial samples indicated by the marker shown in the figure, a measurement sample by (∗), the estimated map before adaptation by (-·-), and the estimated map after adaptation by (__). A simple example of the algorithm with DA used as update algorithm is illustrated in Figures 5a-5e. The example starts with the map receiving a measurement sample y between LM 1 and LM 2 in Fig. 5a. These LMs are subsequently updated with the DA algorithm. Thereafter an artificial sample ᵃy(3) is created in LM 3, with the value given by the regression model that estimates the value in LM 3 from LM 2 (regression model 2-3), as shown in Fig. 5b, and LM 3 is accordingly updated with respect to the artificial sample, as shown in Fig. 5c. The same procedure follows by forming an artificial sample in LM 4 based on the level of LM 3 and their intermediate regression model: Fig. 5d shows the estimation of an artificial sample in LM 4 from LM 3, and in Fig. 5e LM 4 receives the artificial sample.
  • 4.2.2 Extended Version
  • In this section three extensions to the basic version are presented. The first is a method which weighs the artificial samples according to their uncertainty; the second extension is a way to limit the actualization to areas which have not received real samples within a predetermined time; and the third extension is a method to adapt the regression models online. Some extensions and associated problems assume that RLS is used as update algorithm; thus the RLS will be the standard update algorithm in this subsection. At the end of the subsection, modifications due to map representation and map dimension are discussed.
  • Weighing artificial samples. One problem with the basic version is that the uncertainty of the values of the generated artificial samples increases with the distance from the operating point. This is because the generated samples are based on estimations from linear regression models and these are associated with an uncertainty, i.e. their confidence intervals.
  • Here it is assumed that the training data used for the optimization is normally distributed with mean equal to the regression model. This implies that the confidence intervals follow the Student-t distribution. This is described in Milton, J.S. and Arnold, J.C. (1995). Introduction to Probability and Statistics - Principles and Applications for Engineering and the Computing Sciences, hereinafter referred to as [Milton and Arnold, 1995]. Equations (46)-(48) [Milton and Arnold, 1995] give the confidence interval conf_{i,i+1} when an artificial sample is generated in LM i+1 and estimated from the level of LM i. The parameter t_{α/2} is given by the Student-t distribution with respect to the number of samples N and the degree of confidence 100(1-α). The choice of the degree of confidence is arbitrary because the confidence intervals will be transformed, as described below.

    $$conf_{i,i+1} = t_{\alpha/2}\,S\,\sqrt{1+\frac{1}{N}+\frac{\big(y^{(i)}-\bar{y}^{(i)}\big)^2}{S_{y^{(i)}y^{(i)}}}} \tag{46}$$
    $$S_{y^{(i)}y^{(i)}} = \sum_{j=1}^{N}\big(y_j^{(i)}-\bar{y}^{(i)}\big)^2 \tag{47}$$
    $$S = \sqrt{\frac{\sum_{j=1}^{N}\big(\beta_0^{(i,i+1)}+\beta_1^{(i,i+1)}y_j^{(i)}-y_j^{(i+1)}\big)^2}{N-2}} \tag{48}$$
  • This uncertainty can be taken into consideration by making use of the possibility of weighing the samples in the RLS algorithm. The sizes of the weights are based on the sum of the confidence intervals of the regression models used between the artificial sample and the operating point. The values of the weights w used in the update must have the relationship to the sum of confidence intervals confSum given in conditions (49) below. The sum confSum is formed by summing all confidence intervals associated with every artificial sample estimated from the operating point in LM 1 up to the latest artificial sample in LM i+1, according to equation (50). The transformation is done with a decreasing exponential function according to equation (51).

    $$w \to 0 \ \text{when}\ confSum \to \infty;\qquad w \to 1 \ \text{when}\ confSum \to 0 \tag{49}$$
    $$confSum^{(1,i+1)} = conf_{i,i+1} + conf_{i-1,i} + \cdots + conf_{1,2} \tag{50}$$
    $$w^{(i,i+1)} = \exp\big(-\delta \cdot confSum^{(1,i+1)}\big) \tag{51}$$
  • The choice of the exponential function is based on the following arguments: it is a well-known and easily predictable function; it quickly converges to zero when confSum begins to grow; and the convergence rate is easily tuned by the constant δ. Other choices of transformation are naturally possible.
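  • A minimal Python sketch of this weighting (names hypothetical, not part of the patent text):

    import math

    def artificial_sample_weight(conf_intervals, delta):
        # conf_intervals: confidence intervals of all regression models used
        # between the operating point and the actualized LM, eq. (50).
        conf_sum = sum(conf_intervals)
        return math.exp(-delta * conf_sum)  # eq. (51); satisfies conditions (49)

    For example, with three accumulated confidence intervals 0.3, 0.4 and 0.6 and δ = 1, the weight becomes exp(-1.3) ≈ 0.27 (illustrative numbers only).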
  • The above solution requires N sample pairs to be stored in each transition. This extra memory requirement can be omitted if the regression models are not re-estimated online (which is described below), or if the accuracy of the confidence intervals is of negligible importance. Then the confidence intervals can be given a fixed value.
  • There is no use in updating areas when the sum of confidence intervals has grown to the point where the weight has become so small that the effect of the artificial samples is insignificant. So a restriction w > w_limit is included in the algorithm; partly because the artificial samples make the RLS update forget older, more significant samples and replace them with highly uncertain artificial samples, and partly to save computation time.
  • Restricting actualization. Another problem with the basic version is that it generates artificial samples in areas which have recently been updated by real measurement samples. It must be clear that the artificial samples are only an estimate of the measured map; the ideal situation is to receive real samples in all LMs regularly. Hence the spreading of artificial samples should be restricted to areas that have not been updated with real measurement samples during a predetermined time T_act. Consequently it is necessary to measure the time K(i) since the last time each local model was updated with a measurement sample. Neither should the spreading cycle continue on the other side of a newly updated LM, because the newly updated LM has recently actualized those areas with more accurate artificial samples. Therefore the actualization cycle is stopped when a newly updated local model is reached. Here a central definition must be stated. Two types of changes that occur in the measured map can be distinguished:
    • Expected changes: the map varies in the way that is expected by the regression models, yk(i) = Y(i) (probably the more frequent case).
    • Structural changes: the map varies in a new and unknown way, contrary to the estimation of the regression models, yk(i) ≠ Y(i).
  • The parameter T_act is determined by minimizing the error of the map, e_model = |y(i) - ŷ(i)|. When artificial samples are not generated in LM i, the approximate local maximum error in the LM is reached just before actualization is allowed, i.e. e_avg(i) = ẏ_avg · T_act, where ẏ_avg is the average change rate of the map. On the other hand, when artificial samples are generated in local model i, the error converges to the error in the regression model, e_reg(i) = |y(i) - Y(i)|. The error in the regression model depends e.g. on the accuracy of the offline optimization, the number of samples N used for the regression, the rate of structural changes in the map, and the frequency of the optional online re-estimation of the regression models, which will be described below. Thus the optimization of the parameter T_act is complex and depends on many uncertain parameters. Summarizing the trade-off: if ẏ_avg is high, T_act should be small, and if the structural change rate is believed to be high or the regression models are uncertain in any way, T_act should be large.
  • Due to the forgetting factor λ in the RLS update algorithm, real measurement samples will be forgotten when artificial samples are spread to the local model, which starts when K(i) > T_act. Therefore λ may be given a higher value, λ = λ_art, when it is acting on artificial samples, so that real measurements are not forgotten too quickly.
  • Regression model adaptation. It is possible to adapt the regression models online, so that they can keep track of structural changes in the map. This is done by storing N pairs of heights of adjacent local models, where the levels are measured at the same time. How these height pairs are collected is described below. By replacing older stored sample pairs with new ones, the local pattern regression model can be re-estimated.
    The theory of estimating linear regression models is based on the Gauss-Markov assumptions, which are given below.
    • The random residuals ε i have expected value 0
    • They are independent
    • They are normally distributed
    • They all have the same variance, i.e. homogeneous variance
  • Here only simple straight-line linear regression models have been used, though it is possible to create more complex regression models. The rationale is that two adjacent local models ought to vary in a similar way, and that the model should be kept as simple as possible to minimize memory and computational requirements. However, if the variation pattern diverges from a linear pattern, the result is a biased estimation of the model parameters. This is described in Rawlings, J.O., Pantula, S.G., and Dickey, D.A. (1998). Applied Regression Analysis - A Research Tool. Springer-Verlag New York, Inc, hereinafter referred to as [Rawlings et al., 1998].
  • Structural changes in the map will have the effect that none of the Gauss-Markov assumptions are valid except normality. This can be interpreted as older measurements being more uncertain than newer ones, i.e. heterogeneous variance, var(εi) ≠ var(εj). The negative effect on the estimation caused by heterogeneous variance can be reduced by using weighted least squares in the regression model estimation [Rawlings et al., 1998]. The weights are set so that newer samples have a bigger impact on the estimation than the older ones. If the rate of structural changes is fast, the weight difference should be greater; on the other hand, if it is slow, the weight difference can be set smaller, which gives the older values greater influence on the model estimation. This weight decrease is handled by the "weight decrease" algorithm given in the Appendix.
  • In the basic least squares estimation, outliers have a large leverage on the values of the model parameters, because the estimation is based on squared errors. This can have the effect that one or a few less accurate measurements severely distort the estimation. But the outliers should not be discarded completely, because they could be the result of structural changes. This problem is solved by introducing a limit on the distance an outlier is permitted to be from the expected value given by the regression model. If an outlier is measured outside this limit, it is discarded and an artificial measurement is placed on the current limit. Furthermore, maximum and minimum values must be set on the parameters of the regression models, or formal inequality constraints formed on the output of the regression models, to ensure stability.
  • In the case of non-normality only the estimation of the confidence intervals will be affected while the parameter estimates are unaffected. Moreover normal distribution is a reasonable assumption in most cases [Rawlings et al., 1998].
  • The number of saved samples N used for the estimation of the regression models is determined by the following pros and cons. The major benefits with many samples are robustness against noise and more accurate models, while the benefits with few samples are smaller memory requirements and faster adaptation to structural changes.
  • Collecting new samples for the regression models. Note first that the samples used for the estimation of the regression models are the heights of the LMs, not the incoming measurement samples. Samples for the re-estimation of the regression models are only collected if two adjacent local areas are updated with real samples within a predetermined period T_collect. Otherwise changes may occur in the map during the time between the two samples, which leads to incorrect estimation. Thus an upper limit of how fast the map changes, the "maximum change rate", must be determined before implementation; it is defined by equation (52).

    $$\dot{y}_{max} \ge \max_{x,t}\left|\frac{\partial y(x,t)}{\partial t}\right| \tag{52}$$
  • With this value known and a maximum error tolerance e_collect specified, the time limit T_collect between the two samples can be determined. For clarification, the error tolerance refers here to the error in the collected sample pair due to variation of the map between the two measurements, as defined by equation (53). By setting the change rate to ẏ_max and integrating, the time period T_collect in equation (54) follows.

    $$\int_{0}^{T}\dot{y}\big(x(t),t\big)\,dt = e \tag{53}$$
    $$T_{collect} = \frac{e_{collect}}{\dot{y}_{max}} \tag{54}$$
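  • As a purely illustrative worked example (numbers not from the original): if the highest tolerated change rate is ẏ_max = 0.5 map units per second and the error tolerance is e_collect = 2 map units, equation (54) gives T_collect = 2/0.5 = 4 s, i.e. height pairs collected more than 4 s apart would not be used for re-estimation.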
  • The essence of this is that maps with high change rates must have a small T_collect, while for maps with low change rates the time limit can be set higher. Thus maps with high change rates and low sampling frequency will seldom re-estimate their regression models, while maps with low change rates and high sampling frequency will re-estimate them more often. To avoid the occurrence of too high a change rate, and thus large errors in the regression models, a restriction is included in the extended Algorithm 4.4, under task 1. Another cause of the same error is actualization between the two collected pairs, which can occur if T_act < T_collect.
  • Another problem that can lead to incorrect model estimation when RLS is used is if one of the local models ŷ(i) has been updated frequently and its value has converged to the values of the samples, while the adjacent model ŷ(i-1) has been updated just once during a long period. This single sample may not move the height of the local model near the correct level, because of the inertia in the RLS algorithm. Note that this problem does not exist when DA is used as update algorithm. It is solved by keeping track of the time K(i-1) since a measurement was received in the LM; if enough time has passed since the last update, one can assume that the latest measurement is much more relevant than the old ones. Thus artificial samples should be created so that the bias level is adjusted to the proximity of the newest measurement. Thereafter the regression model can be re-estimated. See Algorithm 4.4 under task 3 for details. When updating with these artificial samples, the forgetting factor can advantageously be given a smaller value so that the old samples are forgotten, and hence the next incoming sample is not dominated by the recently generated artificial samples.
  • A third problem that can lead to incorrect estimation of the regression models, due to how the samples are collected, is high measurement error. This is especially a problem when estimating models in regions where measurements occur seldom.
  • It is important how the old samples are replaced by new ones. If the old samples are replaced in aged order, there is a high risk that the samples will cluster in a small region, which leads to an uncertain estimation in regions outside the cluster. To overcome this problem, the data used for the estimation should span a wide range of the movement of the bias levels. The pseudo code for the replacement algorithm can be found in the Appendix.
  • Algorithm 4.4: Extended Local Pattern Regression Models
    • 1. Start, check change rate: One measurement sample yk(ξ) is received between LMs i and i+1. If the change rate of the map is too high, readjust the incoming sample to an acceptable value.
    • 2. Update the current LMs: ŷk+1 = U{ŷk, yk(ξ)}.
    • 3. Readjust old LM (not used with DA): If the current LMs have not been updated for a long time, make sure that the map is brought sufficiently close to the latest measurement.
• 4. Check requirements for re-estimation: Check whether any of the associated regression models should be re-estimated. Requirements:
    a) Any of the LMs neighboring the current LMs (i), (i+1), i.e. ŷ(i-1) or ŷ(i+2), has been updated with a measurement sample within the time window Tcollect.
    b) No non-steady-state situation has occurred between the two measurements.
• 5. Re-estimate regression models: If the requirements in step 4 are fulfilled for LM i-1 or LM i+2, save the values of the two adjacent LMs and re-estimate the two regression models between them. In the following, assume that LM i+2 fulfills the requirements.
a) If either of the two bias levels is an outlier, e.g. $|Y^{(i+1,i+2)} - \hat{y}^{(i+2)}| > outlierLimit$, adjust its value to an acceptable one. Do not use the adjusted sample for updating the map later, but do use it for estimating the regression model.
b) Save these values for the estimation of the regression models between the two LMs, and discard an old sample pair with the replacement algorithm given in the Appendix.
c) Adjust the weight matrix for the estimation of the regression model so that newer samples have greater leverage in the LS-algorithm, using the algorithm given in the Appendix.
d) Estimate the two regression models in both directions. Note that the statistics Syy, S, and ȳ change their values due to the re-estimation. Ensure that the regression model parameters are within their allowed values.
• 6. Create artificial samples: Spread artificial samples from LMs i and i+1 with recursive functions to all the adjacent LMs and beyond. The values of the samples are estimated from the bias levels of LMs i and i+1 and their respective regression models. The samples are placed at the coordinates of the LMs. The actualization is given below in an iterative form; a code sketch of the spread follows after the algorithm.
[Iterative actualization formulas given as images imgb0081-imgb0082 in the source, not reproduced]
The recursive functions will continue to spread artificial samples beyond the adjacent LMs until some restriction is fulfilled; when that happens, the recursion stops and does not continue beyond the LM where the restriction was met. The restrictions are given below:
a) The end of the map is reached.
b) An LM has recently been updated with a real sample, i.e. K(j) < Tact.
c) The weight w(j,j-1), which is based on the sum of the confidence intervals, has diminished to an insignificant value.
    • 7. Wait: Here the algorithm has completed its cycle and will wait for the next incoming measurement sample.
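• To make the mechanics of step 6 concrete, below is a minimal sketch of the rightward half of a one-dimensional actualization cycle, under stated assumptions: the regression chain Y(j+1) = β0(j,j+1) + β1(j,j+1)·ŷ(j) generates the artificial samples, and the simple multiplicative weight decay stands in for the confidence-interval based weight of the patent.

```python
# Sketch (assumptions noted above): spread artificial samples to the right of
# an updated LM until one of restrictions a)-c) of step 6 stops the recursion.
def actualize_right(yhat, b0, b1, K, start, T_act, w_limit, decay, update):
    """yhat: bias levels per LM; b0, b1: regression parameters per transition;
    K: samples since each LM last received a real measurement."""
    w = 1.0
    for j in range(start, len(yhat) - 1):   # restriction a): stop at map border
        if K[j + 1] < T_act:                # restriction b): recently measured LM
            break
        w *= decay                          # illustrative confidence decay
        if w < w_limit:                     # restriction c): weight insignificant
            break
        y_art = b0[j] + b1[j] * yhat[j]     # artificial sample for LM j+1
        yhat[j + 1] = update(yhat[j + 1], y_art, w)
    return yhat

# e.g. with a weighted direct adjustment as update law (an assumption):
da = lambda old, y, w: (1.0 - w) * old + w * y
```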
    Summary of parameter settings
λ
  Forgetting factor used for updating measurement samples (RLS)
λart
  Forgetting factor used for updating artificial samples from regression models (larger) (RLS)
λold
  Forgetting factor used for updating measurement samples when the current LM has not been updated for a long time, i.e. K(i) is large (smaller) (RLS)
δ
  Determines the convergence rate of the weights (RLS)
wlimit
  Minimum weight for continued actualization (RLS)
Tact
  Time limit for how soon after the latest received measurement in the current LM actualization is permitted
Tcollect
  Maximum time limit between two collected samples for re-estimating regression models
ẏavg
  Probable average change rate of the map (RLS)
ẏmax
  Highest tolerated change rate
εold
  Highest tolerated residual between a not recently updated LM and a new measurement sample (RLS)
β0max, β1max
  Highest allowed values of the regression parameters
β0min, β1min
  Smallest allowed values of the regression parameters
outlierLimit
  Outlier limit for regression model re-estimation
N
  Number of stored samples used for estimation of the regression models
• Modifications. If the algorithm is applied to two-dimensional look-up tables, it starts by creating a public variable in the form of a matrix A which keeps track of which LMs are allowed to be actualized. Each time an LM is updated, its corresponding A-value is set to forbidden. The A-matrix is initialized before each actualization cycle so that the actualization stops at newly updated LMs (j1, j2), i.e. where K(j1, j2) < Tact. Moreover, a boundary can be formed with respect to the LM of the operating point (i1, i2) and a newly updated LM (j1, j2), so that no actualization is done beyond it. Figures 6a-6b show examples of possible boundaries for permitted actualization in two-dimensional look-up tables; empty nodes mark the start point of actualization, filled nodes represent K(j1, j2) ≥ Tact, and striped nodes represent K(j1, j2) < Tact. In Figure 6a the boundary is formed by stopping actualization past the cross section of the map with respect to the newly updated LM. Other geometrical boundaries are possible, e.g. a triangular shape from (j1, j2) with respect to (i1, i2), as shown in Figure 6b.
• The actualization is done by initializing actualization cycles from the local models (i1, i2), (i1, i2+1), (i1+1, i2), (i1+1, i2+1), which are associated with the interpolation area of the operating point. Algorithm 4.5 gives the actualization cycles in an iterative form, and a code sketch of the bookkeeping follows after it. Figure 7 depicts an example of the actualization procedure in a two-dimensional look-up table. Otherwise the algorithm is straightforwardly derived from the one-dimensional case.
• Algorithm 4.5: 2-D (look-up table) LPRM
  / Actualization cycle from LM (i1, i2): [pseudo code given as images imgb0083-imgb0084 in the source, not reproduced]
  / Actualization cycle from LM (i1, i2+1): [images imgb0085-imgb0087, not reproduced]
  / Actualization cycle from LM (i1+1, i2): [images imgb0088-imgb0090, not reproduced]
  / Actualization cycle from LM (i1+1, i2+1): [images imgb0091-imgb0093, not reproduced]
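• The A-matrix bookkeeping can be sketched as follows. This is a minimal illustration, not the patent's pseudo code: the breadth-first spread stands in for the outer/inner loop formulation of Figure 7, and `estimate` is a placeholder for the regression-model evaluation with its weight checks.

```python
import numpy as np

def init_allowed(K, T_act):
    """A[j1, j2] is True while LM (j1, j2) may still be actualized this cycle;
    recently measured LMs (K < T_act) are forbidden from the start."""
    return K >= T_act

def actualize_cycle(yhat, A, start, estimate):
    """Spread artificial samples outward from `start`, marking each visited
    LM forbidden so overlapping cycles do not actualize it twice."""
    frontier = [start]
    A[start] = False
    while frontier:
        j1, j2 = frontier.pop()
        for d1, d2 in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            n = (j1 + d1, j2 + d2)
            if 0 <= n[0] < A.shape[0] and 0 <= n[1] < A.shape[1] and A[n]:
                y_art = estimate((j1, j2), n)   # None => a restriction was met
                if y_art is None:
                    continue
                yhat[n] = y_art                 # direct adjustment for brevity
                A[n] = False
                frontier.append(n)
    return yhat, A
```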
• Figure 7 shows an actualization cycle of LPRM on 2-D look-up tables, with the LMs within the interpolation area of the operating point indicated by circles (o), an outer loop indicated by solid arrows, and an inner loop indicated by dash-dotted arrows.
• If the algorithm is applied to a one-dimensional LLNFM, no major modifications are needed. Note however that in the two-dimensional case there are generally several transitions between adjacent LMs in each dimension, due to the non-uniform distribution of the LLMs. The actualization cycle can be implemented similarly to Algorithm 4.5, but here the cycle starts from one LM instead of four as in the case of look-up tables.
  • 4.2.3 Memory Requirements
• The additional memory required when the local pattern regression models algorithm is used is analyzed here. The analysis is only done with look-up tables as map representation, because the number of transitions between adjacent LMs is arbitrary in two- and higher-dimensional LLNFM, while the number of transitions is always two in each dimension (for non-border LMs) in look-up tables. Furthermore, the small number of constants which do not depend on the number of local models is omitted.
• The memory requirements for the basic version of the algorithm are two parameters β0, β1 for each transition between neighboring LMs. Note that there is one regression model in each direction of every transition, but because the regression models are easily invertible it is sufficient to save one of them; how they are inverted is shown in Algorithm 4.6 below. When the algorithm is applied to one-dimensional look-up tables, the number of transitions adds up to (M-1), and thus 2(M-1) parameters are stored in memory. In two-dimensional look-up tables the number of transitions in the first dimension is (M1-1)M2 and in the other dimension M1(M2-1); thus the total number of stored parameters is 2(M1(M2-1) + (M1-1)M2).
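• Since each transition model is affine, storing one direction suffices; assuming β1 ≠ 0, the reverse model follows directly, as in this two-line sketch:

```python
# Forward model y_next = b0 + b1 * y_prev; the reverse-direction model is
# obtained by inversion (requires b1 != 0), so only one parameter pair is stored.
def invert(b0, b1):
    return -b0 / b1, 1.0 / b1   # reverse pair: y_prev = b0r + b1r * y_next
```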
• The extended version needs some additional stored parameters. First, every LM needs to keep track of the time since it last received a measurement sample, K(i); thus M additional variables in one-dimensional look-up tables and M1M2 in two-dimensional tables. When online adaptation of the regression models is incorporated, the algorithm needs to store 2N samples for each transition, which sums to 2N(M-1) in one-dimensional maps and 2N(M1(M2-1) + (M1-1)M2) in the two-dimensional case. The total number of stored variables and constants is given below. In the extended version the regression model parameters can be calculated each time they are needed, thereby saving memory at the expense of computation time; the expressions below give the minimum memory requirement, and a small helper transcribing them follows after the list.
  • Basic version for one dimensional look-up tables:
    • 2(M-1)
  • Extended version for one dimensional look-up tables:
    • 2N(M-1) + M
  • Basic version for two dimensional look-up tables:
    • 2(M 1(M 2-1) + (M 1-1) M 2)
  • Extended version for two dimensional look-up tables:
    • 2N(M 1 (M 2 -1) + (M 1 -1) M 2) + M 1 M 2 = 2N(2 M 1 M 2 - M 1 - M 2) + M 1 M 2
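• These expressions transcribe directly into a small sizing helper (a sketch; the function name and interface are illustrative):

```python
def lprm_memory(M1, M2=None, N=None):
    """Minimum stored values for the basic (N is None) or extended version,
    for 1-D (M2 is None) or 2-D look-up tables, per the expressions above."""
    if M2 is None:
        transitions = M1 - 1
        return 2 * transitions if N is None else 2 * N * transitions + M1
    transitions = M1 * (M2 - 1) + (M1 - 1) * M2
    return 2 * transitions if N is None else 2 * N * transitions + M1 * M2

assert lprm_memory(10) == 18                 # basic, 1-D
assert lprm_memory(10, N=5) == 100           # extended, 1-D: 2*5*9 + 10
assert lprm_memory(4, 5, N=5) == 330         # extended, 2-D: 2*5*31 + 20
```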
    4.2.4 Offline Optimization
• The initial optimization of the regression models is done by generating samples from two maps, which constitute the a priori information for the regression model parameters. These are referred to as the first and second boundary maps, yboundary1(X) and yboundary2(X). They are supposed to enclose the maximum probable variation interval, (max(y0≤t<∞(x)), min(y0≤t<∞(x))), that the map will have during operation. These maps might be given by the start map and a probable future map measured from e.g. a used engine, or simply a qualified guess. It is important that the boundary maps approximately enclose the variation interval. Otherwise the accuracy of the artificial samples decreases, which is reflected by larger confidence intervals. This occurs when the heights of the local models are far from the average bias level ȳ of the N samples used for estimation of the regression models. Another consequence of badly placed boundary maps is that the replacement algorithm used in the re-estimation of the regression models may only replace one or a few of the stored samples. Thus it is important that the boundary maps closely demarcate the interval of the bias variations of the map. If one of the boundary maps is known to be near the center of the probable variation interval of the map, it should be moved to the probable boundary of the interval. If this is done, the details in the map should be linearly extrapolated with respect to the other map.
• The offline optimization algorithm below generates samples by linear interpolation between the two boundary maps. Each local model is given N samples, and these are naturally placed at the centers of the local models x(i). A sketch in code follows after the algorithm.
  • Algorithm 4.6: Offline Beta Optimization
[Pseudo code given as images imgb0094-imgb0095 in the source, not reproduced]
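• A minimal sketch of the offline optimization, under stated assumptions: S maps interpolated between the two boundary maps supply the samples, and each transition's (β0, β1) is fitted by ordinary least squares. All names and the value of S are illustrative; the inversion helper shown earlier covers the opposite direction.

```python
import numpy as np

def offline_beta(y_b1, y_b2, S=20):
    """y_b1, y_b2: boundary map heights at the LM coordinates, shape (M,).
    Returns one (b0, b1) pair per transition, modeling y(i+1) = b0 + b1*y(i)."""
    alphas = np.linspace(0.0, 1.0, S)
    samples = np.array([(1 - a) * y_b1 + a * y_b2 for a in alphas])  # (S, M)
    betas = []
    for i in range(samples.shape[1] - 1):
        X = np.column_stack([np.ones(S), samples[:, i]])
        b0, b1 = np.linalg.lstsq(X, samples[:, i + 1], rcond=None)[0]
        betas.append((b0, b1))
    return betas
```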
  • 5 - SIMULATION
• The purpose of the simulations described below is to explain how the algorithms work, to detect weaknesses and strengths of different algorithms, and to develop general strategies for designing adaptive maps. The chapter consists of three simulation studies, where each Example consists of a few simulations. Example 1 demonstrates some fundamentals of the nature of LPRMs. Example 2 evaluates and demonstrates the nature of different adaptive strategies on real world engine maps, and Example 3 evaluates adaptive strategies on drive cycles on real engine maps with different levels of noise.
  • 5.1 Simulation - Example 1
• This example will demonstrate the capacity of LPRM to actualize different patterns. The Example is implemented on a LLNFM with six LLMs, and the update is done with RLS. Three simulations are done, where the second boundary map yboundary2 is different in each simulation; otherwise they all share the same properties. The simulations were done by placing 20 equal measurement samples (x = 0.2, y = 14) close to the second boundary map and within LM 2. Thus all changes outside LM 2 are done by actualization. Figure 8a shows the common initial situation for all three simulations, with yboundary1 = sin(2πx)+10 and the map before the simulation; the estimated start map is indicated with a full line and the boundary map by a dash-dotted line.
• Figure 8b illustrates the end of the first simulation, where the second boundary map differs from the first by a larger bias term: yboundary2 = sin(2πx)+13. The result of the second simulation is given in Figure 8c, where the sine function in the second boundary map was multiplied by a larger factor: yboundary2 = 4sin(2πx)+10. The last simulation, in Figure 8d, combines the two preceding ones, with both a larger bias term and a larger factor on the sine function: yboundary2 = 3sin(2πx)+11. In Figures 8b-d the estimated map is indicated with a full line, the boundary maps by dash-dotted lines, the artificial samples by circles (o), and the measurement sample by an asterisk (∗).
  • Figures 8b-c contain two boundary maps, where the relatively lower amplitude map is referred to as boundary map 1 and the higher amplitude map is referred to as boundary map 2.
• The simulations above show that the LPRM can actualize patterns which estimate the initial maps with good accuracy. Bear in mind that all measurement samples in the simulations had the same value and coordinate. The ability to represent different patterns can be understood by remembering that each local pattern regression model has its own pair of parameter values, rake and bias. Note also that the values of the artificial samples can either decrease or increase when the measurement samples increase; see Figures 8b and 8c at input values larger than 0.5, where they increase in Figure 8b and decrease in Figure 8c. In Section 4 it was mentioned that the actualization should only change the value of the LMs, i.e. the appearance around the coordinate of the LMs should not be changed by the actualization. In the case of LLNFM the rake parameter should not be changed by the actualization. This is clear in Figure 8c, where the actualized LLMs have preserved their rake parameter from the start map and only the bias parameter has changed; compare the rake of the map around the artificial samples with boundary map 1.
  • 5.2 Simulation - Example 2
• The simulations in this example are done on engine maps from the control system of an IC engine. The original maps were two-dimensional and gave the control system an estimated value of volumetric efficiency from a given engine speed and intake manifold pressure. Here they are projected onto one-dimensional maps by giving the manifold pressure a fixed value. The goal of this simulation is to evaluate how different actualization algorithms perform and how they work.
• The simulations need a drive cycle, which should model how the measured engine speed could vary in a real world drive cycle. For the drive cycle to be appropriate it should fulfill some requirements. Firstly, engine speed has inertia, so the driving cycle should incorporate some dynamics. Secondly, the excitation should hover around an engine speed which is common during normal driving and, for a short time, excite a higher engine speed, corresponding to an acceleration phase. Thirdly, more than one driving cycle should be generated, so that possible flaws can be detected.
• The drive cycle model is realized by a moving average (MA) time series model, given in equation (55); the engine speed is accordingly given by equation (56). The MA-process is driven by white, normally distributed noise v(k) with variance 1. The model is designed to be simple while fulfilling the above-mentioned requirements. The process reaches the higher engine speed by adding a constant to the input noise during a short time in the drive cycle. The drive cycle ends by disengaging the MA-process and giving the input a stable, low engine speed, so that all cycles end in the same position, for better visual comparison.
    $z_{MA}(k) = \sum_{i=1}^{6} (1.1 - 0.1\,i)\, z_{MA}(k-i) + 0.2\, v(k), \qquad z_{MA}(k) = 0,\ k < 0 \qquad (55)$
    $x(k) = 100\, z_{MA}(k) + 2200 \qquad (56)$
• The driving cycle consists of 500 measurement samples, and between samples 50 and 70 the MA-process is fed an additional constant in its input noise, resulting in a higher engine speed. An example of a realization of the driving cycle is given in Figure 9, and Figure 10 shows the histogram of that realization with 40 bins, which gives an approximate density function of the drive cycle.
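• For concreteness, a sketch of a drive cycle generator. Note that the recursion as printed in equation (55) reads as autoregressive, which would be unstable with these coefficients; since the text describes a moving average process driven by white noise, the sketch below uses the MA reading with the same weights. The offset during the acceleration phase and the fixed tail speed are assumptions for illustration.

```python
import numpy as np

def drive_cycle(n=500, rng=None):
    rng = rng or np.random.default_rng(0)
    c = [1.1 - 0.1 * i for i in range(1, 7)]      # weights from equation (55)
    v = rng.standard_normal(n)                    # unit-variance white noise
    v[50:71] += 3.0                               # acceleration phase (assumed offset)
    z = np.zeros(n)
    for k in range(n):
        z[k] = 0.2 * v[k] + sum(ci * v[k - i]
                                for i, ci in enumerate(c, start=1) if k - i >= 0)
    x = 100.0 * z + 2200.0                        # engine speed, equation (56)
    x[-20:] = 1800.0                              # assumed: MA disengaged, low stable speed
    return x
```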
• The following simulations compare the TRT with the LPRM (with re-estimation of the regression models). No measurement noise is added, and DA is used as update algorithm. Two different maps are tested. The first map, "Map 1", is a perfect match with the boundary maps used as a priori information, as indicated in Figures 11a-d and 12a-d. The other map, "Map 2", is an alteration of Map 1 which does not fully agree with the boundary maps, as indicated in Figures 13a-d and 14a-d.
• During the first 300 samples the real map, from which the samples are generated, rises from a level close to the lower boundary map to a level close to the higher boundary map. The map then decreases to its initial position during the remaining 200 samples. The movement of Map 1 is done by linear interpolation between the initial maps, using 300 equidistant steps up and 200 equidistant steps down. The movement of Map 2 is done with the same method, except that Map 2 is altered from Map 1 by the addition of a sine function. The initial estimated map has the same values as the lower boundary map. The parameter settings of the simulation are given in the Appendix.
• Figures 11a-d to 14a-d show snapshots of the simulations, taken at the first sample k=1, during the short period of higher engine speed at k=60, at the highest point of the map at k=300, and at the last sample k=500.
• Figures 11a-d show snapshots at samples k=1, k=60, k=300, and k=500 of the simulation of Map 1 with TRT. Figures 12a-d show the corresponding snapshots for Map 1 with LPRM, Figures 13a-d for Map 2 with TRT, and Figures 14a-d for Map 2 with LPRM. In all of Figures 11-14 the estimated map is indicated by a full line, the measured map by a dashed line (---), the boundary maps by dash-dotted lines (-·-), artificial samples by circles (o), and the measurement sample by an asterisk (∗).
• From the figures above, which show snapshots of the LPRM, it can be seen that artificial samples are not generated in every possible LM in each snapshot. This is due to the parameter Tact = 10, which hinders actualization past LMs that have received measurement samples within the last 10 incoming samples.
• Figures 14a-d give a good illustration of the effects of re-estimation of the regression models. By comparing the actualization patterns of Figures 14a and 14d, it is clear that the regression models have been re-estimated. They also show that re-estimation has only been done in LMs within the range of engine speeds in the drive cycle. Hence no re-estimation has been done in LMs located at engine speeds higher than 4000 or lower than 1700, which results in a preserved actualization pattern from the boundary maps.
• Another noteworthy phenomenon can be found when comparing Figures 14b, c, and d. In Figure 14b the regression models of the higher engine speeds are re-estimated, which is verified by the new actualization pattern in Figure 14d. But the new actualization pattern is not notably present in Figure 14c, which lies between Figures 14b and 14d in time. This can be explained by the fact that the re-estimation was done by replacing sample pairs at the lower end of the height variation interval of the map, i.e. close to the lower boundary map, whereas the sample pairs closer to the upper boundary map are preserved. To achieve adaptation of the actualization pattern close to the upper boundary map, some sample pairs in this region need to be replaced as well. This has happened in the region of normal engine speeds around 2200, where the actualization pattern lies between the upper boundary map and the real map; see Figure 14c. Although no artificial samples are generated in this snapshot, this conclusion can be drawn from knowing that no measurement samples have been received at low engine speeds (see the drive cycle); that is, only artificial samples have been received at these low engine speeds.
    Table 5.1. Average error sum over five simulations
    Actualization                 | Map 1 | Map 2
    No actualization              | 11640 | 16180
    TRT                           | 10720 | 15380
    LPRM - without re-estimation  |  1698 |  8501
    LPRM - with re-estimation     |  1985 |  7186
• The sum of the errors between the real map and the estimated map in 200 equidistant points over the map's total range, measured after each received sample, is used as a performance measure. The numbers in Table 5.1 show the average of this error sum over five simulations in each given example. In addition to the examples depicted in Figures 11a-d to 14a-d, Table 5.1 also gives the average error sum for the case of no actualization and for the LPRM with no re-estimation allowed.
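• As a sketch (the error norm is assumed here to be the absolute error; the text does not state it), the measure can be computed per received sample and accumulated over a simulation:

```python
import numpy as np

def error_sum(real_map, est_map, x_min, x_max, n_points=200):
    """Sum of |real - estimated| over 200 equidistant points of the map range;
    accumulated over all received samples this matches the totals in Table 5.1."""
    xs = np.linspace(x_min, x_max, n_points)
    return float(np.sum(np.abs(real_map(xs) - est_map(xs))))
```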
• The performance measure used here is somewhat blunt, because it does not weight the errors by the frequency with which they are read: the map is read more frequently at normal cruising speeds than at e.g. extremely high engine speeds. A larger radius in the TRT than used in the simulations above gave a slightly smaller error sum, but it also resulted in more significant destructive learning in the range of normal cruising speeds. Consequently the radius of the tent was kept smaller than its optimum with respect to the sum of errors.
• Despite the deviation of Map 2 from the boundary maps, the TRT was outperformed by the LPRM both with and without re-estimation. Comparing the performance of the LPRM on the two different maps, it is clear that the quality of the a priori information is crucial for the performance of the adaptation. Furthermore, as expected, allowing re-estimation of the regression models gave a lower error sum when the a priori information is poor, but a slightly higher one in the case of perfect a priori information. It is clear that the incorporated a priori information gives an edge to the LPRM, and that this holds for somewhat poor a priori information as well.
  • 5.3 Simulation - Example 3
• This simulation Example compares adaptive strategies at different levels of measurement noise, on the drive cycle from the previous Example and on both maps from the same Example. All the actualization methods in Table 5.1 are evaluated with both RLS and DA as update algorithms; each combination is simulated five times, and from that an average error sum is formed. No spatial filtering was done in the simulations. The noise is white and normally distributed, with three different values of standard deviation: low (σn = 0.01), high (σn = 0.06), and very high (σn = 0.15). Compare the standard deviations with the values of the maps in Figures 11a-d to 14a-d; the noise level should also be compared with the change rate of the map and the variation interval of the map. Tables 5.2-5.3 show the results of the simulations on the two maps. For parameter settings, see the Appendix.
    Table 5.2. Average error sum over five simulations, Map 1
    Actualization                | σn=0.01 | σn=0.06 | σn=0.15
    No actualization (DA)        |  11900  |  12400  |  13760
    No actualization (RLS)       |  15070  |  15120  |  15380
    TRT (DA)                     |  10580  |  10780  |  12020
    TRT (RLS)                    |  11120  |  11430  |  11870
    LPRM (DA) re-estimation      |   2238  |   4760  |  13110
    LPRM (RLS) re-estimation     |   5707  |   6026  |   6091
    LPRM (DA) no re-estimation   |   1874  |   4558  |  12120
    LPRM (RLS) no re-estimation  |   4430  |   4644  |   5368
    Table 5.3. Average error sum over five simulations, Map 2
    Actualization                | σn=0.01 | σn=0.06 | σn=0.15
    No actualization (DA)        |  16130  |  16680  |  18230
    No actualization (RLS)       |  18310  |  18640  |  18620
    TRT (DA)                     |  15340  |  16080  |  17530
    TRT (RLS)                    |  15110  |  15590  |  16240
    LPRM (DA) re-estimation      |   7270  |   8279  |  15750
    LPRM (RLS) re-estimation     |   9413  |   9492  |   1005
    LPRM (DA) no re-estimation   |   8589  |   9607  |  15295
    LPRM (RLS) no re-estimation  |   8965  |   9304  |   9366
• No restrictions on the beta-parameters were set, and the parameters (ẏmax, εold, outlierLimit) used to avoid incorrect re-estimation of the regression models were given values with high acceptance levels on the change rate of the map. With more restrictive parameter settings, the simulations with re-estimation would have performed better. Furthermore, the number of sample pairs used for estimation of the regression models was set to N=5 in the simulations; a higher number should give better performance with re-estimation at high noise levels.
• The change rate in the simulations is probably much higher than in real world applications with slow variations: the map sweeps the interval between the two boundary maps in just 500 steps. This situation should favor DA, with its high plasticity. That is verified in the simulations, where DA performs surprisingly well compared to RLS with the forgetting factor set to the relatively low value 0.9. Only at very high noise levels does RLS typically perform better than DA. The simulations also showed that high spatial frequencies can appear with RLS if the initial variance elements are large; they should therefore be given relatively small values.
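• As a rough worked figure: with λ = 0.9 the RLS has an effective memory of about 1/(1-λ) = 10 samples, so it lags a map that sweeps its whole variation interval in only 500 steps, whereas DA follows each sample immediately; this is consistent with DA winning in Tables 5.2-5.3 except at the very high noise level.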
  • Figure 15 shows a schematic illustration of a vehicle comprising a combustion engine (E), a driveline (D) and an electronic control unit (ECU) for controlling said combustion engine (E) and said driveline (D). Sensors (10, 11, 12) for measuring at least one engine or driveline parameter are connected to the electronic control unit (ECU) in order to provide measured samples. The electronic control unit (ECU) is provided with maps of measured or estimated samples for at least one of the said engine or driveline parameters. The maps are adapted using the method described above.
[Images imgb0098-imgb0101 in the source, not reproduced]
  • NOTATIONS Abbreviated Terms
  • LLNFM
    Local linear neuro-fuzzy models
    LM
    General local models
    LLM
    Local linear models in the LLNFM representation
    LPRM
    Local pattern regression models
    LOLIMOT
    Local linear model tree
    DA
    Direct adjustment
    NLMS
    Normalized least mean squares
    RLS
    Recursive least squares
    LC
    Local correction
    TRT
    Tent roof tensioning
    Constants, Functions, Sets, and Variables
x ∈ X {X ⊂ ℝ^D}  Input space (independent variable)
    x(k)  The independent variable at time k
x (i)  Coordinate of LM i
    x = ξ  x-value at the operating point
y ∈ Y {Y ⊂ ℝ}  Output space
    y(x) : X → Y  Measured map
    yu (x) : X → Y  Noise free measured map
    (x) : X → Y  Estimated map
    k (x)  Map at time k
    (i)  Map value in LM i
y_a,k^(i)  Artificial sample placed in LM i at time k
    k   Stored map and variance/covariance matrices stored in memory.
    n  Additive measurement noise
    ci   Coordinate of node i (one-dimensional look-up table)
    c 1,i  Grid line in dimension 1 at location i (two-dimensional look-up table)
    (c 1, i c 2, j)  Coordinate of node (i,j) (two-dimensional look-up table)
    θ i   Height parameter of node i (one-dimensional look-up table)
    θ i,j   Height parameter of node (i,j) (two-dimensional look-up table)
    θ i0  Bias parameter of LLM i (LLNFM)
    θiD   Rake parameter in dimension D of LLM i (LLNFM)
Φi(x, θi^nl)  Basis function in the basis function framework
    Li (x,θi )  Linear function in the basis function framework
    M  Number of nodes in a one-dimensional map or a LLNFM of arbitrary dimension
    M 1 ×M 2  Number of nodes in a two-dimensional map
    S  Number of samples for initial optimization
    P(k)  Covariance matrix in the RLS algorithm at time k
    w (i,i+1)   Weight used in the RLS algorithm when local model i is updated with an artificial sample generated from model i+1
    K (i)  Number of measurement samples since LM received a measurement
    β 0 (i,i+1)  Regression model parameter (bias), from model i to i+1
    β 1 (i,i+1)  Regression model parameter (rake), from model i to i+1
    Y (i+1)  Estimation of bias level in i+1 from regression model
    Q  Weight matrix
    δ   Determines the convergence rate of the weights
    w limit  Minimum weight for continued actualization
    Tact   Time limit of how early it is permitted to actualize since the latest received measurement in the current LM
    Tcollect   Maximum time limit between two collected samples for reestimating regression models
ẏavg   Average change rate of the map
ẏmax  Highest tolerated change rate
    ε old   Highest tolerated residual between updated LM and new measurement sample
    β 0max 1max   Highest allowed values on the regression parameters
    β 0min 1min   Lowest allowed values on the regression parameters
outlierLimit  Outlier limit for regression model re-estimation
    N  Number of stored samples used for estimation of the regression models

Claims (19)

  1. Method for adapting a combustion engine or a vehicle driveline control map, which map is expressed by a basis function equation (Eq. 1), characterized by the method involving the steps of:
    • receiving a measured sample of an engine parameter at an operating point,
• updating local models adjacent to the operating point using an update algorithm,
    • generating artificial samples in coordinates of local models located remote from the operating point and the said adjacent local models, and
    • updating the said remote local models using an update algorithm.
2. Method according to claim 1, characterized by generating artificial samples in coordinates of local models in a map expressed by the equation
    $\hat{y}(x) = \sum_{i=1}^{M} L_i(x, \theta_i)\, \Phi_i(x, \theta_i^{nl})$
    where
    θi is a height parameter of node i,
    Li(x, θi) is a linear function in the basis function framework,
    Φi(x, θi^nl) is a basis function in the basis function framework, where (nl) indicates that the function is non-linear, and
    M is the number of nodes in a one-dimensional map or a LLNFM of arbitrary dimension.
  3. Method according to claim 1, characterized by generating artificial samples using an actualizing algorithm.
  4. Method according to claim 2 or 3, characterized by generating artificial samples using a local pattern regression model (LPRM).
  5. Method according to claim 4, characterized by updating the local models in a map represented by a look-up table.
  6. Method according to claim 5, characterized by updating the local models using a recursive least squares (RLS) algorithm.
  7. Method according to claim 5, characterized by updating the local models using a direct adjustment (DA) algorithm.
8. Method according to claim 5, characterized by updating the local models using a least mean squares (LMS) or a normalized least mean squares (NLMS) algorithm.
9. Method according to claim 4, characterized by updating the local models in a map represented by a local linear neuro-fuzzy model (LLNFM).
  10. Method according to claim 9, characterized by updating the local models using a recursive least squares (RLS) algorithm.
11. Method according to claim 9, characterized by updating the local models using a least mean squares (LMS) or a normalized least mean squares (NLMS) algorithm.
  12. Method according to claim 2 or 3, characterized by generating artificial samples using a tent roof tensioning (TRT) algorithm.
  13. Method according to claim 12, characterized by updating the local models in a map represented by a local linear neuro-fuzzy model (LLNFM).
14. Method according to claim 13, characterized by updating the local models using a recursive least squares (RLS) algorithm.
15. Method according to claim 13, characterized by updating the local models using a least mean squares (LMS) or a normalized least mean squares (NLMS) algorithm.
  16. Method according to claim 12, characterized by updating the local models in a map represented by a look-up table.
  17. Method according to claim 16, characterized by updating the local models using a recursive least squares (RLS) algorithm.
18. Method according to claim 16, characterized by updating the local models using a least mean squares (LMS) or a normalized least mean squares (NLMS) algorithm.
  19. Vehicle comprising an electronic control unit (ECU) for controlling a combustion engine (E) or a vehicle driveline and sensors (10, 11, 12) for measuring at least one engine or driveline related parameter, where the electronic control unit (ECU) is provided with a map of measured or estimated samples for the said at least one engine or driveline related parameter, characterized in that the map is adapted using the method of claim 1.
EP20070104811 2007-03-23 2007-03-23 A method for adapting a combustion engine control map Active EP1972767B1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP20070104811 EP1972767B1 (en) 2007-03-23 2007-03-23 A method for adapting a combustion engine control map
DE200760012825 DE602007012825D1 (en) 2007-03-23 2007-03-23 Method for adapting a control card for an internal combustion engine

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
EP20070104811 EP1972767B1 (en) 2007-03-23 2007-03-23 A method for adapting a combustion engine control map

Publications (2)

Publication Number Publication Date
EP1972767A1 true EP1972767A1 (en) 2008-09-24
EP1972767B1 EP1972767B1 (en) 2011-03-02

Family

ID=38344780

Family Applications (1)

Application Number Title Priority Date Filing Date
EP20070104811 Active EP1972767B1 (en) 2007-03-23 2007-03-23 A method for adapting a combustion engine control map

Country Status (2)

Country Link
EP (1) EP1972767B1 (en)
DE (1) DE602007012825D1 (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE10140376A1 (en) * 2000-08-26 2002-03-14 Ford Global Tech Inc Calibration process for direct injection stratified charge engine relates estimated fuel to torque in iterative process
EP1772611A1 (en) * 2005-10-05 2007-04-11 Delphi Technologies, Inc. Controller and control method for switching between engine operating modes

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2588734A1 (en) * 2010-07-02 2013-05-08 Robert Bosch GmbH Method for determining a correction characteristic curve
CN108999709A (en) * 2017-06-07 2018-12-14 罗伯特·博世有限公司 Method for calculating the aeration quantity of internal combustion engine
CN108999709B (en) * 2017-06-07 2022-12-30 罗伯特·博世有限公司 Method for calculating the charge of an internal combustion engine
WO2020216471A1 (en) * 2019-04-26 2020-10-29 Perkins Engines Company Limited Internal combustion engine controller
WO2020216470A1 (en) * 2019-04-26 2020-10-29 Perkins Engines Company Limited Engine control system
CN113728159A (en) * 2019-04-26 2021-11-30 珀金斯发动机有限公司 Engine control system
GB2585178B (en) * 2019-04-26 2022-04-06 Perkins Engines Co Ltd Engine control system
US11898513B2 (en) 2019-04-26 2024-02-13 Perkins Engines Company Limited Internal combustion engine controller
US11939931B2 (en) 2019-04-26 2024-03-26 Perkins Engines Company Limited Engine control system
CN111368994A (en) * 2020-02-12 2020-07-03 平安科技(深圳)有限公司 Node parameter updating method based on neural network model and related equipment
CN111368994B (en) * 2020-02-12 2024-05-28 平安科技(深圳)有限公司 Node parameter updating method and related equipment based on neural network model
CN114167164A (en) * 2021-11-11 2022-03-11 潍柴动力股份有限公司 High-side power supply detection method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
DE602007012825D1 (en) 2011-04-14
EP1972767B1 (en) 2011-03-02

Similar Documents

Publication Publication Date Title
EP1972767B1 (en) A method for adapting a combustion engine control map
Shang et al. A note on the extended Rosenbrock function
US6668214B2 (en) Control system for throttle valve actuating device
US5070846A (en) Method for estimating and correcting bias errors in a software air meter
US20030158709A1 (en) Method and apparatus for parameter estimation, parameter estimation control and learning control
US8880321B2 (en) Adaptive air charge estimation based on support vector regression
Zhang et al. A survey on online learning and optimization for spark advance control of SI engines
CN111324038A (en) Hysteresis modeling and end-to-end compensation method based on gating cycle unit
Westenbroek et al. Adaptive control for linearizable systems using on-policy reinforcement learning
US20030229408A1 (en) Control system for plant
CN111256727B (en) Method for improving accuracy of odometer based on Augmented EKF
US20040049296A1 (en) Control system for plant
Wischnewski et al. Real-time learning of non-Gaussian uncertainty models for autonomous racing
CN109101759B (en) Parameter identification method based on forward and reverse response surface method
US6922626B2 (en) Control apparatus for exhaust gas recirculation valve
Zhao et al. Self-organizing approximation-based control for higher order systems
US20030051705A1 (en) Control system for throttle valve actuating device
US20220276621A1 (en) Control device for plant and controlling method of the same
CN111767981B (en) Approximate calculation method of Mish activation function
CN110987449A (en) Electronic throttle opening estimation method and system based on Kalman filtering
CN114355976A (en) Method for controlling unmanned aerial vehicle to complete trajectory tracking under wind disturbance based on learning
Mirman et al. Lattice theory and the consumer's problem
US20030062024A1 (en) Control system for throttle valve actuating device
CN113939775B (en) Method and device for determining a regulating strategy for a technical system
US20040117043A1 (en) Optimized calibration data based methods for parallel digital feedback, and digital automation controls

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC MT NL PL PT RO SE SI SK TR

AX Request for extension of the european patent

Extension state: AL BA HR MK RS

17P Request for examination filed

Effective date: 20090320

17Q First examination report despatched

Effective date: 20090417

AKX Designation fees paid

Designated state(s): DE GB SE

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): DE GB SE

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

REF Corresponds to:

Ref document number: 602007012825

Country of ref document: DE

Date of ref document: 20110414

Kind code of ref document: P

REG Reference to a national code

Ref country code: DE

Ref legal event code: R096

Ref document number: 602007012825

Country of ref document: DE

Effective date: 20110414

REG Reference to a national code

Ref country code: SE

Ref legal event code: TRGR

RAP2 Party data changed (patent owner data changed or rights of a patent transferred)

Owner name: VOLVO CAR CORPORATION

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

26N No opposition filed

Effective date: 20111205

REG Reference to a national code

Ref country code: DE

Ref legal event code: R097

Ref document number: 602007012825

Country of ref document: DE

Effective date: 20111205

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: GB

Payment date: 20190218

Year of fee payment: 17

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20200324

GBPC Gb: european patent ceased through non-payment of renewal fee

Effective date: 20200323

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GB

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20200323

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: DE

Payment date: 20230221

Year of fee payment: 17