US3805032A - Method for minimizing the effects of process disturbances on state estimators - Google Patents
- Publication number
- US3805032A (application US00233131A)
- Authority
- US
- United States
- Prior art keywords
- signal
- observations
- magnitude
- estimate
- state
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Lifetime
Classifications
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B13/00—Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion
- G05B13/02—Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric
- G05B13/0205—Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric not using a model or a simulator of the controlled system
- G05B13/026—Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric not using a model or a simulator of the controlled system using a predictor
Definitions
- Instruction 28 is a subroutine instruction which causes the computer to read the value of a counter TM which is incremented independently by a clock; the clock increments cell TM every second.
- The operation called for by instruction 31 uses a subroutine AISCAN to input a value E, external to the computer and connected to input channel 190, setting flag NP when the new value of E is available for use by the program.
- Instructions 51 through 58 set up the procedure for following the path through blocks 157 and 159 and then through either block 161 or block 163 of FIG. 5.
- Instructions 59 and 60 call for the calculations in blocks 158 and 162 of FIG. 5, respectively. Those calculations are followed by the calculations called for in blocks 166 and 168, as set forth in instructions 62 and 63, which determine the filtered values x̂1(k) and x̂2(k).
- Instruction 64 sets up the procedure for branching from block 171 to either instruction 65 or 67 for determining the slope of the ramp, â1. Those instructions are followed by instruction 69, which calls for the calculation of the estimated value of the ramp as set forth in block 172. Subsequently, instructions 71 through 76 update the storage terms as set forth in block 174.
- The program then uses subroutine DACON, which converts and outputs the variables P or PX, W(k) or WK, x̂(k) or X1K, and â1 or A1 in the form of voltages for control or display purposes, as called for by instruction 78.
- The systems described above for state estimation are useful in application to any of a number of processes, such as, for example, routine industrial processes; however, they are of particular advantage when used in connection with the estimation of a state of a power system.
- The state observation z, for example, may be the power flow over a particular tie line of the power system; it may be subject to random variations as a result of the normal connection and disconnection of loads in the power system, and it may also be subject to large and/or long term changes due to unexpected increases or decreases in the load in the power system.
- The state which is estimated, x̂, may be the power flow over that particular tie line being observed, with the objective being to eliminate the noise from the observed value without eliminating significant changes, as by use of an estimator of the type shown in FIGS. 2 and 3.
- The signal being applied from the process on line 14, namely ε, may be the computed area control error, that is, a frequency biased tie line deviation measurement as is normally used in power systems as an indication of the deviation of the area load on the system from the total area generation of the system.
- When the estimator 10 is a ramp estimator of the type described in connection with FIGS. 4 and 5, the system of FIG. 1b may be used, and in that arrangement the signal ε on line 14 is no longer the area control error as mentioned above, but is instead a comparison of the state observation z with an intermediate estimate x̂1.
- The arrangements of FIGS. 1c and 1d may likewise be used in connection with power systems in that they are similar to those of FIGS. 1a and 1b except that the zero centered error signal ε is obtained in a different way.
- The variation of the weighting factor in a state estimation calculation in accordance with the probability that the process disturbance is unexpected may be extended beyond the simple forms of estimators described above to the more complicated multi-variable state estimators of the type described, for example, in the previously referenced article by Atif S. Debs and Robert E. Larson, A Dynamic Estimator for Tracking the State of a Power System, IEEE Transactions on Power Apparatus and Systems, Sept.-Oct. 1970, Vol. 89, pp. 1670-1678.
- α is the average time between zero crossings of the first signal. 12. The method of claim 8 in which the computation of the present estimate of the process state is obtained by summing the previous estimate and a weighted difference between the present observation and a value corresponding to the previous estimate, and in which the change produced in the magnitude of the influence of the past observations is produced by changing said weighting in response to changes in said second signal so that the weighting is increased as said probability is increased.
- The method for automatically estimating a process state from state observations determined from sampled process measurements which are subject to unexpected variations, comprising the steps of:
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Artificial Intelligence (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Computation (AREA)
- Medical Informatics (AREA)
- Software Systems (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Automation & Control Theory (AREA)
- Testing Or Calibration Of Command Recording Devices (AREA)
Abstract
In estimation of a process state from observations based upon process measurements there is utilized a signal which is variable about zero and indicative of the magnitude and direction of change in a state observation resulting from process disturbances; then, depending upon a combined function of the magnitude and duration of that signal, there is produced another signal indicative of the probability that the process is being subjected to an unexpected disturbance. The influence of previous state observations on the estimate, or in other words, the weighting or time constant involved in the calculation of the state estimate, is varied as a function of that probability so that as the probability increases the weighting increases (the time constant decreases), thus giving the previous observations of the process state decreased value in the determination of the present state estimate.
Description
United States Patent [19]    [11] 3,805,032
Ross    [45] Apr. 16, 1974
[54] METHOD FOR MINIMIZING THE EFFECTS OF PROCESS DISTURBANCES ON STATE ESTIMATORS
[75] Inventor: Charles W. Ross, Hatboro, Pa.
[73] Assignee: Leeds & Northrup Company, Philadelphia, Pa.
[22] Filed: Mar. 9, 1972
[21] Appl. No.: 233,131
[52] U.S. Cl. 235/150.1, 235/151.1, 444/1
[51] Int. Cl. G05b 13/02
[58] Field of Search 235/150.1, 150, 151, 151.1
OTHER PUBLICATIONS
Gray, Jr., Completely Adaptive Control of Processing Plants, Instr. Soc. of America, Proceedings 1st J.I. Symp., 1965, Montreal, p. 136-148.
Woodley: Programming Aspects of Direct Dig. Control. Instr. Soc. of Am. Proceed. 1st J.I. Symp. 1965 Montreal, p. 149-159.
Perkins et al., Deterministic Param. Estimation for Near Opt. Feedback Contr., Automatica, Vol. 7, p. 439-444, 1971; 1st presentation June 1970.
Morrison, Norman: Introduction to Sequential Smoothing and Prediction (textbook), McGraw-Hill, Inc., 1969; pages 8 and 497 of interest.
Perreault: Ground Testing of Inertial Navigators to Improve Dynamic Performance, IEEE Transactions on Aerospace and Electronic Systems, Vol. AES-5, No. 3, May 1969, pages 457-462.
Primary Examiner-Felix D. Gruber
Attorney, Agent, or Firm-William G. Miller, Jr.; Raymond F. Mackay
[57] ABSTRACT
In estimation of a process state from observations based upon process measurements there is utilized a signal which is variable about zero and indicative of the magnitude and direction of change in a state observation resulting from process disturbances; then, depending upon a combined function of the magnitude and duration of that signal, there is produced another signal indicative of the probability that the process is being subjected to an unexpected disturbance. The influence of previous state observations on the estimate, or in other words, the weighting or time constant involved in the calculation of the state estimate, is varied as a function of that probability so that as the probability increases the weighting increases (the time constant decreases), thus giving the previous observations of the process state decreased value in the determination of the present state estimate.
METHOD FOR MINIMIZING THE EFFECTS OF PROCESS DISTURBANCES ON STATE ESTIMATORS

BACKGROUND OF THE INVENTION

This invention relates to the art of estimating a process state and more particularly to a method for minimizing the introduction of inaccuracies into the estimations as a result of the disturbance of the process in a manner or at a time which is unexpected or unpredictable. A process state may, for the purpose of this description, be considered as that totality of variables whose values completely define a characteristic of the process or system. The art of estimating may be considered as including the arts of filtering, smoothing and predicting as well as that art more specifically identified as estimating or state estimation. The grouping of these arts under the term estimating may be seen to have a logical basis considering the fact that filtering and smoothing are essentially the estimation of the average value of the signal being filtered or smoothed, while predicting is a form of estimating which utilizes a model for the expected behavior of the quantity to be predicted. State estimation itself is a data processing technique that computes a process state from measurements of process or system variables, using the mathematical model of the system or process and prior knowledge of its various inputs and outputs so that the output of a state estimator approaches the true state of the process. In this connection reference should be had to the following:
Brown, R.G., Smoothing, Forecasting and Prediction of Discrete Time Series, Prentice-Hall, Inc., Englewood Cliffs, N.J., 1962, and
Toyoda, J., Chen, M.S., Inoue, Y., An Application of State Estimation to Short-Term Load Forecasting, Part I: Forecasting Modeling, IEEE Transactions on Power Apparatus and Systems, Vol. PAS-89, No. 7, pp. 1678-1682, 1970.
Debs, Atif S., Larson, R.E., A Dynamic Estimator for Tracking the State of a Power System, IEEE Transactions on Power Apparatus and Systems, Vol. 89, pp. 1670-1678, Sept.-Oct. 1970.
If it is assumed that an adequate model is available for estimating the state variables, then a major problem requiring solution is the determination of an optimum weighting factor for continually updating from an initial value the estimate of the state without producing gross inaccuracies when the process fails to follow the model because of an unexpected disturbance to the process itself or to the measurements of the process variables. For the purpose of this description these disturbances will all be known as process disturbances.

Several methods for solving the problem mentioned above have been proposed by others. Those methods include a method which is experimental in nature and a method which involves modification of the covariance of the state estimation. Such methods have many drawbacks, including the difficulty of application of the covariance method if the noise in the process or the measurement cannot be assumed to be white. The present invention overcomes the drawbacks of the prior art, as will be evident from the description of the preferred embodiments.
SUMMARY OF THE INVENTION

A method and means for carrying out that method are provided for minimizing the error in an estimated state of a process when that error results from the introduction of an unexpected process disturbance. The method includes automatically computing an estimate of the process state in response to both present and past values of the state observation and the production of a first signal variable about zero in dependence upon the magnitude and direction of variations in the state observations due to the process disturbances. There is produced, in response to a joint function of the magnitude and duration of that first signal, a second signal which is indicative of the probability that the process is being subjected to an unexpected disturbance. A high value of that probability is, of course, statistically rare in truly random noise. There is then produced a change in the influence of past observations on the computation of the estimated state so that the effect on the estimate of the state due to previous values of the process variable is diminished as the probability that the process is being subjected to an unexpected disturbance increases.
BRIEF DESCRIPTION OF THE DRAWINGS

In the drawings, in which like reference characters refer to like elements:
FIGS. 1a-1d are block diagrams showing alternate arrangements for obtaining a variation in the time constant applied to the computations of the state estimate.

FIG. 2 is a circuit diagram, partially in block form, showing the utilization of the arrangement of FIG. 1c in providing an estimate of a measured variable of the process wherein the estimate is a filtered or smoothed value for a process measurement and the smoothing is accomplished by a first order lag.
FIG. 3 is a flow chart showing the several computation steps which must be carried out by a digital computer to compute the variable weighting factor applied to the computation of the first order lag.
FIG. 4 is a circuit diagram for the estimator which would be utilized in one of the arrangements of FIGS. 1a-1d to provide an estimate of a process measurement which is undergoing a ramp change, with the slope of the ramp also being estimated.

FIG. 5 is a flow chart showing the computations to be made in a digital computer for computing both the estimated value of the variable undergoing the ramp change and the slope of the ramp.
DESCRIPTION OF THE PREFERRED EMBODIMENTS

In FIG. 1a there is shown in block diagram an arrangement for computing the state estimate x̂ from the state observations z by means of estimator 10. The state observations z are obtained on line 11 by measurement of certain variables of the process 12, namely, those variables required to determine, by virtue of the model forming estimator 10, the state estimate x̂ on output line 13.
The process 12 may be considered as being subjected to disturbances which are of an unexpected character or which appear at an unexpected time. Those disturbances may enter the process at its input, its output, or at any point in the process itself, or they may arise as the result of noise introduced into the instrumentation system involved in obtaining those state observations from process measurements. Changes in the set point of a variable of a process, when those changes are arbitrary in nature, may also be considered as a process disturbance of the type with which this invention is concerned.
The disturbances and the measured process quantities which determine the state observations are assumed to include one or more arbitrary characteristics such as shifts in steady state values, steps, sine waves, square waves, ramps, or dynamically shaped random noise. In FIG. 1a there is shown on line 14 a first signal, ε, which may, for example, be an error signal indicative of the deviation of a measured process variable from its desired value and hence variable about zero in magnitude and direction. That signal is supplied by way of line 14 to a computer 16, generally indicated as an error adaptive control computer (EACC), which is later explained in detail and which is capable of producing as an output on line 18 a second signal P, indicative of the probability that the process 12 is being subjected to an unexpected disturbance of the type described above. The probability calculation made by the computer 16 is shown as being a function of the magnitude of the signal ε and time, t.

The signal on line 18 is supplied to a function generator 20 which is designed to produce as an output on line 22 a signal 1/T, which represents the time function of the relationship between the observed states z and the estimate x̂ being calculated in the state estimator 10. It will be noted that the function generator 20 automatically produces the signal 1/T as a function of the probability P.

The time function 1/T is supplied as a signal on line 22 to the estimator 10 to modify the relationship between the state estimate x̂ and the state observations z when an unexpected process disturbance occurs, as detected by a variation in the error signal ε.
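To make the FIG. 1a signal flow concrete, the sketch below closes the loop for the simplest case in which estimator 10 is a sampled first order lag (as in FIGS. 2 and 3). The probability computation and the function generator are deliberately crude stand-ins, and every numerical value, variable name and test input is an illustrative assumption rather than something taken from the patent.

```fortran
! Illustrative sketch only: the FIG. 1a loop (observation -> error signal ->
! probability -> 1/T -> estimator) with trivial stand-ins for blocks 16 and 20.
program fig1a_sketch
  implicit none
  real, parameter :: ts    = 1.0     ! sample period (s), assumed
  real, parameter :: tslow = 10.0    ! slowest lag time constant T (s), assumed
  real, parameter :: tfast = 1.0     ! fastest lag time constant T (s), assumed
  real :: z, xhat, eps, p, invt
  integer :: k
  xhat = 0.0
  do k = 1, 200
    ! test observation: small noise plus an unexpected step at k = 100
    z = merge(5.0, 0.0, k >= 100) + 0.1*sin(0.7*real(k))
    eps  = z - xhat                               ! zero centred error (line 14)
    p    = simple_probability(eps)                ! stand-in for the EACC 16
    invt = 1.0/tslow + p*(1.0/tfast - 1.0/tslow)  ! stand-in for block 20
    xhat = xhat + ts*invt*(z - xhat)              ! first order lag, adjustable 1/T
    print '(i4,3f10.4)', k, z, p, xhat
  end do
contains
  function simple_probability(e) result(pr)
    ! Crude stand-in: probability grows with error magnitude only; the actual
    ! EACC also requires the error to persist for a set duration.
    real, intent(in) :: e
    real :: pr
    pr = min(abs(e)/2.0, 1.0)
  end function simple_probability
end program fig1a_sketch
```

With such a loop the estimate ignores the small fluctuations while P is low, and tracks the step quickly once P rises.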
Two of the various forms which the state estimator 10 may take are illustrated by the diagrams of FIGS. 2 and 4. Similarly, the characteristic of the function generator 20 will be illustrated and described more completely in connection with the description of FIG. 2, while the specific construction of one form of the computer 16 will be described in connection with FIG. 2.
FIG. 1b is a variation of the block diagram of FIG. 1a in which the signal ε is obtained by a different means. It will be noted that the signal ε should be what is known as a zero centered signal, or, in other words, it should be variable about zero in accordance with the polarity and magnitude of variations in the state observations z due to the process disturbance. In that connection, FIG. 1b as well as FIGS. 1c and 1d illustrate the various sources of the signal ε which may be satisfactorily used by the computer 16. In FIG. 1b the signal ε is computed as a difference between the state observations z, obtained from line 11 by way of line 15, and a state estimate x̂1 from line 17, where x̂1 represents an estimate which may be calculated as an intermediate calculation required in the process of computing the state estimate x̂, as, for example, the computation of x̂1 in FIG. 4.
In FIG. 1c the signal ε is determined by comparing the state observations z from line 15 with the state estimate x̂ obtained by way of line 19 from line 13. The comparison is made by means of comparator 26. This comparison will provide the necessary zero centered signal ε which will be indicative of the magnitude and polarity of the error in the state estimate x̂, where x̂ is an estimate of the process variable observed as z by a process measurement.
In FIG. 1d the signal ε is obtained as a part of the computation in the estimator 10. That signal may, for example, be a signal such as the slope estimate â1 of FIG. 4, as will be described subsequently.
From the various arrangements shown in FIGS. 1a-1d, it will be evident that there are a number of sources for the signal ε, and it is only necessary that that signal be a zero centered signal of magnitude and polarity indicative of the effect of a process disturbance on the magnitude and polarity of the estimate error.
In FIG. 2 there is shown an arrangement for computing the state estimate x̂ from the state observations z when it is desired that the estimated value of the state being observed should represent the value being observed with a reduction of fluctuations in the observed state due to any expected disturbance in the process, such as small, random, short duration changes in the process characteristic represented by the state being observed. For the purpose of filtering out those insignificant fluctuations in the observed state, the estimator 10 of FIG. 2 is shown as providing a first order lag whose time constant T is adjustable. Thus, the estimator includes an integrator 30 having a potentiometer 32 in the input circuit which is adjusted by servo 34 through the mechanical linkage 36. The integrator 30 also has another potentiometer 38 which is in the feedback circuit of the integrator 30 and is adjusted by mechanical linkage 36a so that potentiometers 32 and 38 have like values such that 1/T, the reciprocal of the time constant of the lag circuit, will correspond to the input to estimator 10 from line 22.

The estimated state x̂ is compared with the observed state z by comparator 26, shown as a differential amplifier, so that there is produced on line 14 a signal ε which is variable about zero in magnitude and direction in accordance with the magnitude and direction of the variations in the state observations in response to disturbances in the process 12. Those changes in the observed state which represent expected disturbances, such as noise, are to be filtered out by the circuit of the estimator 10 so that the estimate x̂ will more accurately represent the characteristic of the process 12 which is represented by the steady state portion of the state observation z.
If the signal ε does not exceed certain magnitudes for a duration sufficient to indicate that the change in z is the result of a process disturbance which is significant, it is desirable that the inverse of the time constant T of the first order lag circuit forming estimator 10 should be small in magnitude so that there will be a relatively long delay in the estimate x̂ resulting from changes in the value of z. This will reduce the inaccuracy of the estimate x̂ which might result from the estimate closely following a fluctuating observation responding to a noisy process measurement or a short term and/or small magnitude process disturbance. On the other hand, if ε is large or of long duration as a result, for example, of a large or persistent change in z, or both, the probability that the change is one which the estimate x̂ should follow is high, as the change is less likely to represent noise or small insignificant disturbances. It is more likely to represent a significant change in the characteristic being observed and hence one which is unexpected.

To provide a means for changing 1/T appropriately, FIG. 2 provides an EACC 16 for producing on line 18 a signal representative of the probability that the process disturbance is unexpected, and that signal is introduced into a function generator 20 to produce a signal 1/T as set forth above.
The EACC 16 is shown as being an analog device of the type more fully disclosed in my U.S. Pat. No. 3,419,772, with block 40 corresponding to the three section filter circuit 26 of that patent and the relays 42, 44 and 46 corresponding to relays 46, 56 and 66, respectively, of that patent. Thus, before relay 42 will be energized, the value of the signal ε must exceed the magnitude A1 - K ∫ ε dt for a predetermined duration T1, where A1 is a predetermined magnitude corresponding to the lowest level of the probability P and K is a constant. P, which is the signal on line 18, is directly related to the probability that the signal ε does result from a measurement indicating an unexpected disturbance in the process 12. Relay 44 is energized when the value of ε exceeds the magnitude A2 - K ∫ ε dt for a predetermined time T2, where A2 is greater than A1, and relay 46 is energized when the value of ε exceeds the magnitude A3 - K ∫ ε dt for a predetermined time T3, where A3 is greater than A2. The slidewire 48 has a tap 48a set to provide a potential on line 18 which represents the value of P related to a signal ε exceeding A1 - K ∫ ε dt for the period T1 and hence actuating relay 42 to connect tap 48a and line 18 by way of contact 42a. The slidewire 48 is supplied by a voltage source E, shown as a battery, and the tap 48a is adjusted to a lower potential point on the slidewire than is contact 48b, which provides the signal P when relay 44 is energized. Similarly, energization of relay 46 will provide a signal P equal to E.
From the above, it will be evident that the value of the signal P introduced as an input on line 18 to the function generator 20 will be any one of three discrete magnitudes, increasing in accordance with a joint function of the magnitude and duration of ε detected by the circuit of block 40.
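One way the three-level decision just described might be expressed in sampled form is sketched below. The thresholds A1 < A2 < A3, the required durations, the three output levels and the sign of the integral term are all assumptions made for illustration; the actual circuit is that of U.S. Pat. No. 3,419,772 and is not reproduced here.

```fortran
! Illustrative sketch of the three-level EACC decision (relays 42, 44, 46).
! All numbers, and the sign convention of the A - K*integral term, are assumed.
program eacc_sketch
  implicit none
  real :: eps(300)
  integer :: i
  do i = 1, 300
    ! test error signal: small noise, then a persistent offset after i = 100
    eps(i) = merge(1.2, 0.0, i > 100) + 0.2*sin(0.9*real(i))
  end do
  print *, 'P =', eacc_probability(eps, 1.0)
contains
  function eacc_probability(e, dt) result(p)
    real, intent(in) :: e(:), dt                   ! error history, sample interval (s)
    real, parameter :: a(3)    = [0.5, 1.0, 2.0]   ! base thresholds A1 < A2 < A3
    real, parameter :: treq(3) = [20.0, 10.0, 5.0] ! required durations T1, T2, T3 (s)
    real, parameter :: plev(3) = [0.5, 0.75, 1.0]  ! tap 48a, contact 48b, full E
    real, parameter :: kint    = 0.01              ! integral gain K, assumed
    real :: p, accum, dur(3)
    integer :: i, j
    p = 0.0;  accum = 0.0;  dur = 0.0
    do i = 1, size(e)
      accum = accum + abs(e(i))*dt                 ! running integral of |eps| dt
      do j = 1, 3
        if (abs(e(i)) > max(a(j) - kint*accum, 0.0)) then
          dur(j) = dur(j) + dt                     ! error persisting above level j
        else
          dur(j) = 0.0                             ! persistence must be continuous
        end if
        if (dur(j) >= treq(j)) p = max(p, plev(j)) ! "relay j energized"
      end do
    end do
  end function eacc_probability
end program eacc_sketch
```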
The function generator 20 is preferably of a form which will produce on line 22 a signal 1/T, between a minimum value 1/T0 and a maximum value M, which is related to the input signal P on line 18, as shown in FIG. 2. In other words, the value of 1/T remains constant at the value 1/T0 for values of P up to a certain value P0, such as 0.5, for example. As P increases from P0 to 1.0 the value of 1/T may increase in accordance with an equation in which p is a selected exponent and K a selected constant providing the desired characteristic for the relationship between P and 1/T; K then equals M - 1/T0 for the selected maximum M for 1/T.
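A possible realization of the block 20 characteristic is sketched below. The text fixes only the breakpoint behavior and the relation K = M - 1/T0; the power-law shape and all numerical values here are assumptions.

```fortran
! Sketch of the function generator 20: 1/T is held at a floor 1/T0 for
! P <= P0 and rises toward a selected maximum M as P approaches 1.
function inverse_time_constant(p) result(invt)
  implicit none
  real, intent(in) :: p
  real :: invt
  real, parameter :: invt0 = 0.02   ! minimum 1/T (1/s), assumed
  real, parameter :: m     = 0.50   ! selected maximum M for 1/T (1/s), assumed
  real, parameter :: p0    = 0.5    ! breakpoint probability P0 (example from the text)
  real, parameter :: pexp  = 2.0    ! selected exponent "p" of the text, assumed
  real :: k
  k = m - invt0                     ! per the text, K = M - 1/T0
  if (p <= p0) then
    invt = invt0
  else
    invt = invt0 + k*((p - p0)/(1.0 - p0))**pexp
  end if
end function inverse_time_constant
```

At P = 1 the expression reduces to 1/T0 + K = M, so the chosen maximum is reached exactly.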
FIG. 3 shows an algorithm in flow chart form for carrying out the computation of the state estimate x̂ in a digital computer so that x̂ has a relationship to the state observation z similar to that accomplished by the circuit of FIG. 2. The first step of the algorithm, shown in block 50, is the calculation of ε(k), which is the value of ε obtained by calculating from observations at the sample period k. This calculation may be by any of a number of the approaches shown in FIGS. 1a-1d. If, for example, ε(k) is to be calculated in a manner similar to that shown in FIG. 2, then ε(k) is computed as the difference between the present observation z(k) and the previous estimate x̂(k-1). The result of the calculation called for in block 50 is then used in the calculation called for in block 52, namely,
P(k) = f(ε(k), t), where that function may, for example, be calculated as disclosed in U.S. Pat. No. 3,633,009, issued to myself and a coworker, Thomas A. Green, on Jan. 4, 1972, namely, by calculation of the maximum probability P for the value ε(k), where:
σ is the standard deviation of the statistical model of the noise in the state observation, as obtained from chart records of those observations by approximating the value as one eighth of the peak to peak value;
α is the average time between zero crossings of the statistical model of the state observations, as may be obtained from chart records by averaging over approximately one hundred zero crossings for proper resolution; and
t1 is the duration of the time that ε exceeds a deviation σ1.
After calculating P(k) the algorithm proceeds to a calculation of the weighting factor W(k) as shown in block 54, where W(k) = f(P(k)). That function may be W(k) = Ts/Tf for 0 ≤ P(k) ≤ P0, and for higher values of P(k) it may be an exponentially shaped function similar to that shown in block 20 of FIG. 2, in that it varies between a minimum value of Ts/Tf and a maximum value equal to K·Ts/Tf. Here Ts/Tf is an approximation of 1 - e^(-Ts/Tf), which holds for small values of the exponent, such expression being well known as the weighting factor of a first order lag.

Ts is the period between samplings made to obtain the state observations and Tf is the time constant of the filtering which the particular estimating procedure is intended to provide.
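As a numerical check of that approximation, with a sampling period Ts of 4 seconds and a filter time constant Tf of 40 seconds, 1 - e^(-Ts/Tf) = 1 - e^(-0.1) ≈ 0.095, while Ts/Tf = 0.1; the approximation is close for Ts well below Tf and degrades as Ts approaches Tf.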
After computing W(k), as shown in block 54, W(k) is compared with W(k-1), the weighting factor calculated at the last sample interval, as shown in block 56. If W(k) is greater than or equal to W(k-1) then the algorithm follows path 58 and the program thus proceeds to block 72.

If the answer to the question posed by block 56 is no, the program follows path 62 to block 63, where n is set equal to 1/W(k). Following block 63 the program proceeds to block 64, where n is compared with Tf/Ts. K is related to the degree of correlation between present and past samples of the process quantities. It may be approximated by the ratio of the time between sample periods and the average time between zero crossings (Ts/α). This approximation will reduce to the order of 10 percent the correlation of the samples used in the filter as the weighting factor is decreasing. The value of K is preferably at its upper limit of 1 when samples of the signal can be considered or assumed to be uncorrelated.

After incrementing n in block 70 by K and calculating W(k) as 1/n, the program proceeds by way of path 58 to block 72.

In block 72 the calculation required to determine the state estimate x̂(k), when the estimate is to be the result of subjecting the observation to a first order lag, is shown as x̂(k) = x̂(k-1) + W(k)[z(k) - x̂(k-1)]. Having computed the state estimate in block 72, the program proceeds to block 74 where, as indicated, W(k) is stored in the memory cell reserved for W(k-1) in preparation for the calculations for the next sample period.
Subsequent to the operation called for in block 74 the program exits.
It will be evident from the algorithm represented by the flow chart of FIG. 3 that, with decreasing values of the probability P(k) and the associated decreasing values of W(k), the weighting factor W(k) is a constant Ts/Tf when P is below P0, and that weighting produces a value x̂ which follows an exponential function of the observation z. If P(k) is above P0 the value of W(k) is 1/n, where n is related to the number of samples, so that x̂ is so related to z as to produce a value which follows a simple average of z rather than an exponential function.

When P increases so that W(k) increases, the value of W(k) calculated as a function of P is the one used as the weighting factor and x̂ is calculated without as much consideration for past values of z. It therefore follows that an increase in the probability P that the process is being subjected to an unexpected disturbance, that is, one which should be given a heavy weight in determining x̂, results in x̂ being determined more by the existing observations and with less consideration for values from previous samples.
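A compact digital sketch of one pass through FIG. 3 follows. The shape of W(P) above P0 and the handling of a falling weighting (blocks 56 through 70) are simplified interpretations of the flow chart, and Ts/Tf, P0 and the increment K are assumed tuning values.

```fortran
! Sketch of one pass of the FIG. 3 algorithm (blocks 50-74): a first order
! lag whose weighting W(k) rises with the probability P(k) of an unexpected
! disturbance and, when P(k) falls again, decays like a running average.
subroutine fig3_update(z, p, xhat, wprev, n)
  implicit none
  real, intent(in)    :: z, p      ! observation z(k) and probability P(k)
  real, intent(inout) :: xhat      ! state estimate, updated in place (block 72)
  real, intent(inout) :: wprev     ! stored W(k-1) (block 74)
  real, intent(inout) :: n         ! sample count used by the 1/n schedule
  real, parameter :: wmin = 0.1    ! Ts/Tf, weighting of the plain lag (assumed)
  real, parameter :: p0   = 0.5    ! probability breakpoint P0 (example value)
  real, parameter :: kc   = 1.0    ! increment K, near 1 for uncorrelated samples
  real :: w
  ! Block 54: weighting as a function of probability (assumed linear shape).
  if (p <= p0) then
    w = wmin
  else
    w = wmin + (1.0 - wmin)*(p - p0)/(1.0 - p0)
  end if
  ! Blocks 56-70 (simplified): if the weighting would drop, let it fall off as
  ! 1/n so the estimate follows a simple average until 1/n reaches Ts/Tf.
  if (w < wprev) then
    n = n + kc
    w = max(1.0/n, wmin)
  else
    n = 1.0/max(w, wmin)           ! restart the count while W(k) is rising
  end if
  ! Block 72: first order lag update.
  xhat = xhat + w*(z - xhat)
  ! Block 74: store W(k) for the next sample period.
  wprev = w
end subroutine fig3_update
```

Called once per sample, this reproduces the behavior described above: a sudden, persistent change in z raises P and W(k) so that x̂ moves quickly to the new level, after which the weighting relaxes gradually back toward Ts/Tf.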
In FIG. 4 there is shown another type of estimator which can be utilized in place of the estimator 10 of FIGS. 1a-1d. The estimator of FIG. 4 is specifically designed as an analog form of a ramp detector. Thus, when the state observation z is a ramp signal, the state estimate x̂ produced by the estimator 10 is an estimate of the value of the ramp signal z. The other output of the estimator 10 of FIG. 4, namely â1, provides an estimate of the slope of the ramp of the observation z.

The analog circuit required in the estimator 10 of FIG. 4 to produce the ramp estimate and the ramp slope estimate includes a circuit for producing a signal at point 80, namely x̂1, which represents an estimate of the observation after it has been subjected to a first order lag. As is evident from FIG. 4, the first order filter includes the integrator 82 in conjunction with the potentiometer 84 in the input circuit and the potentiometer 86 in the feedback circuit, the arrangement being similar to that of estimator 10 in FIG. 2. The signal x̂1 is then put through another first order filter, which includes an integrator 88 in conjunction with potentiometer 90 in the input circuit and potentiometer 92 in the feedback circuit, to provide a signal x̂2 on line 94 which is introduced as one input to a differential amplifier 96. The other input to the amplifier 96, on line 98, is obtained from point 80 and is therefore representative of x̂1. That latter input is multiplied by 2 in the input circuitry of amplifier 96 so that the amplifier 96 is comparing twice the value of x̂1 with the value of the signal on line 94 to produce on its output line 100 the signal x̂ indicative of the estimate of the state observation z and hence predictive of the ramp signal.

The signal x̂2 on line 94 is also utilized as one input to the differential amplifier 102, the other input being obtained directly from point 80 along line 104. Thus, the amplifier 102 compares the value of x̂1 with the signal x̂2 on line 94 to provide an output signal on line 106. That output signal is in turn supplied through the potentiometer 108 as an input to the amplifier 110, which provides the output signal â1 indicative of the estimate of the slope of the ramp signal z.
As shown in FIG. 4, the potentiometers 84, 86, 90, 92 and 108 are all simultaneously adjusted by servo 120 in response to a change in the input signal to the estimator 10 on line 22, representing 1/T, the reciprocal of the time constant of the circuit for obtaining the estimate. As shown in FIG. 4, the servo 120 operates the mechanical couplings 122 to the several potentiometers above mentioned to provide the necessary simultaneous adjustment of the potentiometers to match their settings to the input signal supplied on line 22.

The estimator 10 of FIG. 4 may not only be used in the configuration of FIG. 1a, but is peculiarly adaptable to use in the configuration of FIG. 1b in that the signal x̂1 required on line 17 of FIG. 1b is obtainable from point 80 of FIG. 4. Likewise, the estimator of FIG. 4 is useful in the configuration of FIG. 1c and the configuration of FIG. 1d. In the configuration of FIG. 1d the signal on line 14 may be obtained directly from the output line of amplifier 110 of FIG. 4, for example. Use in the configuration of FIG. 1c is similar to that shown in FIG. 2.
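In equation form, the FIG. 4 circuit is two cascaded first order lags whose outputs are combined; a simple numerical integration of those relationships is sketched below. The relation â1 = (x̂1 - x̂2)/T for the slope output is an assumption consistent with the description of amplifiers 102 and 110 and potentiometer 108, not a value read from the drawing.

```fortran
! Sketch of the FIG. 4 analog ramp detector, integrated by Euler steps:
!   dx1/dt = (z  - x1)/T    (integrator 82 with potentiometers 84, 86)
!   dx2/dt = (x1 - x2)/T    (integrator 88 with potentiometers 90, 92)
!   xhat   = 2*x1 - x2      (amplifier 96)
!   a1hat  = (x1 - x2)/T    (amplifiers 102, 110; gain assumed)
program fig4_sketch
  implicit none
  real, parameter :: dt = 0.1   ! integration step (s), assumed
  real, parameter :: t  = 10.0  ! lag time constant T (s), assumed
  real :: z, x1, x2, xhat, a1hat, time
  integer :: k
  x1 = 0.0;  x2 = 0.0
  do k = 1, 1000
    time = k*dt
    z  = 0.2*time                       ! test input: ramp of slope 0.2
    x1 = x1 + dt*(z  - x1)/t
    x2 = x2 + dt*(x1 - x2)/t
    xhat  = 2.0*x1 - x2                 ! estimate of the ramp value
    a1hat = (x1 - x2)/t                 ! estimate of the ramp slope
    if (mod(k, 200) == 0) print '(f8.1,3f10.4)', time, z, xhat, a1hat
  end do
end program fig4_sketch
```

For a ramp z = c·t the lag outputs settle at x̂1 = c(t - T) and x̂2 = c(t - 2T), so that 2x̂1 - x̂2 recovers the ramp value and (x̂1 - x̂2)/T recovers its slope c, which is the behavior the circuit description above asserts.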
The estimation carried out by the circuit of FIG. 4, which is essentially a ramp detector, can be executed by a digital computer programmed to follow an algorithm of the type shown in the flow chart of FIG. 5, which includes as a first step the calculation of the signal ε(k) as shown in block 150. That step is followed by the calculation of the probability P as a function of both ε(k) and time t, as shown in block 152. The step shown in block 152 is followed by the calculation of the weighting factor W(k) as a function of the probability P(k), as shown in block 154. The above three steps of the algorithm, shown in blocks 150, 152 and 154, are similar to the steps shown in blocks 50, 52 and 54 of FIG. 3.
The calculated value W(k) is then tested to determine if it is equal to orgieater than the value obtained as a result of the previous sample, namely, W(k-l) as shown in block 156. If the W(k) is not equal to or greater than W(k-l) the program proceeds to block 157, however, if W(k) is greater than or equal to W(k-1), a calculation is made of the intermediate estimate J'c,(kl) by the following equation as shown in block 158.
where (k) indicates the values obtained as a result of the present sample, (k-1) indicates values obtained at the sample previous to the present sample, and (k-2) indicates the values obtained from the sample just prior to (k-1). Following the calculation shown in block 158, a calculation is made of the value x̂2(k-1) as shown by the equation in block 162, namely,
Following the calculation shown in block 162 the program then goes to line 160.
If the program had proceeded to block 157 instead of going to block 158, then n would have been set equal to 1/W(k) and the program would have proceeded to block 159, where n is examined to determine whether it is greater than or equal to Tf/Ts. If it is not greater than or equal to Tf/Ts, then n is incremented by the value K and W(k) is set equal to 1/n, as shown in block 161, after which the program proceeds to line 160. On the other hand, if n is greater than or equal to Tf/Ts, the program progresses from block 159 to block 163, where W(k) is set equal to Ts/Tf, which represents an approximation of the quantity 1 - e^(-Ts/Tf), after which the program proceeds to line 160.
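The following Python sketch expresses this adaptive branch (blocks 156 through 163) under my reading of the flow chart: when the freshly computed weighting is no longer rising, the weighting actually used is decayed as 1/n from the value used at the previous sample, one increment per sample, until it reaches the floor Ts/Tf. The increment K and the parameter values are illustrative.

```python
def adapt_weighting(W_new, W_prev, K=1.0, Ts=1.0, Tf=100.0):
    """Blocks 156-163: let the weighting fall back gradually rather than abruptly."""
    if W_new >= W_prev:
        return W_new                  # block 158 path: keep the rising weighting
    n = 1.0 / W_prev                  # block 157: n from the weighting in use
    if n >= Tf / Ts:
        return Ts / Tf                # block 163: floor, about 1 - exp(-Ts/Tf) for Ts << Tf
    return 1.0 / (n + K)              # block 161: expanding-window style decay

if __name__ == "__main__":
    W = 1.0                           # weighting was driven to its maximum by a disturbance
    for k in range(6):                # afterwards the freshly computed weighting is back at the floor
        W = adapt_weighting(W_new=0.01, W_prev=W)
        print(k, round(W, 3))         # 0.5, 0.333, 0.25, ... a gradual return toward Ts/Tf
```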
Proceeding from line 160, a calculation is made, as shown in block 166, of the value x̂1(k) in accordance with the following equation:
That calculation is then followed by the calculation of x̂2(k) shown in block 168, namely:
With the values x̂1(k) and x̂2(k) the calculation shown in block 170 can be made to obtain one of the outputs, namely â1, representing the slope of the ramp of the state observation z. That calculation, however, is not made if W(k) = 1, for in that case â1 is set to zero and the program proceeds to block 172. This will be evident from the use of block 171 to test W(k) for equality with a unity value, followed by the setting of â1 to zero in block 169. The calculation shown in block 170 is carried out if W(k) is not equal to one, as determined in block 171, and after the coefficient C(k) is determined in block 175 and used to give the ramp rate. Conventional values of C(k) can be obtained by reference to H. A. Fertik, Simple Smoothing and Forecasting Methods for Use in Process Control, 1971 JACC, St. Louis, Mo., where it is given as α/β. For the special case of W(k) = 1, C(k) has the value 0. It should be noted that when W(k) is fixed at Ts/Tf, C(k) is approximately 1/Tf, which is comparable to the time constant 1/T given to the analog embodiment of FIG. 4 on line 22.
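The expression for C(k) in block 175 does not survive legibly in this copy of the text. The sketch below therefore uses an assumed form, C(k) = W/((1 - W)*Ts), chosen only because it reproduces the stated limit C(k) of approximately 1/Tf when W(k) is held at Ts/Tf; it is not the patent's formula. The W(k) = 1 case is handled separately by forcing the slope to zero, as in block 169.

```python
def ramp_rate(x1, x2, W, Ts=1.0):
    """Slope estimate from the two smoothed values and the weighting (blocks 169-171, 175)."""
    if W >= 1.0:
        return 0.0                       # block 169: slope forced to zero when W(k) = 1
    C = W / ((1.0 - W) * Ts)             # assumed form of the block 175 coefficient C(k)
    return C * (x1 - x2)

if __name__ == "__main__":
    # With the minimum weighting Ts/Tf = 0.01, C is about 1/Tf, so the result matches
    # the analog slope (x1 - x2)/T of FIG. 4.
    print(round(ramp_rate(x1=105.0, x2=95.0, W=0.01), 3))
```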
After the calculation of â1 in block 170, the program proceeds to calculate the other output ẑ(k), as shown in block 172, in accordance with the equation:
Following the calculation of the several outputs a series of storage steps must be accomplished. These are all shown in block 174 and involve the storage of x̂1(k-1) into the storage cell for x̂1(k-2), of x̂1(k) into the storage cell for x̂1(k-1), and a similar storage for the values of x̂2 obtained at the different sample intervals, as well as the storage of W(k) and z(k) in the respective storage cells for W(k-1) and z(k-1). After the storage operation indicated in block 174 the program exits, having calculated ẑ(k), the estimated value of z, as well as â1, the estimated slope of z.
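The equations of blocks 166, 168 and 172 are likewise not reproduced legibly in this copy. The following sketch fills them in with the standard recursive first order filter form implied by claim 9 (previous estimate plus a weighted difference) and with the 2·x̂1 - x̂2 combination implied by the analog circuit of FIG. 4, followed by the block 174 storage shift; it should be read as a reconstruction, not a quotation of the patent's equations.

```python
def cycle_tail(state, z_k, W_k):
    """Update the two smoothed values, form the output estimate, shift storage."""
    x1_k = state["x1_km1"] + W_k * (z_k - state["x1_km1"])    # block 166 (assumed form)
    x2_k = state["x2_km1"] + W_k * (x1_k - state["x2_km1"])   # block 168 (assumed form)
    z_hat = 2.0 * x1_k - x2_k                                 # block 172 (as in FIG. 4)
    # block 174: current values become the "previous sample" values
    state.update(x1_km2=state["x1_km1"], x1_km1=x1_k,
                 x2_km1=x2_k, W_km1=W_k, z_km1=z_k)
    return z_hat

if __name__ == "__main__":
    st = dict(x1_km2=0.0, x1_km1=0.0, x2_km1=0.0, W_km1=0.01, z_km1=0.0)
    for k, z in enumerate([0.0, 1.0, 2.0, 3.0, 4.0]):          # a unit-slope ramp
        print(k, round(cycle_tail(st, z, W_k=0.5), 3))
```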
The algorithm of FIG. 5 can be executed in any of a number of digital computers having a compiler for accepting a Fortran IV program by use of the following program:
C RAMP DETECTOR
REAL KF,N
DIMENSION T(10), X(10), SIGMAP(10)
C TUNING VALUES AND PARAMETERS
KF=1.0
SIGMA=2.
ALPHA=200.
SIGMAP(1)=0.
SIGMAP(2)=0.5
SIGMAP(3)=1.0
SIGMAP(4)=1.5
SIGMAP(5)=2.0
SIGMAP(6)=2.5
SIGMAP(7)=3.0
SIGMAP(8)=3.5
SIGMAP(9)=4.0
SIGMAP(10)=5.0
25: C DETERMINISTIC PROBABILITY CALCULATOR
26: C
27: C INPUT E EVERY TS SECONDS, CALCULATE PX
28: 300 CALL CLOCK(1, TM)
29: IF(TM-TS)300,301,301
31: CALL AISCAN(190, E, 1, NP)
32: 308 IF(E*EL)309,311,311
33: 309 DO 310 I=1,10
34: 310 T(I)=0
35: 311 XMAX=0.
36: DO 320 I=1,10
37: IF(E-SIGMAP(I))315,312,312
38: 312 T(I)=T(I)+TS
39: X(I)=(2.**(SIGMAP(I)/SIGMA))*(T(I)/ALPHA)
40: IF(X(I)-XMAX)411,411,400
41: 400 XMAX=X(I)
42: 411 GO TO 320
43: 315 T(I)=0
44: 320 CONTINUE
46: EL=E
48: C CALCULATION OF WEIGHTING FACTOR
49: WK=KF*((PX-PC)/(1.-PC))**NP+1./TO
50: C ADAPTING WEIGHTING FACTOR
51: IF(WK-WK1)157,157,158
53: IF(N-(TF/TS))161,163,163
56: GO TO 166
57: 163 WK=TS/TF
58: GO TO 166
61: C CALCULATION OF FILTERED VALUE AND SLOPE
64: IF(WK-1.)170,169,170
66: GO TO 172
70: C UPDATE STORAGE TERMS
77: C DISPLAY RESULTS THROUGH ANALOG OUTPUTS
78: CALL DACON(1, PX, 2, WK, 3, X1K, 4, A1)
79: GO TO 300
80: END
In this program it will be evident that instructions 1 through 24 define selected parameters or tunings necessary for calculation of the algorithm contained in the program.
The program steps beginning with instruction 28 and ending with instruction 79 constitute a sequence of calculations which is to be repeated every TS seconds. To accomplish this, instruction 28 is a subroutine call which causes the computer to read the value of a counter TM which is incremented independently by clock 1. The clock increments cell TM every second.
The operation called for by instruction 31 uses a subroutine AISCAN to input a value E, external to the computer and connected to input channel 190, setting flag NP when the new value of E is available for use by the program.
The calculation called for in block 152 of FIG. 5 is carried out by instructions 32 through 46, it being assumed that e(k) is available. Those instructions carry out the procedure set forth by the algorithm of FIG. 2 of U.S. Pat. No. 3,633,009 to calculate the probability noted as P in the present FIG. 5. Following the calculation of P, instruction 49 carries out the calculation of the weighting factor as shown in block 154 of FIG. 5, which is then followed by instruction 51, which carries out the function of block 156.
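For readers without a Fortran IV environment, the following Python rendering sketches instructions 32 through 46: a timer per threshold that accumulates while the error remains beyond that threshold and is reset on a zero crossing, the joint magnitude/duration quantity 2^(σi/σ)·ti/α, and the final conversion to a probability P = 1 - e^(-xmax) as stated in claim 11 (the Fortran instruction performing that last step is not among the lines reproduced above). The class name and the use of |e| in the magnitude test are my choices, not the patent's.

```python
import math

SIGMAP = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 5.0]   # thresholds, as in the listing

class ProbabilityCalculator:
    def __init__(self, sigma=2.0, alpha=200.0, Ts=1.0):
        self.sigma, self.alpha, self.Ts = sigma, alpha, Ts
        self.t = [0.0] * len(SIGMAP)   # duration above each threshold
        self.e_last = 0.0

    def step(self, e):
        if e * self.e_last < 0.0:      # zero crossing: the excursion has ended
            self.t = [0.0] * len(SIGMAP)
        xmax = 0.0
        for i, s in enumerate(SIGMAP):
            if abs(e) >= s:
                self.t[i] += self.Ts
                xmax = max(xmax, 2.0 ** (s / self.sigma) * self.t[i] / self.alpha)
            else:
                self.t[i] = 0.0
        self.e_last = e
        return 1.0 - math.exp(-xmax)   # probability of an unexpected disturbance

if __name__ == "__main__":
    pc = ProbabilityCalculator()
    p = 0.0
    for k in range(30):                # a sustained error of three standard deviations
        p = pc.step(6.0)
    print(round(p, 3))                 # the probability has risen well above zero
```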
Instructions 51 through 58 set up the procedure for following the path through blocks 157 and 159 and then through either block 161 or block 163 of FIG. 5.
The program then uses subroutine DACON, which converts and outputs the variables P (PX), W(k) (WK), ẑ(k) (X1K), and â1 (A1) in the form of voltages for control or display purposes, as called for by instruction 78.
It will be evident that the Fortran program for carrying out the procedure set forth in the algorithm of FIG. 3 is a subset of the program described above for FIG. 5. It is therefore not set forth in detail.
The systems described above for state estimation are useful in application to any of a number of processes such as, for example, routine industrial processes; however, they are of particular advantage when used in connection with the estimation of a state of a power system. For example, consider the arrangement of FIG. 1a. If the process is a power system, the state observation z, for example, may be the power flow over a particular tie line of the power system; it may be subject to random variations as a result of the normal connection and disconnection of loads in the power system, and it may also be subject to large and/or long term changes due to unexpected increases or decreases in the load in the power system. In such an application the state x which is estimated may be the power flow over that particular tie line being observed, with the objective being to eliminate the noise from the observed value without eliminating significant changes, as by use of an estimator of the type shown in FIGS. 2 and 3. The signal being applied from the process on line 14, namely e, may be the computed area control error, that is, a frequency biased tie line deviation measurement as is normally used in power systems as an indication of the deviation of the area load on the system from the total area generation of the system.
Where the estimator 10 is a ramp estimator of the type described in connection with FIGS. 4 and 5, the system of FIG. 1b may be used, and in that arrangement the signal e on line 14 is no longer the area control error as mentioned above, but is instead a comparison of the state observation z with an intermediate estimate x̂1.
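The following self-contained simulation (all numbers invented for illustration) shows the intended effect in such a tie-line application: the error is formed as in FIG. 1b, observation minus the running estimate, the weighting stays at its floor while the error looks like noise, and it rises only after the error has been large for long enough, so the estimate ignores random load switching but follows a sustained load ramp. The single-threshold probability test here is a compressed stand-in for the full calculator sketched earlier.

```python
import math, random

random.seed(1)

def simulate(n=400):
    Ts, Tf, sigma, alpha = 1.0, 50.0, 2.0, 200.0
    x, t_above, e_last = 100.0, 0.0, 0.0
    for k in range(n):
        true_flow = 100.0 if k < 200 else 100.0 + 0.5 * (k - 200)   # load ramp begins at k = 200
        z = true_flow + random.gauss(0.0, sigma)                    # noisy tie-line observation
        e = z - x                                                   # FIG. 1b style error signal
        t_above = t_above + Ts if (abs(e) >= 3.0 * sigma and e * e_last >= 0.0) else 0.0
        P = 1.0 - math.exp(-(2.0 ** 3.0) * t_above / alpha)         # single-threshold probability
        W = max(Ts / Tf, P)                                         # crude stand-in for the weighting law
        x += W * (z - x)                                            # recursive estimate update
        e_last = e
        if k % 100 == 0 or k == n - 1:
            print(k, round(true_flow, 1), round(x, 1), round(W, 3))

simulate()
```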
The arrangements of FIGS. 1c and 1d may likewise be used in connection with power systems in that they are similar to those of FIGS. 1a and 1b except that the zero centered error signal e is obtained in a different way, as has been previously explained.
The variation of the weighting factor in a state estimation calculation in accordance with the probability that the process disturbance is unexpected may be extended beyond the simple forms of estimators described above to the more complicated multi-variable state estimators of the type described, for example, in A Dynamic Estimator for Tracking the State of a Power System, IEEE Transactions on Power Apparatus and Systems, Sept.-Oct. 1970, Vol. 89, pp. 1670-1678, by Atif S. Debs and Robert E. Larson, the previously referenced article by Debs and Larson.
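As one way of picturing that extension (my extrapolation, not a procedure given in the patent or the cited article), the same disturbance probability could be used to inflate the assumed process-noise covariance of a conventional multi-variable tracking filter, so that past data are discounted when an unexpected disturbance is judged likely:

```python
import numpy as np

def kalman_step(x, Pcov, z, prob, A, H, Q0, R):
    """One predict/update cycle with the process-noise covariance scaled by the probability."""
    Q = Q0 * (1.0 + 100.0 * prob)           # discount the model more as prob approaches 1
    x_pred = A @ x
    P_pred = A @ Pcov @ A.T + Q
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)     # gain rises when Q is inflated
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

if __name__ == "__main__":
    A = np.array([[1.0, 1.0], [0.0, 1.0]])  # position/slope (ramp) model
    H = np.array([[1.0, 0.0]])
    Q0 = 0.001 * np.eye(2)
    R = np.array([[4.0]])
    x, Pcov = np.zeros(2), np.eye(2)
    x, Pcov = kalman_step(x, Pcov, np.array([3.0]), prob=0.9, A=A, H=H, Q0=Q0, R=R)
    print(np.round(x, 3))
```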
What is claimed is: 1. In apparatus, the method for automatically estimating a process state from state observations in processes subject to unexpected disturbances, comprising the steps of:
computing an estimate of the process state in response to past values of said state observations,
producing a first signal variable about zero in magnitude and direction in accordance with the magnitude and direction of variations in the state observations due to process disturbances,
producing a second signal in response to a joint function of both a certain predetermined magnitude exceeded by said first signal and the duration of the period during which said first signal is above said certain predetermined magnitude so that the magnitude of said second signal is indicative of the probability that the present value of the state observations is due to unexpected process disturbances, and
modifying the magnitude of the influence of the past observations on the computed magnitude of said estimate in response to changes in said second signal so that the effect of the magnitude of said past observations on the magnitude of said estimate is increased as said probability decreases.
2. The method of claim 1 in which said first signal is produced solely in response to a measured process variable.
3. The method of claim 1 in which said first signal is produced in response to the difference between said state observations and the computed estimate.
4. The method of claim 1 in which the computing of the estimate includes the correction of a previous estimate by a weighted difference between the previous estimate and the present observation.
5. The method of claim 4 in which the magnitude of the influence of the past observations is established in accordance with said weighting so that for values of said second signal below a certain predetermined value the weighting is a predetermined minimum value and for values of said second signal above said predetermined value the weighting increases exponentially to a maximum value which corresponds with a maximum probability.
6. The method of claim 1 in which the estimate computed from the observations corresponds to a value expected by subjecting the observations to a first order lag.
7. The method of claim 1 in which the estimate computed from the observations corresponds to a value expected from observations exhibiting the characteristics of a ramp.
8. In apparatus, the method for automatically estimating a process state from state observations based on sampled process measurements which are subject to unexpected variations, comprising the steps of:
computing an estimate of the process state in response to the value of said observations,
producing a first signal variable about zero in magnitude and direction in accordance with the effect on the magnitude and direction of variations in the state observations resulting from process disturbances,
producing a second signal in response to the maximum value of a joint function of both a plurality of predetermined magnitudes exceeded by said first signal and the duration of the period in which said first signal exceeds said predetermined magnitudes so that the magnitude of said second signal is indicative of the probability that the present sampled values of the state observations are due to unexpected process disturbances, and
modifying the magnitude of the influence of the past observations on the computed magnitude of said estimates in response to changes in said second signal so that the effect of the magnitude of said past observations on the magnitude of said estimate is increased as said probability decreases.
9. The method of claim 8 in which the computation of the estimate is produced by summing the previous estimate and a weighted difference between the present observation and a value corresponding to the previous estimate.
10. The method of claim 8 in which said first signal is produced in response to the difference between the previous estimate and the present observation.
11. The method of claim 8 in which said second signal is produced in accordance with the maximum value of the joint function expressed as

Pi = 1 - e^(-γ(σi, ti))

where

γ(σi, ti) = (2^(σi/σ)/α)·ti,

σi is the standard deviation of the statistical model of the expected noise, and

α is the average time between zero crossings of the first signal.

12. The method of claim 8 in which the computation of the present estimate of the process state is obtained by summing the previous estimates and a weighted difference between the present observation and a value corresponding to the previous estimate, and in which the change produced in the magnitude of the influence of the past observations is produced by changing said weighting in response to changes in said second signal so that the weighting is increased as said probability is increased.
13. The method of claim 12 in which the change in the magnitude of the influence of the past observations is produced by changing said weighting so that for values for said second signal below a certain predetermined value the weighting is a predetermined minimum value and for values for said second signal above said predetermined value the weighting increases exponentially to a maximum value which corresponds with a maximum probability.
14. In apparatus, the method for automatically estimating a process state from state observations determined from sampled process measurements which are subject to unexpected variations, comprising the steps of:
computing the present estimate of the process state by summing the previous estimate and a weighted difference between the present observations and a value corresponding to the previous estimate,
producing a first signal variable about zero in magnitude and direction in accordance with the difference between the previous estimate and the present observations,
producing a second signal in accordance with the maximum value of a joint function of both certain predetermined magnitudes exceeded by said first signal and the duration of time during which that signal is above said certain predetermined magnitudes so that the magnitude of said second signal is indicative of the probability that the present observations are due to unexpected process disturbances, and
modifying the weighting in response to said second signal so that said weighting is increased as said probability is increased.

15. The method of claim 14 in which the change in said weighting in response to changes in said second signal is such that for values for said second signal below a certain predetermined value the weighting is a predetermined minimum value and for values of said second signal above said predetermined value the weighting increases exponentially to a maximum value in accordance with the equation

W(k) = K((P(k) - Pc)/(1 - Pc))^p + Ts/Tf

where W(k) is the weighting, P is the second signal, Pc is the predetermined value of P above which the weighting increases exponentially, and p is the exponent.
16. The method of claim 14 in which said joint function is expressed by the equation

Pi = 1 - e^(-γ(σi, ti))

where

γ(σi, ti) = (2^(σi/σ)/α)·ti

and where σ is the standard deviation of the statistical model of the noise in the state observation, α is the average time between zero crossings of the statistical model of said first signal, and ti is the duration of time that the first signal exceeds a deviation σi.
UNITED STATES PATENT OFFICE
CERTIFICATE OF CORRECTION
Patent No. 3,805,032. Dated April 16, 1974. Inventor(s): Charles W. Ross.
It is certified that error appears in the above-identified patent and that said Letters Patent are hereby corrected as shown below:
Col. 1, line 31, "approahces" should read --approaches--. Col. 3, lines 5 and 6, "distrubance" should read --disturbance--. Col. 3, lines 15 and 16, after "variable" delete "and hence variable about zero in magnitude and direction". Col. 3, line 17, after "value" insert --and hence variable about zero in magnitude and direction--. Col. 4, line 6, "magnitdue" should read --magnitude--; line 12, "ahd 1" should read --and 1--; line 31, "constatnt" should read --constant--. Col. 5, line 13, "anlog" should read --analog--; line 43, "W111" should read --will--; line 45, "intorduced" should read --introduced--. Col. 6, line 22, "cl" should be deleted. Col. 7, line 6, "T./T" should read --Tf/Ts--; line 43, "cfirt" should read --chart--; line 50, ":Lso" should read --is so--; line 67, delete "is so" and insert --ramp--. Col. 8, line 21, "ponit" should read --point--. Col. 12, line 19, "insturctions" should read --instructions--. Col. 13, lines 21 and 22, "estiamting" should read --estimating--. Col. 15, line 65, "influenqe" should read --influence--. Col. 16, line 8, "increaess" should read --increases--.
Signed and sealed this 1st day of October 1974,
(SEAL) Attest:
McCOY M. GIBSON JR., Attesting Officer. C. MARSHALL DANN, Commissioner of Patents.
Claims (15)
1. In apparatus, the method for automatically estimating a process state from state observations in processes subject to unexpected disturbances, comprising the steps of: computing an estimate of the process state in response to past values of said state observations, producing a first signal variable about zero in magnitude and direction in accordance with the magnitude and direction of variations in the state observations due to process disturbances, producing a second signal in response to a joint function of both a certain predetermined magnitude exceeded by said first signal and the duration of the period during which said first signal is above said certain predetermined magnitude so that the magnitude of said second signal is indicative of the probability that the present value of the state observations is due to unexpected process disturbances, and modifying the magnitude of the influence of the past observations on the computed magnitude of said estimate in response to changes in said second signal so that the effect of the magnitude of said past observations on the magnitude of said estimate is increased as said probability decreases.
2. The method of claim 1 in which said first signal is produced solely in response to a measured process variable.
3. The method of claim 1 in which said first signal is produced in response to the difference between said state observations and the computed estimate.
4. The method of claim 1 in which the computing of the estimate includes the correction of a previous estimate by a weighted difference between the previous estimate and the present observation.
5. The method of claim 4 in which the magnitude of the influence of the past observations is established in accordance with said weighting so that for values of said second signal below a certain predetermined value the weighting is a predetermined minimum value and for values of said second signal above said predetermined value the weighting increases exponentially to maximum value which corresponds with a maximum probability.
6. The method of claim 1 in which the estimate computed from the observations corresponds to a value expected by subjecting the observations to a first order lag.

7. The method of claim 1 in which the estimate computed from the observations corresponds to a value expected from observations exhibiting the characteristics of a ramp.
8. In apparatus, the method for automatically estimating a process state from state observations based on sampled process measurements which are subject to unexpected variations, comprising the steps of: computing an estimate of the process state in response to the value of said observations, producing a first signal variable about zero in magnitude and direction in accordance with the effect on the magnitude and direction of variations in the state observations resulting from process disturbances, producing a second signal in response to the maximum value of a joint function of both a plurality of predetermined magnitudes exceeded by said first signal and the duration of the period in which said first signal exceeds said predetermined magnitudes so that the magnitude of said second signal is indicative of the probability that the present sampled values of the state observations are due to unexpected process disturbances, and modifying the magnitude of the influence of the past observations on the computed magnitude of said estimates in response to changes in said second signal so that the effect of the magnitude of said past observations on the magnitude of said estimate is increased as said probability decreases.
9. The method of claim 8 in which the computation of the estimate is produced by summing the previous estimate and a weighted difference between the present observation and a value corresponding to the previous estimate.
10. The method of claim 8 in which said first signal is produced in response to the difference between the previous estimate and the present observation.
11. The method of claim 8 in which said second signal is produced in accordance with the maximum value of the joint function expressed as Pi = 1 - e^(-γ(σi, ti)) where γ(σi, ti) = (2^(σi/σ)/α)·ti, σi is the standard deviation of the statistical model of the expected noise, and α is the average time between zero crossings of the first signal.
12. The method of claim 8 in which the computation of the present estimate of the process state is obtained by summing the previous estimates and a weighted difference between the present observation and a value corresponding to the previous estimate, and in which the change produced in the magnitude of the influence of the past observations is produced by changing said weighting in response to changes in said second signal so that the weighting is increased as said probability is increased.
13. The method of claim 12 in which the change in the magnitude of the influence of the past observations is produced by changing said weighting so that for values for said second signal below a certain predetermined value the weighting is a predetermined minimum value and for values for said second signal above said predetermined value the weighting increases exponentially to a maximum value which corresponds with a maximum probability.
14. In apparatus, the method for automatically estimating a process state from state observations determined from sampled process measurements which are subject to unexpected variations, comprising the steps of: computing the present estimate of the process state by summing the previous estimate and a weighted difference between the present observations and a value corresponding to the previous estimate, producing a first signal variable about zero in magnitude and direction in accordance with the difference between the previous estimate and the present observations, producing a second signal in accordance with the maximum value of a joint function of both certain predetermined magnitudes exceeded by said first signal and the duration of time during which that signal is above said certain predetermined magnitudes so that the magnitude of said second signal is indicative of the probability that the present observations are due to unexpected process disturbances, and modifying the weighting in response to said second signal so that said weighting is increased as said probability is increased.
15. The method of claim 14 in which the change in said weighting in response to changes in said second signal is such that for values for said second signal below a certain predetermined value the weighting is a predetermined minimum value and for values of said second signal above said predetermined value the weighting increases exponentially to a maximum value in accordance with the equation W(k) = K((P(k) - Pc)/(1 - Pc))^p + Ts/Tf where W(k) is the weighting, P is the second signal, Pc is the predetermined value of P above which the weighting increases exponentially, and p is the exponent.
16. The method of claim 14 in which said joint function is expressed by the equation Pi = 1 - e^(-γ(σi, ti)) where γ(σi, ti) = (2^(σi/σ)/α)·ti and where σ is the standard deviation of the statistical model of the noise in the state observation, α is the average time between zero crossings of the statistical model of said first signal, and ti is the duration of time that the first signal exceeds a deviation σi.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US00233131A US3805032A (en) | 1972-03-09 | 1972-03-09 | Method for minimizing the effects of process disturbances on state estimators |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US00233131A US3805032A (en) | 1972-03-09 | 1972-03-09 | Method for minimizing the effects of process disturbances on state estimators |
Publications (1)
Publication Number | Publication Date |
---|---|
US3805032A true US3805032A (en) | 1974-04-16 |
Family
ID=22876001
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US00233131A Expired - Lifetime US3805032A (en) | 1972-03-09 | 1972-03-09 | Method for minimizing the effects of process disturbances on state estimators |
Country Status (1)
Country | Link |
---|---|
US (1) | US3805032A (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4612624A (en) * | 1982-10-25 | 1986-09-16 | Mitsubishi Denki Kabushiki Kaisha | Demand estimation apparatus |
US4707779A (en) * | 1984-11-20 | 1987-11-17 | Measurex Corporation | Process for controlling a parameter based upon filtered data |
EP1052558A1 (en) * | 1999-05-14 | 2000-11-15 | Abb Research Ltd. | Method and device for estimation of condition |
US6930633B1 (en) | 1988-03-22 | 2005-08-16 | Raytheon Company | Adaptive glint reduction method and system |
US20070239285A1 (en) * | 2006-04-06 | 2007-10-11 | Good Richard P | Time weighted moving average filter |
US20120271872A1 (en) * | 2009-06-21 | 2012-10-25 | Tom Chau | Procedure for denoising dual-axis swallowing accelerometry signals |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US3578957A (en) * | 1969-05-29 | 1971-05-18 | Nasa | Sampled data controller |
US3601588A (en) * | 1966-05-23 | 1971-08-24 | Foxboro Co | Method and apparatus for adaptive control |
US3621217A (en) * | 1969-09-09 | 1971-11-16 | Gulf Research Development Co | Special-purpose process control computer |
US3657524A (en) * | 1970-06-15 | 1972-04-18 | Ibm | Dual mode process control with continuous feedback during coarse mode |
US3665168A (en) * | 1970-12-18 | 1972-05-23 | Gen Electric | Adaptively controlled position prediction system |
-
1972
- 1972-03-09 US US00233131A patent/US3805032A/en not_active Expired - Lifetime
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US3601588A (en) * | 1966-05-23 | 1971-08-24 | Foxboro Co | Method and apparatus for adaptive control |
US3578957A (en) * | 1969-05-29 | 1971-05-18 | Nasa | Sampled data controller |
US3621217A (en) * | 1969-09-09 | 1971-11-16 | Gulf Research Development Co | Special-purpose process control computer |
US3657524A (en) * | 1970-06-15 | 1972-04-18 | Ibm | Dual mode process control with continuous feedback during coarse mode |
US3665168A (en) * | 1970-12-18 | 1972-05-23 | Gen Electric | Adaptively controlled position prediction system |
Non-Patent Citations (5)
Title |
---|
Gray, Jr. Completely Adaptive Control of Processing Plants Instr. Soc. of America. Proceedings 1st J.I. Sym. 1965 Montreal, p. 136-148. *
Morrison, Norman: Textbook: Introduction to Sequential Smoothing and Prediction 1969 McGraw Hill, Inc. pages 8 and 497 of interest. * |
Perkins et al., Deterministic Param. Estimation for Near Opt. Feedback Contr. Automatica, Vol. 7, p. 439-444; 1971 1st presentation June 1970. *
Perreault: Ground Testing of Inertial Navigators to Improve Dynamic Performance. IEEE Transactions on Aerospace and Electronic Systems, Vol. AES-5, No. 3, May 1969, pages 457-462. *
Woodley: Programming Aspects of Direct Dig. Control. Instr. Soc. of Am. Proceed. 1st J.I. Symp. 1965 Montreal, p. 149-159. *
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4612624A (en) * | 1982-10-25 | 1986-09-16 | Mitsubishi Denki Kabushiki Kaisha | Demand estimation apparatus |
US4707779A (en) * | 1984-11-20 | 1987-11-17 | Measurex Corporation | Process for controlling a parameter based upon filtered data |
US6930633B1 (en) | 1988-03-22 | 2005-08-16 | Raytheon Company | Adaptive glint reduction method and system |
EP1052558A1 (en) * | 1999-05-14 | 2000-11-15 | Abb Research Ltd. | Method and device for estimation of condition |
US6801810B1 (en) | 1999-05-14 | 2004-10-05 | Abb Research Ltd. | Method and device for state estimation |
US20070239285A1 (en) * | 2006-04-06 | 2007-10-11 | Good Richard P | Time weighted moving average filter |
US7542880B2 (en) * | 2006-04-06 | 2009-06-02 | Advanced Micro Devices, Inc. | Time weighted moving average filter |
US20120271872A1 (en) * | 2009-06-21 | 2012-10-25 | Tom Chau | Procedure for denoising dual-axis swallowing accelerometry signals |
US8992446B2 (en) * | 2009-06-21 | 2015-03-31 | Holland Bloorview Kids Rehabilitation Hospital | Procedure for denoising dual-axis swallowing accelerometry signals |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Vanlanduit et al. | An automatic frequency domain modal parameter estimation algorithm | |
Kennedy et al. | Power system harmonic analysis using the Kalman filter | |
US4959767A (en) | Parameter estimation technique for closed loop system | |
Payne | An improved technique for transfer function synthesis from frequency response data | |
US3805032A (en) | Method for minimizing the effects of process disturbances on state estimators | |
CN110907702A (en) | Improved dynamic harmonic estimation method and system | |
Kumar et al. | Power system dynamic state estimation using kalman filtering technique | |
CN114583767B (en) | Data-driven wind power plant frequency modulation response characteristic modeling method and system | |
CN113805062B (en) | Online robust self-adaptive identification method for lithium battery equivalent circuit model parameters | |
US3633009A (en) | Automatic joint probability calculation of noise corrupted process measurements | |
Pilipenko et al. | Parametric Identification of Differential-Difference Models of Heat Transfer in One-Dimensional Bodies Based on Kalman Filter Algorithms | |
Adedotun et al. | Modelling and forecasting climate time series with state-space model | |
Fawcett et al. | Methods of numerical differentiation in the analysis of thermodynamic data for the mercury–solution interface | |
Victor et al. | Closed-loop continuous-time model identification with noisy input-output | |
CN107357758B (en) | Polynomial least square regression memorability speed calculation method for positioning information | |
Yan et al. | Remaining useful life prediction of machinery subjected to two-phase degradation process | |
Poliarus et al. | The Features Ofapplication of Normative Documents to Inverse Problemsof Measurement | |
Park | Derivation of a suboptimal linear filter | |
JPH0743602B2 (en) | Adaptive control method | |
CN111784050B (en) | Short-term outdoor temperature prediction method based on particle filter algorithm | |
JPH0454243B2 (en) | ||
Muralter et al. | Analysis tools for multi-exponential energy decay curves in room acoustics | |
Li et al. | State estimation based on generalized gaussian distributions | |
Austin et al. | Statistical model of short-duration precipitation events | |
Reyes et al. | Just-in-time Point Prediction Using a Computationally-efficient Lebesgue-sampling-based Prognostic Method: Application to Battery End-of-discharge Prediction |