Method For Minimizing The Effects Of Process Disturbances On State Estimators

Ross April 16, 1974

Patent Grant 3805032

U.S. patent number 3,805,032 [Application Number 05/233,131] was granted by the patent office on 1974-04-16 for method for minimizing the effects of process disturbances on state estimators. This patent grant is currently assigned to Leeds & Northrup Company. Invention is credited to Charles W. Ross.


**Please see images for: ( Certificate of Correction ) **

METHOD FOR MINIMIZING THE EFFECTS OF PROCESS DISTURBANCES ON STATE ESTIMATORS

Abstract

In estimating a process state from observations based upon process measurements, there is utilized a signal which is variable about zero and indicative of the magnitude and direction of change in a state observation resulting from process disturbances. Depending upon a combined function of the magnitude and duration of that signal, there is produced another signal indicative of the probability that the process is being subjected to an unexpected disturbance. The influence of previous state observations on the estimate, or in other words the weighting or time constant involved in the calculation of the state estimate, is varied as a function of that probability so that as the probability increases the weighting increases (the time constant decreases), thus giving the previous observations of the process state decreased value in the determination of the present state estimate.


Inventors: Ross; Charles W. (Hatboro, PA)
Assignee: Leeds & Northrup Company (Philadelphia, PA)
Family ID: 22876001
Appl. No.: 05/233,131
Filed: March 9, 1972

Current U.S. Class: 702/181; 703/2; 703/6
Current CPC Class: G05B 13/026 (20130101)
Current International Class: G05B 13/02 (20060101); G05b 013/02 ()
Field of Search: ;235/150.1,150,151,151.1,151.12,151.21 ;444/1 ;318/561,621,636

References Cited [Referenced By]

U.S. Patent Documents
3621217 November 1971 Carr et al.
3665168 May 1972 Canfield
3657524 April 1972 Bakke
3601588 August 1971 Bristol
3578957 May 1971 Gatlin

Other References

Gray, Jr., "Completely Adaptive Control of Processing Plants," Instrument Society of America, Proceedings 1st Joint Int. Symposium, Montreal, 1965, pp. 136-148.
Woodley, "Programming Aspects of Direct Digital Control," Instrument Society of America, Proceedings 1st Joint Int. Symposium, Montreal, 1965, pp. 149-159.
Perkins et al., "Deterministic Parameter Estimation for Near-Optimum Feedback Control," Automatica, Vol. 7, pp. 439-444, 1971 (first presented June 1970).
Morrison, Norman, Introduction to Sequential Smoothing and Prediction, McGraw-Hill, Inc., 1969, pages 8 and 497 of interest.
Perreault, "Ground Testing of Inertial Navigators to Improve Dynamic Performance," IEEE Transactions on Aerospace and Electronic Systems, Vol. AES-5, No. 3, May 1969, pp. 457-462.

Primary Examiner: Gruber; Felix D.
Attorney, Agent or Firm: Miller, Jr.; William G. Mackay; Raymond F.

Claims



1. In apparatus, the method for automatically estimating a process state from state observations in processes subject to unexpected disturbances, comprising the steps of:

computing an estimate of the process state in response to past values of said state observations,

producing a first signal variable about zero in magnitude and direction in accordance with the magnitude and direction of variations in the state observations due to process disturbances,

producing a second signal in response to a joint function of both a certain predetermined magnitude exceeded by said first signal and the duration of the period during which said first signal is above said certain predetermined magnitude so that the magnitude of said second signal is indicative of the probability that the present value of the state observations is due to unexpected process disturbances, and

modifying the magnitude of the influence of the past observations on the computed magnitude of said estimate in response to changes in said second signal so that the effect of the magnitude of said past observations on the magnitude of said estimate is increased as said probability decreases.

2. The method of claim 1 in which said first signal is produced solely in

3. The method of claim 1 in which said first signal is produced in response to the difference between said state observations and the computed

4. The method of claim 1 in which the computing of the estimate includes the correction of a previous estimate by a weighted difference between the

5. The method of claim 4 in which the magnitude of the influence of the past observations is established in accordance with said weighting so that for values of said second signal below a certain predetermined value the weighting is a predetermined minimum value and for values of said second signal above said predetermined value the weighting increases exponentially to a maximum value which corresponds with a maximum

6. The method of claim 1 in which the estimate computed from the observations corresponds to a value expected by subjecting the

7. The method of claim 1 in which the estimate computed from the observations corresponds to a value expected from observations exhibiting

8. In apparatus, the method for automatically estimating a process state from state observations based on sampled process measurements which are subject to unexpected variations, comprising the steps of:

computing an estimate of the process state in response to the value of said observations,

producing a first signal variable about zero in magnitude and direction in accordance with the effect on the magnitude and direction of variations in the state observations resulting from process disturbances,

producing a second signal in response to the maximum value of a joint function of both a plurality of predetermined magnitudes exceeded by said first signal and the duration of the period in which said first signal exceeds said predetermined magnitudes so that the magnitude of said second signal is indicative of the probability that the present sampled values of the state observations are due to unexpected process disturbances, and

modifying the magnitude of the influence of the past observations on the computed magnitude of said estimates in response to changes in said second signal so that the effect of the magnitude of said past observations on the magnitude of said estimate is increased as said probability decreases.

9. The method of claim 8 in which the computation of the estimate is produced by summing the previous estimate and a weighted difference between the present observation and a value corresponding to the previous

10. The method of claim 8 in which said first signal is produced in response to the difference between the previous estimate and the present

11. The method of claim 8 in which said second signal is produced in accordance with the maximum value of the joint function expressed as

P_i = 1 - e^(-γ(σ'_i, t_i))

where

γ(σ'_i, t_i) = (2σ'_i/σ/α) t_i

σ is the standard deviation of the statistical model of the expected noise,

and

α is the average time between zero crossings of the first signal.

12. The method of claim 8 in which the computation of the present estimate of the process state is obtained by summing the previous estimates and a weighted difference between the present observation and a value corresponding to the previous estimate, and in which

the change produced in the magnitude of the influence of the past observations is produced by changing said weighting in response to changes in said second signal so that the weighting is increased as said

13. The method of claim 12 in which the change in the magnitude of the influence of the past observations is produced by changing said weighting so that for values for said second signal below a certain predetermined value the weighting is a predetermined minimum value and for values for said second signal above said predetermined value the weighting increases exponentially to a maximum value which corresponds with a maximum

14. In apparatus, the method for automatically estimating a process state from state observations determined from sampled process measurements which are subject to unexpected variations, comprising the steps of:

computing the present estimate of the process state by summing the previous estimate and a weighted difference between the present observations and a value corresponding to the previous estimate,

producing a first signal variable about zero in magnitude and direction in accordance with the difference between the previous estimate and the present observations,

producing a second signal in accordance with the maximum value of a joint function of both certain predetermined magnitudes exceeded by said first signal and the duration of time during which that signal is above said certain predetermined magnitudes so that the magnitude of said second signal is indicative of the probability that the present observations are due to unexpected process disturbances, and

modifying the weighting in response to said second signal so that said

15. The method of claim 14 in which the change in said weighting in response to changes in said second signal is such that for values for said second signal below a certain predetermined value the weighting is a predetermined minimum value and for values of said second signal above said predetermined value the weighting increases exponentially to a maximum value in accordance with the equation

W(k) = K[(P(k) - P_c)/(1 - P_c)]^p + T_s/T_f

where W(k) is the weighting, P is the second signal and P_c is the predetermined value of P above which the weighting increases

16. The method of claim 14 in which said joint function is expressed by the equation

P_i = 1 - e^(-γ(σ'_i, t_i))

where

γ(σ'_i, t_i) = (2σ'_i/σ/α) t_i

and where σ is the standard deviation of the statistical model of the noise in the state observation, α is the average time between zero crossings of the statistical model of said first signal, and t_i is the duration of time that the first signal exceeds a deviation σ'_i.
Description



BACKGROUND OF THE INVENTION

This invention relates to the art of estimating a process state and more particularly to a method for minimizing the introduction of inaccuracies into the estimates as a result of the disturbance of the process in a manner or at a time which is unexpected or unpredictable. A process state may for the purpose of this description be considered as that totality of variables whose values completely define a characteristic of the process or system. The art of estimating may be considered as including the arts of filtering, smoothing and predicting as well as that art more specifically identified as estimating or state estimation. The grouping of these arts under the term "estimating" has a logical basis, since filtering and smoothing are essentially the estimation of the average value of the signal being filtered or smoothed, while predicting is a form of estimating which utilizes a model for the expected behavior of the quantity to be predicted. State estimation itself is a data processing technique that computes a process state from measurements of process or system variables, using the mathematical model of the system or process and prior knowledge of its various inputs and outputs, so that the output of a state estimator approaches the true state of the process. In this connection reference should be had to the following:

Brown, R.G., "Smoothing, Forecasting and Prediction of Discrete Time Series," Prentice Hall, Inc., Englewood Cliffs, N.J., 1962, and

Toyoda, J., Chen, M.S., Inoue, Y. "An Application of State Estimation to Short-Term Load Forecasting, Part I: Forecasting Modeling," IEEE Transactions on Power Apparatus and Systems, Vol. PAS-89, No. 7, pp. 1678-1682, 1970.

Debs, Atif S., Larson, R.E. "A Dynamic Estimator for Tracking the State of a Power System," IEEE Transactions on Power Apparatus and Systems, Sept.-Oct. 1970, Vol. 89, pp. 1670-1678.

If it is assumed that an adequate model is available for estimating the state variables, then a major problem requiring solution is the determination of an optimum weighting factor for continually updating, from an initial value, the estimate of the state without producing gross inaccuracies when the process fails to follow the model because of an unexpected disturbance to the process itself or to the measurements of the process variables. For the purpose of this description these disturbances will all be known as process disturbances. Several methods for solving the problem mentioned above have been proposed by others. Those methods include a method which is experimental in nature and a method which involves modification of the covariance of the state estimation. Such methods have many drawbacks, including the difficulty of applying the covariance method if the noise in the process or the measurement cannot be assumed to be white. The present invention overcomes the drawbacks of the prior art, as will be evident from the description of the preferred embodiments.

SUMMARY OF THE INVENTION

A method and means for carrying out that method are provided for minimizing the error in an estimated state of a process when that error results from the introduction of an unexpected process disturbance. The method includes automatically computing an estimate of the process state in response to both present and past values of the state observation and the production of a first signal variable about zero in dependence upon the magnitude and direction of variations in the state observations due to the process disturbances. There is produced in response to a joint function of the magnitude and duration of that first signal a second signal which is indicative of the probability that the process is being subjected to an unexpected disturbance. A high value of that probability is, of course, statistically rare in truly random noise. There is then produced a change in the influence of past observations on the computation of the estimated state so that the effect on the estimate of the state due to previous values of the process variable is diminished as the probability that the process is being subjected to an unexpected disturbance increases.
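The sequence of steps just described can be sketched as a single update routine in Python. This is a simplified illustration, not the patented circuit: the linear mapping from probability to weighting and all names here are assumptions of the sketch (the embodiments described later use an exponential mapping above a threshold).

```python
def update_estimate(x_prev, z, prob_unexpected, w_min=0.05, w_max=1.0):
    """One cycle of the adaptive estimation loop: a first-order
    correction of the previous estimate, with the weighting raised
    as the probability of an unexpected disturbance rises.

    x_prev          -- previous state estimate
    z               -- present state observation
    prob_unexpected -- probability P, 0..1, computed elsewhere from the
                       magnitude and duration of the error signal
    w_min, w_max    -- illustrative weighting limits (assumed values)
    """
    # Higher P -> higher weighting -> past observations count for less.
    w = w_min + (w_max - w_min) * prob_unexpected
    eps = z - x_prev          # zero-centered first signal
    x_new = x_prev + w * eps  # corrected estimate
    return x_new, eps
```

With P = 1 the estimate jumps to the present observation; with P = 0 it moves only a small fraction of the way, so past observations dominate.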

BRIEF DESCRIPTION OF THE DRAWINGS

In the drawings, in which like reference characters refer to like elements:

FIGS. 1a - 1d are block diagrams showing alternate arrangements for obtaining a variation in the time constant applied to the computations of the state estimate.

FIG. 2 is a circuit diagram, partially in block form, showing the utilization of the arrangement of FIG. 1c in providing an estimate of a measured variable of the process wherein the estimate is a filtered or smoothed value for a process measurement and the smoothing is accomplished by a first order lag.

FIG. 3 is a flow chart showing the several computation steps which must be carried out by a digital computer to compute the variable weighting factor applied to the computation of the first order lag.

FIG. 4 is a circuit diagram for the estimator which would be utilized in one of the arrangements of FIGS. 1a - 1d to provide an estimate of a process measurement which is undergoing a ramp change with the slope of the ramp also being estimated.

FIG. 5 is a flow chart showing the computations to be made in a digital computer for computing both the estimated value of the variable undergoing the ramp change as well as the slope of the ramp.

DESCRIPTION OF THE PREFERRED EMBODIMENTS

In FIG. 1a there is shown in block diagram an arrangement for computing the state estimate x from the state observations z by means of estimator 10. The state observations z are obtained on line 11 by measurement of certain variables of the process 12, namely, those variables required to determine by virtue of the model forming estimator 10, the state estimate x on output line 13.

The process 12 may be considered as being subjected to disturbances which are of an unexpected character or which appear at an unexpected time. Those disturbances may enter the process at its input, its output, or at any point in the process itself or they may arise as the result of noise introduced into the instrumentation system involved in obtaining those state observations from process measurements. Changes in the set point of a variable of a process, when those changes are arbitrary in nature, may also be considered as a process disturbance of the type with which this invention is concerned.

The disturbances and the measured process quantities which determine the state observations are assumed to include one or more arbitrary characteristics such as shifts in steady state values, steps, sine waves, square waves, ramps, or dynamically shaped random noise. In FIG. 1a there is shown on line 14 a first signal, .epsilon., which may, for example, be an error signal indicative of the deviation of a measured process variable from its desired value, and hence variable about zero in magnitude and direction. That signal is supplied by way of line 14 to a computer 16, generally indicated as an error adaptive control computer (EACC) which is later explained in detail and which is capable of producing as an output on line 18, a second signal P, indicative of the probability that the process 12 is being subjected to an unexpected disturbance of the type described above. The probability calculation made by the computer 16 is shown as being a function of the magnitude of the signal .epsilon. and time, t.

The signal on line 18 is supplied to a function generator 20 which is designed to produce as an output on line 22 a signal 1/T, which represents the time function of the relationship between the observed states z and the estimate x being calculated in the state estimator 10. It will be noted that the function generator 20 automatically produces the signal, 1/T, as a function of the probability P.

The time function 1/T is supplied as a signal on line 22 to the estimator 10 to modify the relationship between the state estimate x and the state observations z when an unexpected process disturbance occurs, as detected by a variation in the error signal, .epsilon..

Two of the various forms which the state estimator 10 may take are illustrated by the diagrams of FIGS. 2 and 4. Similarly, the characteristic of the function generator 20 will be illustrated and described more completely in connection with the description of FIG. 2, while the specific construction of one form of the computer 16 will be described in connection with FIG. 2.

FIG. 1b is a variation of a block diagram of FIG. 1a in which the signal .epsilon. is obtained by a different means. It will be noted that the signal .epsilon. should be what is known as a zero centered signal, or, in other words, it should be variable about zero in accordance with the polarity and magnitude of variations in the state observations z due to the process disturbance. In that connection, FIG. 1b as well as FIGS. 1c and 1d illustrate the various sources of the signal .epsilon. which may be satisfactorily used by the computer 16. In FIG. 1b the signal .epsilon. is computed as a difference between the state observations z obtained from line 11 by way of line 15 and a state estimate x.sub.1 from line 17, where x.sub.1 represents an estimate which may be calculated as an intermediate calculation required in the process of computing the state x, as, for example, the computations of x.sub.1 in FIG. 4, which will be described subsequently. By subtracting the estimate x.sub.1 from the state observation z by means of the comparator 24 there is obtained a desired signal .epsilon. on line 14.

In FIG. 1c the signal .epsilon. is determined by comparing the state observations z from line 15 with the state estimate x, obtained by way of line 19 from line 13. The comparison is made by means of comparator 26. This comparison will provide the necessary zero centered signal .epsilon. which will be indicative of the magnitude and polarity of the error in the state estimate x where x is an estimate of the process variable observed as z by a process measurement.

In FIG. 1d the signal .epsilon. is obtained as a part of the computation in the estimator 10. That signal may, for example, be a signal such as the signal a.sub.1 of FIG. 4, as will be described subsequently.

From the various arrangements shown in FIGS. 1a - 1d, it will be evident that there are a number of sources for the signal .epsilon. and it is only necessary that that signal be a zero centered signal of magnitude and polarity indicative of the effect of a process disturbance on the magnitude and polarity of the estimate error.

In FIG. 2 there is shown an arrangement for computing the state estimate x from the state observations z when it is desired that the estimated value of the state being observed should represent the value being observed with a reduction of fluctuations in the observed state due to any expected disturbance in the process such as small, random short duration changes in the process characteristic represented by the state being observed. For the purpose of filtering out those insignificant fluctuations in the observed state the estimator 10 of FIG. 2 is shown as providing a first order lag whose time constant T is adjustable. Thus, the estimator includes an integrator 30 having a potentiometer 32 in the input circuit which is adjusted by servo 34 through the mechanical linkage 36. The integrator 30 also has another potentiometer 38 which is in the feedback circuit of the integrator 30 and is adjusted by mechanical linkage 36a so that potentiometers 32 and 38 have like values such that 1/T, the reciprocal of the time constant of the lag circuit, will correspond to the input to estimator 10 from line 22.

The estimated state x is compared with the observed state z by comparator 26, shown as a differential amplifier, so that there is produced on line 14 a signal .epsilon. which is a signal variable about zero in a magnitude and direction in accordance with the magnitude and direction of the variations in the state observations in response to disturbances in the process 12. Those changes in the observed state which represent expected disturbances, such as noise, are to be filtered out by the circuit of the estimator 10 so that the estimate x will more accurately represent the characteristic of the process 12 which is represented by the steady state portion of the state observation z.

If the signal .epsilon. does not exceed certain magnitudes for a duration sufficient to indicate that the change in z is the result of a process disturbance which is significant, it is desirable that the inverse of the time constant T of the first order lag circuit forming estimator 10 should be small in magnitude so that there will be a relatively long delay in the estimate x resulting from changes in the value of z. This will reduce the inaccuracy of the estimate x which might result from the estimate closely following a fluctuating observation responding to a noisy process measurement or a short term and/or small magnitude process disturbance. On the other hand, if .epsilon. is large or of long duration as a result, for example, of a large or persistent change in z, or both, the probability that the change is one which the estimate x should follow is high, as the change is less likely to represent noise or small insignificant disturbances. It is more likely to represent a significant change in the characteristic being observed and hence one which is unexpected.

To provide a means for changing 1/T appropriately, FIG. 2 provides an EACC 16 for producing on line 18 a signal representative of the probability that the process disturbance is unexpected and that signal is introduced into a function generator 20 to produce a signal 1/T as set forth above.

The EACC 16 is shown as being an analog device of the type more fully disclosed in my U. S. Pat. No. 3,419,772 with block 40 corresponding to the three section filter circuit 26 of that patent and the relays 42, 44 and 46 corresponding to relays 46, 56 and 66 respectively, of that patent. Thus, before relay 42 will be energized the value of the signal .epsilon. must exceed the magnitude A.sub.1 - K.intg..epsilon. dt for a predetermined duration T.sub.1 where A.sub.1 is a predetermined magnitude corresponding to the lowest level of the probability P and K is a constant. P, which is the signal on line 18, is directly related to the probability that the signal .epsilon. does result from a measurement indicating an unexpected disturbance in the process 12.

Relay 44 is energized when the value of .epsilon. exceeds the magnitude A.sub.2 - K.intg..epsilon. dt for a predetermined time T.sub.2 where A.sub.2 is greater than A.sub.1. Similarly, relay 46 is energized when the value of .epsilon. exceeds the magnitude A.sub.3 - K.intg..epsilon. dt for a predetermined time T.sub.3 where A.sub.3 is greater than A.sub.2.

To provide an appropriate signal P on line 18 the slidewire 48 has a tap 48a set to provide a potential on line 18 which represents the value of P related to a signal .epsilon. exceeding A.sub.1 - K.intg..epsilon. dt for the period T.sub.1 and hence actuating relay 42 to connect tap 48a and line 18 by way of contact 42a. The slidewire 48 is supplied by a voltage source E, shown as a battery; and the tap 48a is adjusted to a lower potential point on the slidewire than is contact 48b which provides the signal P when relay 44 is energized. Similarly, energization of relay 46 will provide a signal P equal to E.

From the above, it will be evident that the value of the signal P introduced as an input on line 18 to the function generator 20 will be any one of three discrete magnitudes increasing in accordance with a joint function of the magnitude and duration of .epsilon. detected by the circuit of block 40.
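The three-relay logic can be imitated digitally. The sketch below uses illustrative thresholds, required durations and probability taps (and omits the K.intg..epsilon. dt bias term of the patent circuit); it returns the highest probability level whose threshold has been exceeded for long enough:

```python
def eacc_probability(eps_history, dt, levels=(1.0, 2.0, 3.0),
                     durations=(4.0, 2.0, 1.0), probs=(0.5, 0.75, 1.0)):
    """Discrete analogue of the relay circuit: level i 'fires' when
    |eps| has exceeded levels[i] continuously for at least
    durations[i] seconds; the highest firing level sets P.
    All thresholds, durations and probability values are illustrative."""
    p = 0.0
    for level, dur, prob in zip(levels, durations, probs):
        run = longest = 0.0
        for e in eps_history:
            if abs(e) > level:
                run += dt
                longest = max(longest, run)
            else:
                run = 0.0           # excursion ended; restart the timer
        if longest >= dur:
            p = max(p, prob)
    return p
```

A brief large excursion and a long moderate one can thus produce the same probability, which is the joint magnitude-and-duration behavior described above.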

The function generator 20 is preferably of a form which will produce a signal 1/T between the minimum value 1/T.sub.o and a maximum value M on line 22 which is related to the input signal P on line 18, as shown in FIG. 2. In other words, the value of 1/T remains constant at a value 1/T.sub.o for values of P up to a certain value P.sub.c such as 0.5, for example. As P increases from P.sub.c to 1.0 the value of 1/T may increase in accordance with the equation

1/T = K[(P - P_c)/(1 - P_c)]^p + 1/T_o

for example, where p is a selected exponent and K a selected constant providing the desired characteristic for the relationship between P and 1/T. K then equals M - 1/T_o for the selected maximum M for 1/T.
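The characteristic of function generator 20 can be sketched directly from that equation; P_c = 0.5 follows the example in the text, while M, 1/T_o and the exponent p are illustrative choices, not values from the patent:

```python
def inv_time_constant(P, P_c=0.5, M=1.0, inv_T0=0.05, p=2.0):
    """1/T stays at its minimum 1/T_o for P up to P_c, then rises as
    K * ((P - P_c)/(1 - P_c))**p + 1/T_o, with K = M - 1/T_o so that
    P = 1 gives the selected maximum M."""
    if P <= P_c:
        return inv_T0
    K = M - inv_T0
    return K * ((P - P_c) / (1 - P_c)) ** p + inv_T0
```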

FIG. 3 shows an algorithm in flow chart form for carrying out the computation of the state estimate x in a digital computer so that x has a relationship to the state observation z similar to that accomplished by the circuit of FIG. 2. The first step of the algorithm shown in block 50 is the calculation of .epsilon.(k), which is the value of .epsilon. obtained by calculating from observations at the sample period k. This calculation may be by any of a number of the approaches shown in FIGS. 1a - 1d. If, for example, .epsilon.(k) is to be calculated in a manner similar to that shown in FIG. 2, then

ε(k) = z(k) - x(k)

The result of the calculation called for in block 50 is then used in the calculation called for in block 52, namely,

P(k) = f(.epsilon.(k), t) where that function may, for example, be calculated as disclosed in U. S. Pat. No. 3,633,009 issued to myself and a coworker, Thomas A. Green, on Jan. 4, 1972, namely, by calculation of the maximum P.sub.i for the value .epsilon.(k), where

P_i = 1 - e^(-γ(σ'_i, t_i))

γ(σ'_i, t_i) = (2σ'_i/σ/α) t_i

and where

σ is the standard deviation of the statistical model of the noise in the state observation, as obtained from chart records of those observations by approximating the value as one-eighth of the peak-to-peak value;

α is the average time between zero crossings of the statistical model of the state observations, as may be obtained from chart records by averaging over approximately one hundred zero crossings for proper resolution; and

t_i is the duration of the time that .epsilon. exceeds a deviation σ'_i.
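Reading the printed expression for the exponent left to right (the grouping of the 2, σ'_i, σ and α terms is ambiguous in the text, so this is one plausible reading), the probability computation of block 52 might be sketched as:

```python
import math

def disturbance_probability(sigma_prime_i, t_i, sigma, alpha):
    """P_i = 1 - e**(-gamma), with gamma read as
    (2 * sigma'_i / sigma / alpha) * t_i -- an assumed grouping.

    sigma         -- standard deviation of the expected-noise model
    alpha         -- average time between zero crossings
    t_i           -- time the error has exceeded the deviation sigma'_i
    """
    gamma = (2.0 * sigma_prime_i / sigma / alpha) * t_i
    return 1.0 - math.exp(-gamma)
```

The probability is zero when t_i = 0 and climbs toward 1 as the excursion persists, so rare, sustained excursions are flagged as unexpected disturbances.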

After calculating P(k) the algorithm proceeds to a calculation of the weighting factor W(k) as shown in block 54 where

W(k) = f[P(k)]

which function may be

W(k) = T_s/T_f for 0 ≤ P(k) < P_c

and

W(k) = K[(P(k) - P_c)/(1 - P_c)]^p + T_s/T_f for P(k) ≥ P_c

which is an exponentially shaped function similar to that shown in block 20 of FIG. 2 except that it varies between a minimum value of T_s/T_f and a maximum value equal to K + T_s/T_f. Here T_s/T_f is an approximation of 1 - e^(-T_s/T_f) which holds for small values of the exponent, such expression being well known as the weighting factor of a first order lag.

T_s is the period between samplings made to obtain the state observations and T_f is the time constant of the filtering which the particular estimating procedure is intended to provide.
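The weighting function of block 54 then parallels the function generator of FIG. 2, with T_s/T_f as the floor; the values of K, p and P_c below are illustrative assumptions:

```python
def weighting(P, T_s, T_f, P_c=0.5, K=0.9, p=2.0):
    """Block 54: W = T_s/T_f for P below P_c, and
    K * ((P - P_c)/(1 - P_c))**p + T_s/T_f at or above P_c,
    giving a maximum of K + T_s/T_f at P = 1."""
    w_min = T_s / T_f
    if P < P_c:
        return w_min
    return K * ((P - P_c) / (1 - P_c)) ** p + w_min
```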

After computing W(k), as shown in block 54, W(k) is compared with W(k-1), the weighting factor calculated at the last sample interval, as shown in block 56. If W(k) is greater than or equal to W(k-1) then the algorithm follows path 58. The program thus proceeds to block 72.

If the answer to the question posed by block 56 is "no" the program follows path 62 to block 63 where n is set equal to 1/W(k). Following block 63 the program proceeds to block 64 where n is compared with T.sub.f /T.sub.s to see if n is greater than or equal to T.sub.f /T.sub.s.

If the answer to the question posed by the block 64 is "yes" then the program leads to block 66 where W(k) is calculated as follows:

for T_f/T_s >> 1, W(k) = T_s/T_f

otherwise,

W(k) = 1 - e^(-T_s/T_f)

The program then proceeds to path 58.

If the answer to the question posed by block 64 is "no" then the program proceeds to block 70 where the following calculations are made.

n = n + K.sub.c

W(k) = 1/n

K.sub.c is related to the degree of correlation between present and past samples of the process quantities. It may be approximated by the ratio of the time between sample periods to the average time between zero crossings (T.sub.s /.alpha.). This approximation will reduce the correlation of the samples used in the filter to the order of 10 percent as the weighting factor is decreasing. The value of K.sub.c is preferably at its upper limit of 1 when samples of the signal can be considered or assumed to be uncorrelated.

After incrementing n in block 70 by K.sub.c and calculating W the program proceeds by way of path 58 to block 72.

In block 72 the calculation required to determine the state estimate x(k), when the estimate is to be the result of subjecting the observation to a first order lag, is shown as

x(k) = x(k-1) - W(k)[x(k-1) - z(k)]

Having computed the state estimate in block 72, the program proceeds to block 74 where, as indicated, W(k) is stored in the memory cell reserved for W(k-1) in preparation for the calculations for the next sample period.

Subsequent to the operation called for in block 74 the program exits.
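Blocks 50 through 74 taken together can be sketched as one sample-period routine; prob_fn and weight_fn stand in for the computations of blocks 52 and 54, and are assumptions of this sketch rather than fixed forms:

```python
import math

def filter_step(z, x_prev, w_prev, T_s, T_f, prob_fn, weight_fn, K_c=1.0):
    """One pass through the FIG. 3 flow chart."""
    eps = z - x_prev                        # block 50: error signal
    P = prob_fn(eps)                        # block 52: probability
    W = weight_fn(P)                        # block 54: weighting from P
    if W < w_prev:                          # block 56: weighting decreasing
        n = 1.0 / W                         # block 63
        if n >= T_f / T_s:                  # block 64
            # block 66: first-order-lag floor (= T_s/T_f when T_f/T_s >> 1)
            W = 1.0 - math.exp(-T_s / T_f)
        else:
            n = n + K_c                     # block 70: lengthen the average
            W = 1.0 / n
    x = x_prev - W * (x_prev - z)           # block 72: update the estimate
    return x, W                             # block 74: W stored as W(k-1)
```

On each call the returned W is stored and passed back as w_prev at the next sample, so a falling probability lets the filter relax gradually toward simple averaging rather than snapping back to the minimum weighting.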

It will be evident from the algorithm represented by the flow chart of FIG. 3 that with decreasing values of the probability P(k), and the associated decreasing values of W(k), the weighting factor W(k) is a constant T.sub.s /T.sub.f when P is below P.sub.c, and that weighting produces a value x which follows an exponential function of the observation z. If P(k) is above P.sub.c the value of W(k) is 1/n where n is related to the number of samples, so that x is so related to z as to produce a value which follows a simple average of z rather than an exponential function.

When P increases so that W(k) increases, the value of W(k) calculated as a function of P is the one used as the weighting factor, and x is calculated with less consideration for past values of z. It therefore follows that an increase in the probability P that the process is being subjected to an unexpected disturbance, that is, one which should be given a heavy weight in determining x, results in x being determined more by the existing observations and less by values from previous samples.
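The two weighting regimes and the first-order-lag update just described can be sketched in modern terms as follows. This is an illustrative Python rendering, not the patent's Fortran; the tuning values mirror those in the listing given later, and the regime selection is simplified (the full flow chart also carries n between samples):

```python
# Illustrative sketch of the FIG. 3 weighting regimes and estimate update.
# The regime selection is a simplification of the full flow chart.

TS, TF = 20.0, 30.0          # sample period and filter time constant (tunings)

def weighting(p, n, pc=0.6, kf=1.0, np_=2):
    """Weighting factor W(k) for one sample, given probability P and count n."""
    if p > pc:
        # disturbance likely: probability-driven weighting, recent data dominate
        return min(1.0, kf * ((p - pc) / (1.0 - pc)) ** np_)
    if n >= TF / TS:
        # quiescent, long run: fixed weighting, exponential (first-order) filter
        return TS / TF
    # quiescent, short run: W = 1/n, so x follows a simple average of z
    return 1.0 / n

def estimate(x_prev, z, w):
    """Block 72: x(k) = x(k-1) - W(k)[x(k-1) - z]."""
    return x_prev - w * (x_prev - z)
```

With W(k) = 1/n the recursion reproduces a running average of the observations, while with W fixed at T.sub.s /T.sub.f it reduces to an exponential filter with time constant T.sub.f.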

In FIG. 4 there is shown another type of estimator which can be utilized in place of the estimator 10 of FIGS. 1a - 1d. The estimator of FIG. 4 is specifically designed as an analog form of a ramp detector. Thus, when the state observation z is a ramp signal, the state estimate x produced by the estimator 10 is an estimate of the value of the ramp signal z. The other output of the estimator 10 of FIG. 4, namely a.sub.1, provides an estimate of the slope of the ramp of the observation z.

The analog circuit required in the estimator 10 of FIG. 4 to produce the ramp estimate and the ramp slope estimate includes a circuit for producing a signal at point 80, namely, x.sub.1 which represents an estimate of the observation after it has been subjected to a first order lag. As is evident from FIG. 4, the first order filter includes the integrator 82 in conjunction with the potentiometer 84 in the input circuit and the potentiometer 86 in the feedback circuit, the arrangement being similar to that of estimator 10 in FIG. 2. The signal x.sub.1 is then put through another first order filter which includes an integrator 88 in conjunction with potentiometer 90 in the input circuit and feedback potentiometer 92 in the feedback circuit to provide a signal on line 94, x.sub.2, which is introduced as one input to a differential amplifier 96. The other input to the amplifier 96 on line 98 is obtained from point 80 and is therefore representative of x.sub.1. That latter input is multiplied by 2 in the input circuitry of amplifier 96 so that the amplifier 96 is comparing twice the value of x.sub.1 with the value of the signal on line 94 to produce on its output line 100 the signal x indicative of the estimate of the state observation z and hence predictive of the ramp signal.

The signal x.sub.2 on line 94 is also utilized as one input to the differential amplifier 102, the other input being directly from point 80 along line 104. Thus, the amplifier 102 compares the value of x.sub.1 with the signal x.sub.2 on line 94 to provide an output signal on line 106. That output signal is then in turn supplied through the potentiometer 108 as an input to the amplifier 110 which provides the output signal a.sub.1 indicative of the estimate of the slope of the ramp signal z.
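The reason the amplifier doubles x.sub.1 can be seen from the steady-state response of cascaded first-order lags to a ramp: each lag reproduces the ramp delayed by one time constant T, so x.sub.1 = z - aT and x.sub.2 = z - 2aT, giving 2x.sub.1 - x.sub.2 = z and x.sub.1 - x.sub.2 = aT. A brief numeric check (Python, with illustrative values; the patent's circuit is analog):

```python
# Euler simulation of the two cascaded first-order lags of FIG. 4 driven by a
# ramp z(t) = a*t; checks the steady-state identities 2*x1 - x2 = z and
# (x1 - x2)/T = a. The values of a, T, dt and t_end are assumptions.

def simulate(a=0.5, T=2.0, dt=0.001, t_end=60.0):
    x1 = x2 = 0.0
    z = 0.0
    t = 0.0
    while t < t_end:
        z = a * t
        x1 += (dt / T) * (z - x1)    # first lag (integrator 82)
        x2 += (dt / T) * (x1 - x2)   # second lag (integrator 88)
        t += dt
    return z, x1, x2

z, x1, x2 = simulate()
ramp_estimate = 2.0 * x1 - x2        # output of amplifier 96
slope_estimate = (x1 - x2) / 2.0     # amplifier 102 output scaled by 1/T
```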

As shown in FIG. 4 the potentiometers 84, 86, 90, 92 and 108 are all simultaneously adjusted by servo 120 in response to a change in input signal to the estimator 10 on line 22, representing 1/T, the reciprocal of the time constant of the circuit for obtaining the estimate. As shown in FIG. 4 the servo 120 operates the mechanical couplings 122 to the several potentiometers above mentioned to provide the necessary simultaneous adjustment of the potentiometers to match their settings to the input signal supplied on line 22.

The estimator 10 of FIG. 4 may not only be used in the configuration of FIG. 1a, but is peculiarly adaptable to use in the configuration of FIG. 1b in that the signal x.sub.1 required on line 17 of FIG. 1b is obtainable from point 80 of FIG. 4. Likewise, the estimator of FIG. 4 is useful in the configuration of FIG. 1c and the configuration of FIG. 1d. In the configuration of FIG. 1d the signal on line 14 may be obtained directly from the output line of amplifier 110 of FIG. 4, for example. Use in the configuration of FIG. 1c is similar to that shown in FIG. 2.

The estimation carried out by the circuit of FIG. 4, which is essentially a ramp detector, can be executed by a digital computer programmed to follow an algorithm of the type shown in the flow chart of FIG. 5 which includes as a first step the calculation of the signal .epsilon.(k) as shown in block 150. That step is followed by the calculation of the probability P as a function of both .epsilon.(k) and time t as shown in block 152. The step shown in block 152 is followed by the calculation of the weighting factor W(k) as a function of the probability P(k) as shown in block 154. The above 3 steps of the algorithm shown in blocks 150, 152 and 154 are similar to the steps shown in blocks 50, 52 and 54 of FIG. 3.

The calculated value W(k) is then tested to determine if it is equal to or greater than the value obtained as a result of the previous sample, namely, W(k-1), as shown in block 156. If W(k) is not equal to or greater than W(k-1) the program proceeds to block 157; however, if W(k) is greater than or equal to W(k-1), a calculation is made of the intermediate estimate x.sub.1 (k-1) by the following equation as shown in block 158.

x.sub.1 (k-1) = x.sub.1 (k-2) + W(k)[z(k-1) - x.sub.1 (k-2)]

where (k) indicates the values obtained as a result of the present sample,

(k-1) indicates values obtained at the sample previous to the present sample,

and (k-2) indicates the values obtained from the sample just prior to (k-1).

Following the calculation shown in block 158 a calculation is made of the value x.sub.2 (k-1) in accordance with the equation shown in block 162, namely,

x.sub.2 (k-1) = x.sub.2 (k-2) + W(k)[x.sub.1 (k-1) - x.sub.2 (k-2)].

Following the calculation shown in block 162 the program then goes to line 160.

If the program had proceeded to block 157 instead of going to block 158 then n would have been set equal to 1/W(k) and the program would have proceeded to block 159 where n would be examined to determine if it were greater than or equal to T.sub.f /T.sub.s. If it were determined to be not greater than or equal to T.sub.f /T.sub.s then n would be incremented by the value K.sub.c and W(k) would be set equal to 1/n, as shown in block 161, after which the program proceeds to line 160. On the other hand, if n is greater than or equal to T.sub.f / T.sub.s, the program would progress from block 159 to block 163 where W(k) is set equal to T.sub.s /T.sub.f, which represents an approximation of the quantity 1 - e.sup.-.sup.T s.sup./T f, after which the program proceeds to line 160.
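In outline, the branch just described may be sketched as follows (an illustrative Python rendering rather than the patent's Fortran; the tunings Ts, Tf and Kc are assumed values, and the re-initialization of the intermediate estimates in block 158 is omitted):

```python
# Sketch of the FIG. 5 adaptive-weighting branch (blocks 156, 157, 159, 161
# and 163). Returns the weighting factor actually used this sample period.

def adapt_weight(wk, wk_prev, ts=1.0, tf=30.0, kc=0.1):
    if wk >= wk_prev:
        return wk                  # block 156: W has grown, keep the new value
    n = 1.0 / wk                   # block 157: equivalent sample count
    if n >= tf / ts:
        return ts / tf             # block 163: approximates 1 - exp(-Ts/Tf)
    return 1.0 / (n + kc)          # block 161: lengthen the averaging window
```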

Proceeding from line 160 a calculation is made as shown in block 166 of the value x.sub.1 (k) in accordance with the following equation:

x.sub.1 (k) = x.sub.1 (k-1) + W(k)[z(k) - x.sub.1 (k-1)]

That calculation is then followed by the calculation of x.sub.2 (k) shown in the block 168, namely:

x.sub.2 (k) = x.sub.2 (k-1) + W(k)[x.sub.1 (k) - x.sub.2 (k-1) ]

With the values x.sub.1 (k) and x.sub.2 (k) the calculation shown in block 170 can be made to obtain one of the outputs, namely, a.sub.1 representing the slope of the ramp of the state observation z. That calculation, however, is not made if W(k) = 1, for in that case a.sub.1 is set to zero and the program proceeds to block 172. This will be evident from the use of block 171 to test W(k) for equality with a unity value followed by the setting of a.sub.1 to zero in block 169. The calculation shown in block 170 which follows the equation

a.sub.1 = C(k)[x.sub.1 (k) - x.sub.2 (k)]

is carried out if W(k) is not equal to one as determined in block 171 and after the coefficient C(k) is determined in block 175 as

C(k) = W(k)/[T.sub.s (1 - W(k))]

and is used to give the ramp rate. Conventional values of C(k) can be obtained by reference to

H. A. Fertik, "Simple Smoothing and Forecasting Methods for Use in Process Control," 1971 JACC, St. Louis, Mo.

where it is given as .alpha./.beta.. For the special case of W(k) = 1, C(k) has the value 0. It should be noted that when W(k) is fixed at T.sub.s /T.sub.f, C(k) is approximately 1/T.sub.f, which is comparable to the quantity 1/T supplied on line 22 of the analog embodiment of FIG. 4.
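The stated approximation is easy to verify. Algebraically, with W = T.sub.s /T.sub.f the coefficient becomes (T.sub.s /T.sub.f)/[(1 - T.sub.s /T.sub.f)T.sub.s ] = 1/(T.sub.f - T.sub.s ), which approaches 1/T.sub.f when T.sub.s << T.sub.f. A brief numeric check (Python; the values are illustrative, chosen with T.sub.s << T.sub.f, the regime in which the approximation holds):

```python
# Coefficient C(k) = W(k) / ((1 - W(k)) * Ts) with W fixed at Ts/Tf.
# Algebraically this equals 1/(tf - ts), close to 1/tf when ts << tf.
ts, tf = 1.0, 100.0
w = ts / tf
c = w / ((1.0 - w) * ts)
```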

After the calculation of a.sub.1 in block 170 the program proceeds to calculate the other output x(k) as shown in block 172 in accordance with the equation:

x(k) = 2x.sub.1 (k) - x.sub.2 (k)

Following the calculation of the several outputs a series of storage steps must be accomplished. These are all shown in block 174 and involve the storage of x.sub.1 (k-1) into the storage cell for x.sub.1 (k-2), x.sub.1 (k) in the storage cell for x.sub.1 (k-1), and a similar storage for the value of x.sub.2 obtained at the different sample intervals as well as the storage of W(k) and z(k) in the respective storage cells for W(k-1) and z(k-1). After the storage operation indicated in block 174 the program exits having calculated x(k), the estimated value of z as well as a, the estimated slope of z.
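The core of blocks 166 through 172 condenses to a few lines. The following is a hedged Python sketch, not the patent's program; it holds W fixed and omits the adaptive branch and the block 174 storage bookkeeping. Fed a noise-free ramp, the slope output a.sub.1 converges to the true ramp rate:

```python
# Digital ramp detector per FIG. 5 blocks 166-172, with fixed W for clarity.

def ramp_detector(samples, w, ts):
    """Return (x, a1): ramp estimate and slope estimate after all samples."""
    x1 = x2 = samples[0]
    for z in samples[1:]:
        x1 = x1 + w * (z - x1)         # block 166
        x2 = x2 + w * (x1 - x2)        # block 168
    if w == 1.0:
        a1 = 0.0                       # block 169: a1 set to zero when W = 1
    else:
        c = w / ((1.0 - w) * ts)       # block 175
        a1 = c * (x1 - x2)             # block 170
    x = 2.0 * x1 - x2                  # block 172
    return x, a1

# For a noise-free ramp z(k) = slope * k * Ts the slope output converges:
ts, slope = 1.0, 0.3
zs = [slope * k * ts for k in range(200)]
x, a1 = ramp_detector(zs, w=0.1, ts=ts)
```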

The algorithm of FIG. 5 can be executed in any of a number of digital computers having a compiler for accepting a Fortran IV program by use of the following program:

1: C RAMP DETECTOR

2: c

3: real k,n

4: dimension t(10), x(10), sigmap(10)

5: c tuning values and parameter

6: kf=1.0

7: pc=0.6

8: np=2.

9: to=.2

10: ts=20.

11: sigma=2.5

12: alpha=200.

13: sigmap(1)= 0.

14: sigmap(2) = 0.5

15: sigmap(3) = 1.0

16: sigmap(4) = 1.5

17: sigmap(5) = 2.0

18: sigmap(6) = 2.5

19: sigmap(7) = 3.0

20: sigmap(8) = 3.5

21: sigmap(9) = 4.0

22: sigmap(10) = 5.0

23: tf=30

24: k=.1

25: c deterministic probability calculator

26: c

27: c input e every ts seconds, calculate px

28: 300 call clock(1, tm)

29: if(tm-ts)300,301,301

30: 301 tm=0

31: call aiscan(190, e,1, np)

32: 308 if(e*el)309,311,311

33: 309 do 310 i=1,10

34: 310 t(i)=0

35: 311 xmax=0

36: do 320 i=1,10

37: if(e-sigmap(i)) 315,312,312

38: 312 t(i)=t(i)+ts

39: x(i)=(2.**(sigmap(i)/sigma))*(t(i)/alpha)

40: if(x(i)-xmax)411,411,400

41: 400 xmax=x(i)

42: 411 go to 320

43: 315 t(i)=0

44: 320 continue

45: px=1.-exp(-xmax)

46: el=e

47: c

48: c calculation of weighting factor

49: wk=kf*((px-pc)/(1.-pc))**np+1./to

50: c adapting weighting factor

51: if(wk-wk1)157,158,158

52: 157 n=1./wk

53: if(n-(tf/ts))161,163,163

54: 161 n=n+k

55: wk=1./n

56: go to 166

57: 163 wk=ts/tf

58: go to 166

59: 158 x1k1=x1k2+wk*(zk1-x1k2)

60: x2k1=x2k2+wk*(x1k1-x2k2)

61: c calculation of filtered value and slope

62: 166 x1k=x1k1+wk*(zk-x1k1)

63: x2k=x2k1+wk*(x1k-x2k1)

64: if(wk-1) 170,169,170

65: 169 a1=0

66: go to 172

67: 170 ck=wk/((1.-wk)*ts)

68: a1=ck*(x1k-x2k)

69: 172 xk=2.*x1k-x2k

70: c update storage terms

71: x1k2=x1k1

72: x1k1=x1k

73: x2k2=x2k1

74: x2k1=x2k

75: wk1=wk

76: zk1=zk

77: c display results through analog outputs

78: call dacon(1, px, 2, wk, 3, x1k, 4, a1)

79: go to 300

80: end

In this program it will be evident that instructions 5 through 24 define selected parameters or "tunings" necessary for calculation of the algorithms contained in the program.

The instructions beginning with instruction 28 and ending with instruction 79 constitute a sequence of calculations which is to be repeated every TS seconds. To accomplish this, instruction 28 is a subroutine call which causes the computer to read the value of a counter TM which is incremented independently by clock 1. The clock increments cell TM every second.

The operation called for by instruction 31 uses a subroutine AISCAN to input a value E, external to the computer and connected to input channel 190, setting flag NP when the new value of E is available for use by the program.

The calculation called for in block 152 of FIG. 5 is carried out by instructions 32 through 46, it being assumed that .epsilon.(k) is available. Those instructions carry out the procedure set forth by the algorithm of FIG. 2 of U.S. Pat. No. 3,633,009 to calculate the probability noted as P in the present FIG. 5. Following the calculation of P, instruction 49 carries out the calculation of the weighting factor as shown in block 154 in FIG. 5, which is then followed by instruction 51 which carries out the function of block 156.

Instructions 51 through 58 set up the procedure for following the path through block 157 and 159 and then through either blocks 161 or 163 of FIG. 5.

Instructions 59 and 60 call for the calculations in blocks 158 and 162 of FIG. 5, respectively. Those calculations are followed by the calculations called for in blocks 166 and 168 as set forth in instructions 62 and 63 which determine the filtered values x.sub.1 (k) and x.sub.2 (k).

Instruction 64 sets up the procedure for branching from block 171 to either instruction 65 or 67 for determining the slope of the ramp a.sub.1. Those instructions are followed by instruction 69 which calls for the calculation of the estimated value of the ramp as set forth in block 172. Subsequently, instructions 71 through 76 update the storage terms as set forth in block 174.

The program then uses subroutine DACON which converts and outputs variables P or PX, W(k) or WK, x(k) or X1K, and a.sub.1 or A1 in the form of voltages for control or display purposes as called for by instruction 78.

It will be evident that the Fortran program for carrying out the procedure set forth in the algorithm of FIG. 3 is a subset of the program described above for FIG. 5. It is therefore not set forth in detail.

The systems described above for state estimation are useful in application to any of a number of processes, such as routine industrial processes; however, they are of particular advantage when used in connection with the estimation of a state of a power system. For example, consider the arrangement of FIG. 1a. If the process is a power system, the state observation z may be the power flow over a particular tie line of the power system. That flow may be subject to random variations as a result of the normal connection and disconnection of loads in the power system, and it may also be subject to large and/or long term changes due to unexpected increases or decreases in the load on the power system. In such an application the state which is estimated, x, may be the power flow over the particular tie line being observed, with the objective being to eliminate the noise from the observed value without eliminating significant changes, as by use of an estimator of the type shown in FIGS. 2 and 3. The signal applied from the process on line 14, namely .epsilon., may be the computed area control error, that is, a frequency biased tie line deviation measurement as is normally used in power systems as an indication of the deviation of the area load on the system from the total area generation of the system.

Where the estimator 10 is a ramp estimator of the type described in connection with FIGS. 4 and 5, the system of FIG. 1b may be used, and in that arrangement the signal .epsilon. on line 14 is no longer the area control error as mentioned above, but is instead a comparison of the state observation z with an intermediate estimate x.sub.1.

The arrangements of FIGS. 1c and 1d may likewise be used in connection with power systems in that they are similar to those of FIGS. 1a and 1b except that the zero centered error signal .epsilon. is obtained in a different way, as has been previously explained.

The variation of the weighting factor in a state estimation calculation in accordance with the probability that the process disturbance is unexpected may be extended beyond the simple forms of estimators described above to the more complicated multi-variable state estimators of the type described, for example, in the previously referenced article by Atif S. Debs and Robert E. Larson, "A Dynamic Estimator for Tracking the State of a Power System," IEEE Transactions on Power Apparatus and Systems, Sept.-Oct. 1970, Vol. 89, pp. 1670-1678.

* * * * *

