Sigurd Skogestad. Work on self-optimizing control

S. Skogestad ``Plantwide control: the search for the self-optimizing control structure'', J. Proc. Control, 10, 487-507 (2000).


For related work on plantwide control, see here

Self-optimizing control is about selecting the right economic controlled variables (CVs), and thereby moving more of the burden of economic optimization from the slower time scale of the real-time optimization (RTO) layer into the faster setpoint control layer. Ideally, the setpoint of a self-optimizing variable is independent of disturbances, but in practice some adjustment may be needed on a slower time scale. This discussion about reducing setpoint changes for CVs concerns the unconstrained degrees of freedom (for example, a trade-off between too much and too little recycle), but usually most of the CVs are active constraints where the setpoint is fixed by specifications (for example, a maximum allowed impurity in a product). One may view the active constraints as the "obvious self-optimizing variables", because it is clear that tight control of these variables (with a small back-off) gives more optimal operation. The active constraints may change depending on the operating conditions, and the conventional approach for handling this is to use selectors. The selector approach often works very well, but it depends on selecting appropriate MV-CV pairings, and there may be more complex cases where centralized optimization (RTO) is needed.

In today's situation, with more renewable energy sources and more flexible operation, the self-optimizing approach has a large potential. Moving more of the optimization into the fast control layer allows for much faster adjustment to changing conditions, for example, in cases where power prices vary and make frequent changes in operation necessary. If the optimal setpoints of the unconstrained CVs vary, then one approach is to use models (or data) to predetermine the optimal setpoints as a function of the expected disturbances, including price variations.

What should we control?

Self-optimizing control is a strategy for selecting controlled variables. It is distinguished by the fact that an economic objective function is adopted as the selection criterion. Thus, "self-optimizing" control is the effective translation of economic objectives into control objectives. First, we should control the active constraints (keeping these at their limits is optimal from an economic point of view, in terms of minimizing the cost). Any deviation from the active constraints (denoted "back-off") gives a loss. These may be viewed as the obvious self-optimizing variables.

Second, we need to find controlled variables associated with the unconstrained degrees of freedom. These are the less obvious self-optimizing control variables. We are looking for some "magic" variables which, when kept constant, indirectly achieve optimal operation in spite of disturbances.

Self-optimizing control

More generally, the following definition of self-optimizing control is probably useful.

"Self-optimizing control is when acceptable operation under all conditions is achieved with constant setpoints for the controlled variables."

Here "acceptable operation" is more precisely defined by the value of the loss, and "under all conditions" means for the defined disturbances, plant changes and implementation errors.

To include biological systems, the term "self-optimizing control" should possibly be broadened further, for example by replacing "with constant setpoints for the controlled variables" by "by controlling the right variables" or something similar.

The main issues in selecting controlled variables are disturbances and implementation error (noise, measurement error). All results below are based on a steady-state analysis, since the economics of most processes are determined mainly by the steady-state behavior, but the extension to batch processes (with optimal trajectories) is simple.
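For concreteness, the loss for a given disturbance and implementation error may be written as follows (a standard formulation; the symbols here are my shorthand):

```latex
% Loss when the CVs c are kept at constant setpoints c_s, for a given
% disturbance d and implementation error n (notation assumed here):
L(d,n) = J\big(u(c_s + n,\, d),\, d\big) - J_{\mathrm{opt}}(d),
\qquad J_{\mathrm{opt}}(d) = \min_{u} J(u,d),
% where u(c_s + n, d) is the input that the control layer generates to
% keep c = c_s + n for the disturbance d.
```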

A survey of self-optimizing control was published in 2017:

  • J. Jäschke, Y. Cao and V. Kariwala, ``Self-optimizing control - A survey'', Annual Reviews in Control (2017).

    A good introduction to self-optimizing control, with lots of simple examples, is the following paper:

  • S. Skogestad, ``Near-optimal operation by self-optimizing control: From process control to marathon running and business systems'', Computers and Chemical Engineering, 29 (1), 127-137 (2004).
  • Corresponding slides from the PSE-conference (2004)

  • 1. Linear measurement combinations as controlled variables (optimal local methods)

    A. Nullspace method (optimal with no measurement noise)

    An extremely simple method (the "nullspace method") has been derived by Vidar Alstad, which gives the optimal linear measurement combination c=Hy (with zero loss) for the case with no implementation error (i.e., the noise-free case, n=0). It is briefly described in the first paper, and more details can be found here; a small numerical sketch is given after the reference below.

  • V. Alstad and S. Skogestad, ``Null Space Method for Selecting Optimal Measurement Combinations as Controlled Variables'', Ind.Eng.Chem.Res, 46 (3), 846-853 (2007).
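To illustrate the idea (my sketch, not code from the paper): with the optimal sensitivity matrix F = dy_opt/dd, the nullspace method simply picks H such that HF = 0, which requires at least nu + nd independent measurements. A minimal Python sketch with assumed numbers:

```python
import numpy as np
from scipy.linalg import null_space

# Nullspace method sketch: ny = 3 measurements, nu = 1 input, nd = 2
# disturbances, so the requirement ny >= nu + nd holds. F = dy_opt/dd is
# the optimal sensitivity matrix; the numbers are assumed, for
# illustration only.
F = np.array([[ 1.0,  0.5],
              [ 0.2, -1.0],
              [-0.6,  0.3]])

# The rows of H span the left nullspace of F, so that H F = 0 and the
# combination c = H y stays at its setpoint for the modeled disturbances.
H = null_space(F.T).T           # shape (ny - nd, ny) = (1, 3) here

print(np.allclose(H @ F, 0.0))  # True: zero loss in the noise-free case
```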
  • B. Optimal combination (Exact local method)

    The nullspace method neglects the implementation error, and originally I thought a numerical search was required to find the optimal combination H for the case with implementation error included. However, a trick may be used to turn the seemingly nonconvex optimization problem into a constrained QP problem. From this an explicit formula may be derived (sketched after the reference below); see

  • V. Alstad, S. Skogestad and E.S. Hori, ``Optimal measurement combinations as controlled variables'', Journal of Process Control, 19, 138-148 (2009).
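Paraphrasing the notation of that paper (see the reference for the precise definitions and scalings; I may be glossing over details), the problem and its explicit solution take the following form:

```latex
% Exact local method, paraphrased. With \tilde{F} = [\, F W_d \;\; W_n \,]
% (optimal sensitivities and noise weights), the combination matrix H
% solves the constrained problem
\min_{H}\ \| H \tilde{F} \|_F
\quad \text{subject to} \quad H G^y = J_{uu}^{1/2},
% which admits the explicit solution
H^T = (\tilde{F}\tilde{F}^T)^{-1} G^y
      \left( (G^y)^T (\tilde{F}\tilde{F}^T)^{-1} G^y \right)^{-1} J_{uu}^{1/2}.
```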
  • 2. The ideal global self-optimizing variable: The gradient

    If measurement noise is not considered, the ideal "global" self-optimizing variable is the gradient Ju. In this case we do not have to find the optimal setpoint, as we do for all the other methods, since the gradient is zero at the (unconstrained) optimum for any disturbance; see the small example after the references below. The first to mention this method was Halvorsen:
  • I.J. Halvorsen, S. Skogestad, Indirect on-line optimization through setpoint control, in: AIChE 1997 Annual Meeting, Los Angeles; paper 194h.
  • I.J. Halvorsen, S. Skogestad, J.C. Morud and V. Alstad, ``Optimal selection of controlled variables'', Ind. Eng. Chem. Res., 42 (14), 3273-3284 (2003).
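A toy example (mine, for illustration) shows why the gradient needs no setpoint update:

```latex
% Assumed scalar example: a cost with a disturbance-dependent optimum,
J(u,d) = (u-d)^2 \quad\Rightarrow\quad J_u = 2(u-d).
% Controlling c = J_u at the fixed setpoint c_s = 0 gives u = d, which is
% optimal for every disturbance d; no setpoint update is ever needed.
```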

  • 3. Direct loss evaluation method (brute-force method)

    With this method we can evaluate any proposed variable, including single measurements, and also include nonlinearity and feasibility for larger disturbances. This paper defines the issues more carefully and describes the "brute-force" approach (direct loss evaluation) for selecting self-optimizing controlled variables (a small sketch of the procedure is given after the references below):

  • S. Skogestad, ``Plantwide control: the search for the self-optimizing control structure'', J. Proc. Control, 10, 487-507 (2000).
  • The following paper applies the "brute-force" approach to the Tennessee-Eastman challenge problem and discusses in particular the feasibility issue (which, by the way, a local method will not detect):

  • T. Larsson, K. Hestetun, E. Hovland, and S. Skogestad, ``Self-Optimizing Control of a Large-Scale Plant: The Tennessee Eastman Process'', Ind. Eng. Chem. Res., 40 (22), 4889-4901 (2001).
  • The following paper discusses in more detail the issue of back-off and also shows that in some cases - in particular to remain feasible - it is optimal to use "robust" setpoints rather than the nominally optimal setpoints:

  • M.S. Govatsmark and S. Skogestad, ``Selection of controlled variables and robust setpoints'', Ind.Eng.Chem.Res, 44 (7), 2207-2217 (2005).
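The sketch promised above: a minimal direct loss evaluation for an assumed toy process (all functions and numbers are illustrative, not taken from the papers). For each disturbance, the candidate CV is held at its nominally optimal setpoint and the loss relative to true reoptimization is recorded:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Brute-force (direct loss evaluation) sketch for an assumed toy process
# with one input u and one disturbance d. Everything here is illustrative.

def J(u, d):
    return (u - d)**2 + 0.1 * u**2      # assumed economic cost

def c_meas(u, d):
    return u + 0.5 * d                  # assumed candidate CV, c = u + 0.5 d

def u_opt(d):
    return minimize_scalar(lambda u: J(u, d)).x

d_nom = 0.0
c_s = c_meas(u_opt(d_nom), d_nom)       # setpoint fixed at the nominal optimum

losses = []
for d in np.linspace(-1.0, 1.0, 21):    # the defined disturbance range
    u = c_s - 0.5 * d                   # input that keeps c = c_s for this d
    losses.append(J(u, d) - J(u_opt(d), d))

print(f"worst-case loss: {max(losses):.4f}")
```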
  • 4. Another local method: the "maximum gain rule"

    A local analysis, in particular the "maximum gain rule" (maximize the minimum singular value of the scaled gain matrix), is numerically more efficient than the brute-force approach. The best alternatives can then be analyzed in more detail using the "brute-force" method. The maximum gain rule is derived in the following paper, together with the "exact local method" and the use of optimal linear measurement combinations:

  • I.J. Halvorsen, S. Skogestad, J.C. Morud and V. Alstad, ``Optimal selection of controlled variables'', Ind. Eng. Chem. Res., 42 (14), 3273-3284 (2003).
  • The following summary of the maximum gain rule may be useful: Pages from corrected version of book (July 2007)

    The maximum gain rule has been applied to many examples; a small numerical sketch is given after the reference below. In particular, for scalar cases it is very simple (and efficient!) to use. For multivariable cases, maximizing the minimum singular value is usually OK, but it may fail for some ill-conditioned processes, like distillation:

  • E.S. Hori, S. Skogestad and M.A. Al-Arfaj, ``Self-optimizing control configurations for two-product distillation columns'', Proceedings of Distillation and Absorption 2006, London, UK, 4-6 Sept. 2006.
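The sketch referred to above (assumed numbers throughout): the rule amounts to maximizing the minimum singular value of the scaled gain G' = S1 G Juu^{-1/2}, where S1 = diag(1/span(ci)); the worst-case local loss is then bounded by 1/(2 sigma_min(G')^2):

```python
import numpy as np
from itertools import combinations

# Maximum gain rule sketch: among candidate CV subsets, pick the one that
# maximizes the minimum singular value of the scaled gain. All numbers
# are assumed, for illustration only.
G_full = np.array([[ 2.0, 0.1],
                   [ 0.3, 1.5],
                   [ 1.0, 1.0],
                   [-0.5, 2.0]])         # gains from 2 inputs to 4 candidate CVs
span = np.array([1.0, 0.5, 2.0, 1.0])    # optimal variation + noise per candidate
Juu  = np.array([[4.0, 0.0],
                 [0.0, 1.0]])            # assumed Hessian of the cost wrt inputs

L = np.linalg.cholesky(Juu)              # Juu = L L^T
S2 = np.linalg.inv(L).T                  # S2 S2^T = Juu^{-1}, a valid Juu^{-1/2}

best = None
for idx in combinations(range(4), 2):    # choose 2 CVs for the 2 inputs
    S1 = np.diag(1.0 / span[list(idx)])
    g = np.linalg.svd(S1 @ G_full[list(idx), :] @ S2, compute_uv=False).min()
    if best is None or g > best[0]:
        best = (g, idx)

print(f"best CV set {best[1]}, scaled minimum singular value {best[0]:.3f}")
```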

  • Software

  • You can find some software from my students on my home page. The most recent PhD thesis dealing with self-optimizing control is by Ramprasad Yelchuru (2012), who developed MIQP software to find optimal measurement subsets.
  • Victor Alves, Felipe Lima and Antonio Araujo at Campina Grande in Brazil have developed a Python package to extract optimal operating data from the Aspen process simulator.