Robust Steady-State-Aware MPC for Resource-Constrained Systems with Disturbances

A novel robust Model Predictive Control framework combining steady-state awareness with tube-based design for systems with limited computational resources and external disturbances.

1. Introduction

Model Predictive Control (MPC) is an advanced control strategy renowned for its ability to handle multi-variable systems with constraints. However, its reliance on solving an optimization problem online at each time step creates a significant computational burden. This limitation is particularly acute for systems with constrained computational resources, such as embedded systems, drones, or edge computing devices. Traditional approaches to mitigate this—such as shortening the prediction horizon—often compromise performance guarantees like steady-state convergence. The steady-state-aware MPC framework, introduced as a solution, ensures output tracking and convergence to a desired equilibrium without extra online computation. Yet, its critical flaw is a lack of robustness against external disturbances, a non-negotiable requirement for real-world deployment. This paper directly addresses this gap by integrating tube-based robust control techniques into the steady-state-aware MPC framework, creating a method that is both computationally efficient and disturbance-resilient.

2. Preliminaries & Problem Statement

The paper considers discrete-time linear time-invariant (LTI) systems subject to bounded additive disturbances and state/input constraints. The core problem is to design an MPC law that: 1) Operates with a short, fixed prediction horizon to limit online computation. 2) Guarantees constraint satisfaction at all times. 3) Ensures convergence to a desired steady-state. 4) Is robust to persistent, bounded external disturbances. The system is modeled as: $x_{k+1} = Ax_k + Bu_k + w_k$, where $x_k \in \mathbb{R}^n$, $u_k \in \mathbb{R}^m$, and $w_k \in \mathbb{W} \subset \mathbb{R}^n$ is a bounded disturbance. The sets $\mathbb{X}$ and $\mathbb{U}$ define state and input constraints, respectively.
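As a concrete illustrative instance, the sketch below simulates one such disturbed LTI system, a discretized double integrator with a box disturbance set $\mathbb{W}$. The matrices, gain, and bounds are placeholder values chosen for the example, not values from the paper:

```python
import numpy as np

# Illustrative instance of x_{k+1} = A x_k + B u_k + w_k
# (placeholder numbers, not the paper's): a double integrator, dt = 0.1 s
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.005],
              [0.1]])
w_max = 0.01                 # ||w_k||_inf bound defining the box set W

rng = np.random.default_rng(0)

def step(x, u):
    """One step of the disturbed LTI system."""
    w = rng.uniform(-w_max, w_max, size=2)   # sample w_k from W
    return A @ x + B @ np.atleast_1d(u) + w

x = np.array([1.0, 0.0])
for _ in range(50):
    # simple stabilizing state feedback, standing in for any controller
    x = step(x, -0.5 * x[0] - 1.0 * x[1])
```

Because the disturbance never vanishes, the state settles into a neighborhood of the origin rather than the origin itself, which is exactly the behavior the robust design must account for.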

3. Proposed Robust Steady-State-Aware MPC

3.1 Core Formulation

The proposed controller builds upon the nominal steady-state-aware MPC. The key is to parameterize the predicted state trajectory to inherently drive the system toward a feasible steady-state $(x_s, u_s)$. The online optimization problem is formulated to minimize a cost function over the short horizon while enforcing terminal constraints that link the final predicted state to this steady-state, ensuring long-horizon convergence properties despite the short prediction window.

3.2 Tube-Based Disturbance Handling

To introduce robustness, the authors employ a tube-based MPC strategy. The central idea is to decompose the control policy into two components: a nominal input calculated by solving the steady-state-aware MPC for a disturbance-free model, and an ancillary feedback law designed offline to keep the actual, disturbed state within a bounded "tube" around the nominal trajectory. This tube, often defined as a Robust Positively Invariant (RPI) set, guarantees that if the nominal state satisfies tightened constraints, the actual state will satisfy the original constraints despite disturbances. This elegant decoupling means the complex robust constraint handling is done offline, preserving the online computational simplicity of the nominal controller.
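The decoupling can be seen directly in the error dynamics: with $u_k = \bar{u}_k + K(x_k - \bar{x}_k)$, the error $e_k = x_k - \bar{x}_k$ obeys $e_{k+1} = (A+BK)e_k + w_k$, independent of the nominal plan. A minimal sketch with illustrative matrices and gain (not the paper's):

```python
import numpy as np

# Tube decomposition sketch:
#   u_k = u_nom_k + K (x_k - x_nom_k)  =>  e_{k+1} = (A + B K) e_k + w_k
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
K = np.array([[-0.5, -1.0]])          # stabilizing ancillary gain (e.g. from LQR)
A_cl = A + B @ K
w_max = 0.01

rng = np.random.default_rng(1)
e = np.zeros(2)
peak = 0.0
for _ in range(500):
    w = rng.uniform(-w_max, w_max, size=2)
    e = A_cl @ e + w                   # error dynamics, independent of the nominal plan
    peak = max(peak, np.linalg.norm(e, np.inf))
```

Since $A+BK$ is stable and $w_k$ is bounded, the recorded peak error stays bounded over the whole run; that bounded set is the "tube" the actual trajectory lives in.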

4. Theoretical Analysis

4.1 Recursive Feasibility

The paper provides a rigorous proof that if the optimization problem is feasible at the initial time step, it remains feasible for all future time steps under the action of the proposed control law and in the presence of bounded disturbances. This is a fundamental requirement for any practical MPC implementation.

4.2 Closed-Loop Stability

Using Lyapunov stability theory, the authors demonstrate that the closed-loop system is Input-to-State Stable (ISS) with respect to the disturbance. This means the system's state will ultimately converge to a bounded region around the desired steady-state, with the size of this region proportional to the bound on the disturbances.
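In standard form (with generic comparison functions; the paper's specific bounds may differ), ISS means there exist a class-$\mathcal{KL}$ function $\beta$ and a class-$\mathcal{K}$ function $\gamma$ such that $$ \|x_k - x_s\| \le \beta(\|x_0 - x_s\|, k) + \gamma\Big(\sup_{0 \le j < k} \|w_j\|\Big) \quad \forall k \ge 0. $$ As $k \to \infty$ the first term vanishes, leaving exactly the bounded residual region of size $\gamma(\sup \|w\|)$ described above; with $w \equiv 0$ the bound recovers nominal asymptotic convergence to $x_s$.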

5. Simulation Results

Numerical simulations on a benchmark system (e.g., a double integrator) are used to validate the controller's performance. Key metrics include: constraint violation (none observed), convergence error (bounded within the theoretical tube), and computation time per control step (significantly lower than a long-horizon robust MPC). The results visually demonstrate how the actual state trajectory remains within the computed tube around the nominal trajectory, even under persistent disturbances.

6. Experimental Validation on Parrot Bebop 2

The proposed method's practicality is tested on a Parrot Bebop 2 quadrotor drone, a platform with limited onboard processing power. The control objective is trajectory tracking (e.g., a figure-eight pattern) in the presence of simulated wind gusts (modeled as disturbances). The experimental data shows that the robust steady-state-aware MPC successfully maintains the drone close to the desired path with minimal deviation, while the onboard computer's CPU usage remains within acceptable limits, confirming the method's computational efficiency and real-world robustness.

7. Conclusion

The paper successfully presents a novel robust MPC framework that merges the computational benefits of steady-state-aware design with the robustness guarantees of tube-based MPC. It provides a viable solution for implementing high-performance, constraint-aware control on resource-constrained systems operating in uncertain environments, as proven by both theoretical analysis and hardware experiments.

8. Original Analysis & Expert Commentary

Core Insight: This paper isn't just another incremental MPC tweak; it's a strategic engineering compromise executed with surgical precision. The authors have identified the exact trade-off point between computational tractability and robust performance for embedded systems. They accept the limitation of a short prediction horizon—a major concession—but ingeniously recoup the lost guarantees (steady-state convergence, robustness) through clever offline design (tube sets, steady-state parameterization). This is control engineering as resource management.

Logical Flow: The argument is compelling and linear. Start with an unsolved problem (robustness gap in efficient MPC), select a theoretically sound tool (tube MPC) known for decoupling complexity, and integrate it seamlessly into an existing efficient framework (steady-state-aware MPC). The validation escalates logically from theory (proofs) to simulation (concepts) to experiment (reality on a drone), following the gold standard exemplified by seminal works like the original Tube MPC paper by Mayne et al. (2005) in Automatica.

Strengths & Flaws: The primary strength is practicality. By leveraging tube-based methods, the approach sidesteps the need for complex online min-max optimizations, which are computationally prohibitive. The use of a drone for validation is excellent—it's a relatable, resource-constrained platform. However, the flaw lies in the conservatism inherent to tube MPC. The offline computation of the RPI set and the subsequent constraint tightening can significantly shrink the feasible region of the controller, potentially limiting its agility. This is a well-known trade-off in robust control, as discussed in resources like ETH Zurich's Automatic Control Laboratory lecture notes on constrained control. The paper could have quantified this performance loss more explicitly against a (computationally expensive) ideal robust MPC.

Actionable Insights: For practitioners: This is a ready-to-use blueprint for implementing robust MPC on edge devices. Focus on efficiently computing the RPI set—consider using polytopic or ellipsoidal approximations to balance complexity and conservatism. For researchers: The next frontier is adaptive or learning-based tubes. Can neural networks, akin to those used in model-based RL or inspired by works like Learning-based Model Predictive Control (IEEE CDC tutorials), learn tighter disturbance sets online, reducing conservatism while maintaining robustness? This would be the logical evolution of this work.

9. Technical Details & Mathematical Framework

The online optimization problem at time $k$ is: $$ \begin{aligned} \min_{\mathbf{u}_k, x_s, u_s} &\quad \sum_{i=0}^{N-1} \ell(\bar{x}_{i|k} - x_s, \bar{u}_{i|k} - u_s) + V_f(\bar{x}_{N|k} - x_s) \\ \text{s.t.} &\quad \bar{x}_{0|k} = \hat{x}_k, \\ &\quad \bar{x}_{i+1|k} = A \bar{x}_{i|k} + B \bar{u}_{i|k}, \\ &\quad \bar{x}_{i|k} \in \bar{\mathbb{X}} \subseteq \mathbb{X} \ominus \mathcal{Z}, \\ &\quad \bar{u}_{i|k} \in \bar{\mathbb{U}} \subseteq \mathbb{U} \ominus K\mathcal{Z}, \\ &\quad \bar{x}_{N|k} \in x_s \oplus \mathcal{X}_f, \\ &\quad (x_s, u_s) \in \mathcal{Z}_{ss}. \end{aligned} $$ Here, $\bar{x}, \bar{u}$ are nominal states/inputs, $N$ is the short horizon, $\ell$ and $V_f$ are stage and terminal costs. The critical elements are the tightened constraint sets $\bar{\mathbb{X}}, \bar{\mathbb{U}}$ (original sets shrunk by the RPI set $\mathcal{Z}$ via Pontryagin difference $\ominus$), and the ancillary law $u_k = \bar{u}_{0|k}^* + K(x_k - \bar{x}_{0|k}^*)$, where $K$ is a stabilizing gain. The set $\mathcal{Z}_{ss}$ defines feasible steady-states.
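For axis-aligned box sets, the Pontryagin difference used in the tightened constraints reduces to shrinking each interval by the tube's extent along that axis, and $K\mathcal{Z}$ can be over-approximated by a box via $|K| z$. A minimal sketch with made-up bounds (not values from the paper):

```python
import numpy as np

# Pontryagin difference for axis-aligned boxes: X ⊖ Z shrinks each
# interval of X by the corresponding extent of Z. Illustrative numbers.
x_lo, x_hi = np.array([-5.0, -2.0]), np.array([5.0, 2.0])   # state box X
z_ext = np.array([0.3, 0.15])                                # per-axis extent of RPI set Z

xbar_lo, xbar_hi = x_lo + z_ext, x_hi - z_ext                # tightened set  X̄ = X ⊖ Z

# Input tightening: U ⊖ K Z, with K Z over-approximated by the box |K| z
K = np.array([[-0.5, -1.0]])
u_lo, u_hi = np.array([-1.0]), np.array([1.0])
kz_ext = np.abs(K) @ z_ext
ubar_lo, ubar_hi = u_lo + kz_ext, u_hi - kz_ext
```

With these numbers the state box shrinks from $\pm[5, 2]$ to $\pm[4.7, 1.85]$ and the input interval from $\pm 1$ to $\pm 0.7$, which makes the conservatism trade-off discussed earlier tangible: a larger tube directly eats into the feasible region.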

10. Analysis Framework: A Conceptual Case Study

Scenario: Autonomous delivery drone navigating an urban canyon (resource-constrained computer, wind disturbances).
Step 1 – Offline Design:

  1. Model & Disturbance Set: Identify linearized dynamics around hover. Characterize wind gusts as a bounded set $\mathbb{W}$ (e.g., ±2 m/s in horizontal plane).
  2. Compute RPI Tube: Design feedback gain $K$ (e.g., LQR) and compute the minimal RPI set $\mathcal{Z}$ for $e_{k+1} = (A+BK)e_k + w_k$. This defines the "error tube."
  3. Tighten Constraints: Shrink the drone's flight corridor (state constraints) and motor thrust limits (input constraints) by $\mathcal{Z}$ and $K\mathcal{Z}$ to get $\bar{\mathbb{X}}, \bar{\mathbb{U}}$.
  4. Define Steady-State Set: $\mathcal{Z}_{ss}$ contains all stationary hover points within the tightened corridor.
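The RPI-tube computation in item 2 above can be approximated cheaply when $\mathbb{W}$ is a box: an outer box bound on the minimal RPI set is $z \le \sum_i |(A+BK)^i|\, w_{\max}$ elementwise. The sketch below uses illustrative numbers; an exact mRPI computation would use polytope algebra instead:

```python
import numpy as np

# Outer box approximation of the minimal RPI set for e+ = (A+BK)e + w,
# |w| <= w_max elementwise:  z <= sum_i |(A+BK)^i| w_max.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
K = np.array([[-0.5, -1.0]])          # e.g. an LQR gain, assumed stabilizing
A_cl = A + B @ K
w_max = np.array([0.01, 0.01])

z = np.zeros(2)
M = np.eye(2)
for _ in range(200):                   # truncate the (geometrically decaying) sum
    z += np.abs(M) @ w_max
    M = A_cl @ M
```

The resulting vector `z` gives the per-axis tube extent used to tighten the flight corridor and thrust limits in item 3; since $A+BK$ is Schur stable, the truncated tail is negligible after a few hundred terms.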
Step 2 – Online Operation: At each 10 ms control cycle:
  1. Measure State: Get current drone position/velocity $x_k$ from sensors.
  2. Solve Nominal MPC: Solve the small QP (using $\bar{\mathbb{X}}, \bar{\mathbb{U}}, \mathcal{Z}_{ss}$) to get nominal plan $\bar{u}^*$ and target steady-state.
  3. Apply Composite Control: $u_k = \bar{u}^*_{0|k} + K(x_k - \bar{x}^*_{0|k})$. The first term guides the mission, the second term actively rejects wind gusts to keep the drone in the tube.
This framework guarantees safe flight (constraint satisfaction) and mission completion (steady-state convergence) despite winds, using only lightweight online computation.
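The online cycle above can be sketched as follows. The nominal plan here is a placeholder (a fixed hover target) standing in for the solution of the small nominal QP, and the matrices and gain are illustrative:

```python
import numpy as np

# One online control cycle (steps 1-3 above), as a sketch.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
K = np.array([[-0.5, -1.0]])               # precomputed ancillary gain

def control_cycle(x_meas, x_nom, u_nom):
    """Composite control: nominal input plus ancillary tube feedback."""
    # step 3:  u_k = ū*_{0|k} + K (x_k − x̄*_{0|k})
    return u_nom + K @ (x_meas - x_nom)

x_meas = np.array([0.2, 0.1])              # step 1: measured state
x_nom, u_nom = np.zeros(2), np.zeros(1)    # step 2: placeholder nominal plan (hover)
u = control_cycle(x_meas, x_nom, u_nom)
```

The first term carries the mission plan; the feedback term is what actively pushes the disturbed state back toward the nominal trajectory, keeping it inside the tube.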

11. Future Applications & Research Directions

  • Edge AI & IoT: Deploying advanced control on smart sensors, wearable devices, and micro-robots for precision tasks in manufacturing and healthcare.
  • Autonomous Swarms: Scalable control for large groups of cheap, simple drones or robots where each agent has severe computational limits.
  • Next-Generation Research:
    • Learning the Tube: Using real-time data to adaptively estimate the disturbance set $\mathbb{W}$ and shrink the tube, reducing conservatism. This merges with adaptive MPC and learning-based control paradigms.
    • Nonlinear Extensions: Applying the philosophy to nonlinear systems using concepts from nonlinear tube MPC or differential flatness, crucial for aggressive drone maneuvering.
    • Hardware-Software Co-design: Creating specialized embedded chips (FPGAs, ASICs) optimized to solve the specific, small QP of this framework at ultra-low power.

12. References

  1. Jafari Ozoumchelooei, H., & Hosseinzadeh, M. (2023). Robust Steady-State-Aware Model Predictive Control for Systems with Limited Computational Resources and External Disturbances. [Journal Name].
  2. Mayne, D. Q., Seron, M. M., & Raković, S. V. (2005). Robust model predictive control of constrained linear systems with bounded disturbances. Automatica, 41(2), 219-224.
  3. Rawlings, J. B., Mayne, D. Q., & Diehl, M. M. (2017). Model Predictive Control: Theory, Computation, and Design (2nd ed.). Nob Hill Publishing.
  4. ETH Zurich, Automatic Control Laboratory. (n.d.). Lecture Notes on Model Predictive Control. Retrieved from [Institute Website].
  5. Hewing, L., Wabersich, K. P., Menner, M., & Zeilinger, M. N. (2020). Learning-based model predictive control: Toward safe learning in control. Annual Review of Control, Robotics, and Autonomous Systems, 3, 269-296.