Date of Award

Winter 2023

Project Type

Dissertation

Program or Major

Electrical and Computer Engineering

Degree Name

Doctor of Philosophy

First Advisor

Se Young Yoon

Second Advisor

Nicholas Kirsch

Third Advisor

Diliang Chen

Abstract

Equilibrium states represent the natural resting points of dynamic systems in their state-space and provide vital information for the design of stabilizing feedback controllers. Stabilizing a system at an equilibrium state is desirable because it requires a zero steady-state control signal, resulting in lower energy consumption by the actuator. Mathematical models of real systems usually contain uncertainties that make it difficult to determine the exact values of these equilibrium states. Consequently, control methods that require precise knowledge of the equilibrium state become ineffective when uncertainties prevent its exact location in the state-space from being determined.
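
As a brief, generic illustration of the terminology (not the dissertation's specific model), consider a system

\[
\dot{x} = f(x, u), \qquad f(x_e, u_e) = 0 .
\]

If the equilibrium pair satisfies u_e = 0, holding the state at x_e requires no steady-state control effort. When f contains uncertain parameters, x_e is known only approximately, and a controller that regulates the state to an incorrect nominal point must sustain a nonzero input to hold it there.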

The objective of this dissertation research is to investigate robust and adaptive control methods for stabilizing systems with uncertain equilibrium states. We consider both the case when the system model is uncertain and the case when it is completely unknown. We first present a derivative feedback control scheme that stabilizes the dynamic system and drives the states to their true equilibrium even when the location of that equilibrium is uncertain. For the case of an uncertain system model, we synthesize a robust controller by modeling the uncertainties as Lipschitz functions. Actuator constraints in the feedback control are also considered, and stability conditions are derived for the cases when the actuator output energy is bounded and when the actuator output is subject to saturation. The effectiveness of the proposed method is illustrated by a numerical example and demonstrated experimentally on a magnetic levitation test rig.
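
The following is a minimal numerical sketch of the idea behind derivative feedback, not the dissertation's controller or its magnetic levitation rig; the plant matrices, the unknown constant disturbance, and the gain K are invented for illustration. Because the feedback acts on the state derivative, the closed loop settles wherever the derivative vanishes, i.e., at the true (unknown) equilibrium, with the control signal returning to zero.

import numpy as np

# Hypothetical plant: lightly damped oscillator with an unknown constant
# disturbance w, so the open-loop equilibrium x_e = -A^{-1} w is not known
# to the controller designer.
A = np.array([[0.0, 1.0],
              [-2.0, -0.1]])
B = np.array([[0.0],
              [1.0]])
w = np.array([0.0, 1.5])           # unknown to the controller
K = np.array([[3.0, 1.0]])         # state-derivative feedback gain (hand-picked)

# Derivative feedback u = -K xdot gives the implicit relation
# xdot = A x + B(-K xdot) + w, solved here as xdot = (I + B K)^{-1}(A x + w).
M = np.linalg.inv(np.eye(2) + B @ K)

x = np.array([2.0, 0.0])
dt, T = 1e-3, 20.0
for _ in range(int(T / dt)):
    xdot = M @ (A @ x + w)
    u = -(K @ xdot)                # control effort vanishes as xdot -> 0
    x = x + dt * xdot

print("final state:", x)                              # approaches the true equilibrium
print("true equilibrium:", -np.linalg.solve(A, w))    # [0.75, 0.]
print("final control:", u)                            # ~0 at steady state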

We then consider the derivative feedback control of systems with unknown dynamics. For unknown systems, reinforcement learning-based methods can learn the optimal state feedback controller from state and input measurements. However, existing reinforcement learning-based control methods have not been developed for systems whose equilibrium state is unknown. We use reinforcement learning to design a policy iteration algorithm that places state derivatives in the utility function and learns the optimal state-derivative feedback controller. Convergence of the algorithm to the solution of the algebraic Riccati equation associated with the optimal state-derivative feedback controller is proven. The iterative algorithm is extended to the output-derivative feedback case by introducing two distinct state-parametrization schemes. These schemes enable reconstruction of the state-derivative signal and lead to two online iterative algorithms within a reinforcement learning framework. The algorithms differ in their measurement requirements: the first relies on measurements of the state derivatives, output derivatives, and inputs, while the second requires only measurements of the output derivatives and input signals. The proposed methods have the advantage of controlling systems without requiring knowledge of their equilibrium points. We demonstrate the effectiveness of our approach through examples of systems with unknown dynamics and an unknown equilibrium point.
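
For orientation, below is a minimal sketch of classical model-based policy iteration (Kleinman's algorithm) for the standard state-feedback LQR problem. It shows the policy-evaluation and policy-improvement structure and its convergence to the algebraic Riccati equation solution; the dissertation's algorithms replace this with data-driven iterations built on state-derivative or output-derivative measurements. The plant, weights, and initial gain here are hypothetical.

import numpy as np
from scipy.linalg import solve_continuous_lyapunov, solve_continuous_are

# Hypothetical plant and LQR weights, chosen only for illustration.
A = np.array([[0.0, 1.0],
              [1.0, -1.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

K = np.array([[3.0, 3.0]])         # any initial stabilizing gain
for _ in range(15):
    Acl = A - B @ K
    # Policy evaluation: solve Acl^T P + P Acl + Q + K^T R K = 0
    P = solve_continuous_lyapunov(Acl.T, -(Q + K.T @ R @ K))
    # Policy improvement
    K = np.linalg.solve(R, B.T @ P)

P_are = solve_continuous_are(A, B, Q, R)
print("||P - P_are|| =", np.linalg.norm(P - P_are))   # ~0: iterates converge to the ARE solution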

Finally, we consider the case when explicit state-derivative or output-derivative signals are not available from sensors. These signals can be obtained by differentiating the measured state or output signals, but since sensor measurements are noisy, differentiation amplifies the noise and can negatively affect the convergence of the policy iteration algorithms. To account for this noise, we use reinforcement learning techniques to develop a policy iteration algorithm that estimates the optimal output-difference feedback controller for discrete-time stochastic systems; by using output differences for feedback, the method avoids computing state or output derivatives altogether. We prove the convergence of the algorithm under sensor and process noise. The control scheme is numerically validated on an inverted pendulum system with an uncertain equilibrium point, where it outperforms a standard output feedback reinforcement learning algorithm for stochastic systems.
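
A small numerical illustration of why differentiating noisy measurements is problematic (the signal, sample period, and noise level are arbitrary choices, not the dissertation's experiment): dividing an output difference by the sample period scales the sensor noise by 1/T, which is why feedback on the raw output difference is attractive.

import numpy as np

# Compare the noise in a raw output difference y_k - y_{k-1} with the noise
# in the finite-difference derivative (y_k - y_{k-1}) / T.
rng = np.random.default_rng(0)
T = 0.01                                    # sample period
t = np.arange(0.0, 10.0, T)
y_clean = np.sin(t)
y = y_clean + rng.normal(scale=0.01, size=t.size)    # sensor noise, std 0.01

diff = np.diff(y)                           # output difference used for feedback
deriv = np.diff(y) / T                      # finite-difference "derivative"

print("noise std in output difference:", np.std(diff - np.diff(y_clean)))        # ~0.014
print("noise std in derivative estimate:", np.std(deriv - np.diff(y_clean) / T)) # ~1.4 (100x larger)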
