Sufficient Conditions for the Convergence of Galerkin Approximations to the Hamilton-Jacobi Equation

Abstract: 

If u is a stabilizing control for a nonlinear system that is affine in the control variable, then the solution to the Generalized Hamilton-Jacobi-Bellman (GHJB) equation associated with u is a Lyapunov function for the system and equals the cost associated with u. If an explicit solution to the GHJB equation can be found, then it can be used to construct a feedback control law that improves the performance of u. Repeating this process leads to a successive approximation algorithm that converges uniformly to the solution of the Hamilton-Jacobi-Bellman (HJB) equation. The difficulty is that it is rarely possible to solve the GHJB equation explicitly in a form that yields a feedback control law. This paper shows that Galerkin's approximation method can be used to construct arbitrarily close approximations to the solution of the GHJB equation while generating stabilizing feedback control laws. We state sufficient conditions for the convergence of Galerkin approximations to the GHJB equation; these conditions include standard completeness assumptions on the basis functions and asymptotic stability of the associated vector field. The method is demonstrated on a simple nonlinear system and compared with a design obtained by exact feedback linearization in conjunction with the LQR design method.
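To make the construction concrete, the iteration can be sketched in the notation commonly used for control-affine systems with a quadratic control penalty; the symbols, weighting region Omega, and basis functions below are generic placeholders rather than a verbatim quotation of the paper. For the system \dot{x} = f(x) + g(x)u with cost \int_0^\infty \big( l(x) + u^\top R\, u \big)\, dt, the GHJB equation associated with a stabilizing control u and its value function V is

  \nabla V^\top \big( f(x) + g(x)\, u(x) \big) + l(x) + u(x)^\top R\, u(x) = 0, \qquad V(0) = 0,

and the improved feedback law obtained from its solution is

  u_{\mathrm{new}}(x) = -\tfrac{1}{2}\, R^{-1} g(x)^\top \nabla V(x).

Galerkin's method approximates V on a compact set \Omega by V_N(x) = \sum_{j=1}^{N} c_j \phi_j(x), fixing the coefficients c_j by requiring the GHJB residual to be orthogonal to each basis function:

  \int_\Omega \Big[ \nabla V_N^\top \big( f + g\, u \big) + l + u^\top R\, u \Big]\, \phi_j \, dx = 0, \qquad j = 1, \dots, N.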

Reference:
R. Beard, G.N. Saridis, J.T. Wen (1997). Sufficient Conditions for the Convergence of Galerkin Approximations to the Hamilton-Jacobi Equation.

Automatica, 33(12), December 1997, pp. 2159-2177.

Publication Type: 
Archival Journals