Adam L. Schwartz's Publications


Theory and Implementation of Numerical Methods Based on
Runge-Kutta Integration for Solving Optimal Control Problems,
A. Schwartz,
PhD Thesis, U.C. Berkeley, 1996.

Runge-Kutta Discretization of Optimal Control Problems,
A. Schwartz and E. Polak,
To appear in the Proceedings of the 10th IFAC Workshop on Control Applications of Optimization, 1996.

A Family of Projected Descent Methods
for Optimization Problems with Simple Bounds,
A. Schwartz and E. Polak,
Journal of Optimization Theory and Applications, 92(1), January 1997, pp. 1-32.

Consistent Approximations for Optimal Control Problems
Based on Runge-Kutta Integration,
A. Schwartz and E. Polak,
UC Berkeley UCB/ERL memo M94/21,
SIAM Journal on Control and Optimization, July 1996, pp. 1235-1269.

Design and Optimal Tuning of Nonlinear PI Compensators,
S. Shahruz and A. Schwartz,
Journal of Optimization Theory and Applications, Oct. 1994.

Design of Optimal Nonlinear PI Compensators,
S. Shahruz and A. Schwartz,
Proceedings of the 32nd IEEE Conference on Decision and Control,
Dec. 1993, pp. 3564-3565.

Design of High Performance Nonlinear PI Compensators,
S. Shahruz and A. Schwartz,
3rd International Workshop on Advanced Motion Control,
March 1993, pp. 970-979.

An Approximate Solution for Linear Boundary-Value Problems with Slowly-Varying Coefficients,
S. Shahruz and A. Schwartz,
Applied Mathematics and Computation, 60 (1994),
pp. 285-298.

An Approximate Solution for Homogeneous Boundary-Value Problems with Slowly-Varying Coefficients,
S. Shahruz and A. Schwartz,
Computers & Mathematics with Applications, 28 (1994),
pp. 75-82.

Comments on Fuzzy Logic for Control of Roll and Moment for a Flexible Wing Aircraft,
Adam L. Schwartz,
IEEE Control Systems Magazine,
Feb. 1992, pp. 61-62.

Comparison of Compensators for Double Integrator Plants,
A. Schwartz,
Master's Thesis,
MIT, 1989 (189 pages).

Theory and Implementation of Numerical Methods Based on
Runge-Kutta Integration for Solving Optimal Control Problems

A. Schwartz

This dissertation presents theory and implementations of numerical methods for accurately and efficiently solving optimal control problems. The methods we consider are based on solving a sequence of discrete-time optimal control problems obtained by discretizing the original problem using explicit, fixed step-size Runge-Kutta integration and finite-dimensional B-spline control parameterizations. For many problems, other discretization methods, such as Euler's method, collocation techniques, or numerical implementations of specialized optimal control algorithms using variable step-size integration, are less accurate and efficient than discretization by explicit, fixed step-size Runge-Kutta methods. This work presents the first theoretical foundation for Runge-Kutta discretization. The theory provides conditions on the Runge-Kutta parameters that ensure that the discrete-time optimal control problems are consistent approximations to the original problem.
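As a concrete illustration of the kind of integration scheme involved, here is a minimal fixed step-size explicit Runge-Kutta integrator (the classical fourth-order scheme) in Python. The function names and the scalar test problem are our own illustration, not code from the dissertation or from RIOTS:

```python
import numpy as np

def rk4_fixed(f, x0, u, t_grid):
    """Integrate x' = f(x, u(t)) with the classical 4th-order Runge-Kutta
    scheme on a fixed mesh t_grid; returns the state at every mesh point."""
    x = np.asarray(x0, dtype=float)
    traj = [x.copy()]
    for k in range(len(t_grid) - 1):
        t = t_grid[k]
        h = t_grid[k + 1] - t
        k1 = f(x, u(t))
        k2 = f(x + 0.5 * h * k1, u(t + 0.5 * h))
        k3 = f(x + 0.5 * h * k2, u(t + 0.5 * h))
        k4 = f(x + h * k3, u(t + h))
        x = x + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
        traj.append(x.copy())
    return np.array(traj)

# Scalar test: x' = -x + u with constant control u(t) = 1 and x(0) = 0;
# the exact solution is x(t) = 1 - exp(-t).
t = np.linspace(0.0, 1.0, 101)
xs = rk4_fixed(lambda x, u: -x + u, [0.0], lambda s: 1.0, t)
```

In the actual methods, the control `u` would be represented by a finite number of B-spline coefficients rather than a fixed closed-form function.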

Additionally, we derive a number of results which help in the efficient numerical implementation of this theory. These include methods for refining the discretization mesh, formulas for computing estimates of integration errors and errors of numerical solutions obtained for optimal control problems, and a method for dealing with oscillations that arise in the numerical solution of singular optimal control problems. These results are of great practical importance in solving optimal control problems.

We also present, and prove convergence results for, a family of numerical optimization algorithms for solving a class of optimization problems that arise from the discretization of optimal control problems with control bounds. This family of algorithms is based upon a projection operator and a decomposition of search directions into two parts: one part for the unconstrained subspace and another for the constrained subspace. This decomposition allows the correct active constraint set to be rapidly identified and the rate-of-convergence properties associated with an appropriate unconstrained search direction, such as those produced by a limited-memory quasi-Newton or conjugate-gradient method, to be realized for the constrained problem. The algorithms are extremely efficient and can readily solve problems involving thousands of decision variables.

The theory we have developed provides the foundation for our software package RIOTS. This is a group of programs and utilities, written mostly in C and designed as a toolbox for Matlab, that provides an interactive environment for solving a very broad class of optimal control problems. A manual describing the use and operation of RIOTS is included in this dissertation. We believe RIOTS to be one of the most accurate and efficient programs currently available for solving optimal control problems.


Runge-Kutta Discretization of Optimal Control Problems

A. Schwartz and E. Polak

Runge-Kutta integration is used to construct finite-dimensional approximating problems that are consistent approximations, in the sense of Polak (1993), to an original optimal control problem. Stationary points and global solutions of these approximating discrete-time optimal control problems converge, as the discretization level is increased, to stationary points and global solutions of the original problem. The approximating problems involve finite-dimensional spaces of control coefficients. In solving the discrete-time approximating problems, a non-Euclidean inner product should be used on these coefficient spaces to avoid ill-conditioning. This result applies to any discretization method, not just Runge-Kutta integration. Significantly, not all Runge-Kutta methods (even full-order methods) lead to consistent approximations.


A Family of Projected Descent Methods for
Optimization Problems with Simple Bounds.

A. Schwartz and E. Polak

An algorithm based on a simple projection operator and an inexact line search is presented for solving large-scale minimization problems subject to simple bounds on the decision variables. A family of projected descent methods based on this algorithm is defined by conditions on the descent directions that allow global convergence to be established. This generalizes the results obtained by Bertsekas in [Ber.82]. If the problem satisfies standard second-order sufficiency conditions, this algorithm has the important property that it identifies the solution's active constraint set in a finite number of iterations. This implies that the rate of convergence depends only on how the algorithm behaves in the unconstrained subspace. As a particular example, we present a modified version of the Polak-Ribière conjugate gradient method that retains the usual properties associated with the conjugate gradient method applied to unconstrained problems.
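The basic idea can be sketched as a projected gradient method with an Armijo-type search along the projected arc. This is a simplified illustration of the projection-based approach, not the paper's exact algorithm or its decomposition of search directions:

```python
import numpy as np

def project(x, lo, hi):
    """Euclidean projection onto the box {x : lo <= x <= hi}."""
    return np.clip(x, lo, hi)

def projected_descent(f, grad_f, x0, lo, hi, iters=500, beta=0.5, sigma=1e-4):
    """Projected gradient method with an inexact (Armijo-type) line search
    along the projected arc x(a) = P(x - a * grad f(x))."""
    x = project(np.asarray(x0, dtype=float), lo, hi)
    for _ in range(iters):
        g = grad_f(x)
        fx = f(x)
        a = 1.0
        while True:
            x_new = project(x - a * g, lo, hi)
            # Sufficient-decrease test along the projected arc.
            if f(x_new) <= fx + sigma * g @ (x_new - x) or a < 1e-12:
                break
            a *= beta
        if np.linalg.norm(x_new - x) < 1e-10:   # fixed point of the iteration
            return x_new
        x = x_new
    return x

# Example: minimize ||x - c||^2 over the box [0, 1]^3, with c outside the box.
c = np.array([2.0, -1.0, 0.5])
x_star = projected_descent(lambda x: float(np.sum((x - c) ** 2)),
                           lambda x: 2.0 * (x - c),
                           np.zeros(3), 0.0, 1.0)
```

For this convex example the minimizer is simply the projection of `c` onto the box, `[1, 0, 0.5]`; the bound components become active after finitely many iterations, illustrating the finite active-set identification described above.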


Consistent Approximations for Optimal Control Problems
Based on Runge-Kutta Integration

A. Schwartz and E. Polak

This paper explores the use of Runge-Kutta integration methods in the construction of families of finite-dimensional, consistent approximations to non-smooth, control- and state-constrained optimal control problems. Consistency is defined in terms of epiconvergence of the approximating problems and hypoconvergence of their optimality functions. A significant consequence of this concept of consistency is that stationary points and global solutions of the approximating discrete-time optimal control problems can only converge to stationary points and global solutions of the original optimal control problem. The construction of consistent approximations requires the introduction of appropriate finite-dimensional subspaces of the space of controls and the extension of the standard Runge-Kutta methods to piecewise continuous functions.

It is shown that in solving discrete time optimal control problems that result from Runge-Kutta integration, a non-Euclidean inner product and norm must be used on the control space to avoid potentially serious ill-conditioning effects.
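To illustrate why a weighted inner product matters, consider piecewise-constant controls on a nonuniform mesh. This small sketch (our own example, not from the paper) shows that the L2 gradient of a cost differs from the Euclidean coefficient gradient by the inverse of the basis Gram matrix:

```python
import numpy as np

# Piecewise-constant controls on a nonuniform mesh (illustrative values).
t = np.array([0.0, 0.1, 0.15, 0.55, 1.0])
dt = np.diff(t)                  # interval lengths
M = np.diag(dt)                  # Gram matrix of the basis: <u, v>_L2 = u^T M v

# Suppose the Euclidean (coefficient-space) gradient of a cost J is all ones.
g_coef = np.ones(len(dt))

# The L2 gradient g satisfies <g, v>_L2 = g_coef^T v for all v,
# i.e. g = M^{-1} g_coef: short intervals get large, mesh-dependent scaling.
g_L2 = np.linalg.solve(M, g_coef)
```

Because the Euclidean gradient ignores the interval lengths, its components corresponding to short intervals are badly scaled, and the scaling worsens as the mesh is refined; using the M-weighted inner product removes this source of ill-conditioning.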


Optimal Nonlinear PI Compensators

S.M. Shahruz and A.L. Schwartz

In this paper, linear time-invariant single-input single-output (SISO) systems that are stabilizable by (linear) proportional and integral (PI) compensators are considered. For such systems a five-parameter nonlinear compensator, to be called the nonlinear PI, is proposed. The parameters of the proposed nonlinear compensator are tuned by solving an optimization problem. The optimization problem always has a solution. Additionally, a general nonlinear PI-type compensator is proposed and is approximated by an easy-to-compute compensator, for instance, a six-parameter nonlinear compensator. The parameters of the approximate compensator are tuned to satisfy an optimality condition. To ensure the stability of the closed-loop system, a term is added to the cost function of the optimization problem. The added term incorporates a measure of the stability of the linearized closed-loop system in a neighborhood of the system equilibrium point. The superiority of the proposed nonlinear PI compensators over linear PI compensators is discussed and is demonstrated for two feedback systems. Finally, the potential of extending the design methodology proposed in the paper to the design of nonlinear proportional, integral, and derivative (PID) compensators for nonlinear unstable systems is shown in an example.
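As a purely illustrative sketch, not the paper's five- or six-parameter compensator, the following simulates a first-order plant under a PI law whose proportional gain grows with the error magnitude. The plant, the gain shape, and all numerical values are our own assumptions:

```python
def pi_step_response(kp, ki, alpha, T=10.0, h=1e-3):
    """Unit-step response of the plant x' = -x + u under the control
    u = kp * (1 + alpha * |e|) * e + ki * integral(e), simulated with
    forward Euler.  alpha = 0 recovers an ordinary linear PI."""
    x, integ = 0.0, 0.0
    for _ in range(int(T / h)):
        e = 1.0 - x                  # tracking error for a unit step
        integ += h * e
        u = kp * (1.0 + alpha * abs(e)) * e + ki * integ
        x += h * (-x + u)
    return x

x_lin = pi_step_response(2.0, 1.0, alpha=0.0)   # linear PI
x_nl = pi_step_response(2.0, 1.0, alpha=4.0)    # gain grows with |e|
```

In this toy setup the error-dependent gain acts aggressively when the error is large and gently near the setpoint; tuning the extra parameters by numerical optimization, with a stability term in the cost, is the kind of design procedure the paper formalizes.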


Approximate Solution for Boundary-Value Problems with
Slowly-Varying Coefficients

S.M. Shahruz and A.L. Schwartz

In this paper, an approximate closed-form solution for linear boundary-value problems with slowly varying coefficient matrices is obtained. The derivation of the approximate solution is based on the freezing technique, which is commonly used in analyzing the stability of slowly varying initial-value problems as well as solving them. The error between the approximate and the exact solutions is given, and an upper bound on the norm of the error is obtained. This upper bound is proportional to the rate of change of the coefficient matrix of the boundary-value problem. The proposed approximate solution is obtained for a two-point boundary-value problem and is compared to its solution obtained numerically. Good agreement is observed between the approximate and the numerical solutions, when the rate of change of the coefficient matrix is small.
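The freezing technique can be illustrated on a scalar initial-value problem (a simplification of the paper's boundary-value setting): the coefficient is held constant over each short mesh interval and the resulting constant-coefficient solutions are chained together. All values below are our own example:

```python
import numpy as np

def freeze_propagate(a, t_grid, x0):
    """Approximate the solution of x'(t) = a(t) * x(t) by freezing the
    coefficient at the left endpoint of each mesh interval."""
    x = x0
    for k in range(len(t_grid) - 1):
        h = t_grid[k + 1] - t_grid[k]
        x *= np.exp(a(t_grid[k]) * h)   # constant-coefficient solution on [t_k, t_{k+1}]
    return x

# Slowly varying coefficient: a(t) = -1 + 0.05 * sin(t).
a = lambda t: -1.0 + 0.05 * np.sin(t)
t = np.linspace(0.0, 2.0, 21)
x_frozen = freeze_propagate(a, t, 1.0)
x_exact = np.exp(-2.0 + 0.05 * (1.0 - np.cos(2.0)))   # exp of the integral of a
```

Consistent with the error bound described above, the discrepancy between the frozen and exact solutions here is proportional to how fast the coefficient varies, which for this example is well under one percent.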


Comments on Fuzzy Logic for Control of Roll and Moment for a Flexible Wing Aircraft

Adam L. Schwartz

A recent article explores the use of fuzzy logic in certain control problems as an alternative to conventional control methodologies. In that article, an attempt is made to demonstrate the usefulness of a fuzzy control design by example. There are two notable features about this article that deserve a closer look. First, it is a simple matter to design a linear controller that appears to outperform the fuzzy logic controller presented in that article. Second, some of the claims made in the article about the capabilities of fuzzy logic are vague and unsubstantiated. It is this second problem that is a particularly troubling feature of many recent publications on fuzzy control.


Comparison of Compensators for Double Integrator Plants

Adam L. Schwartz

Modern control theory has become very popular because of its mathematical compactness and computational power. Unlike classical design methods, however, modern control paradigms often do not directly relate to the basic performance objectives of the servomechanism design problem. The goal of this thesis is to provide insight into the relationships that exist between classical and modern control methodologies, particularly the LQG design methodology. In this way, classical concepts can be incorporated into LQG designs. A case study of the double integrator plant model, along with the common design issues associated with it, provides the basis for the results in this thesis.