Date of Award

Spring 1998

Project Type

Dissertation

Program or Major

Engineering

Degree Name

Doctor of Philosophy

First Advisor

L. Gordon Kraft

Abstract

Although the CMAC (Cerebellar Model Articulation Controller) neural network has been used successfully in control systems for many years, it has an important limitation rooted in its property of local generalization: the availability of trained information for network responses at adjacent untrained locations. This property is responsible for the network's rapid learning and efficient implementation, but when the network is trained with sparse or widely spaced training data, it produces responses that are spiky in nature even when the underlying function being learned is quite smooth. Since the derivative of such a network response can vary widely, the CMAC's usefulness for solving optimization problems, as well as for certain other control system applications, can be severely limited. (A minimal sketch of the CMAC mechanism and this behavior follows the abstract.)

This dissertation presents the CMAC algorithm in sufficient detail to explore its strengths and weaknesses. Its properties of information generalization and storage are discussed, and comparisons are made with other neural network algorithms and with other adaptive control algorithms. A synopsis of the development of the fields of neural networks and adaptive control is included to lend historical perspective. A stability analysis of the CMAC algorithm for open-loop function learning is developed; it casts the function learning problem as a unique implementation of the model reference structure and develops a Lyapunov function to prove convergence of the CMAC to the target model (see the sketch below).

A new CMAC learning rule is then developed by treating the CMAC as a set of simultaneous equations in a constrained optimization problem and making appropriate choices for the weight penalty matrix in the cost equation (illustrated below). The resulting learning algorithm has the property of "weight smoothing," which improves generalization, function approximation in partially trained networks, and the partial derivatives of learned functions. This algorithm is significant in that it derives from an optimum solution and demonstrates a dramatic performance improvement for function learning in the presence of widely spaced training data. Developed from an entirely new analytical direction, it represents a coupling and extension of single- and multi-resolution CMAC algorithms developed by other researchers. The insights derived from the analysis of the optimum solution and the resulting new learning rules are discussed, and suggestions for future work are presented.
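To make the local-generalization property concrete, the following is a minimal sketch of a 1-D CMAC, not code from the dissertation; the table size, generalization width C, and learning rate beta are assumed values chosen for illustration.

```python
import numpy as np

class CMAC1D:
    """Minimal 1-D CMAC sketch (illustrative only, not the dissertation's code)."""

    def __init__(self, n_weights=64, generalization=8, x_min=0.0, x_max=1.0):
        self.C = generalization        # number of cells active per input
        self.n = n_weights
        self.w = np.zeros(n_weights)   # weight (memory) table
        self.x_min, self.x_max = x_min, x_max

    def _cells(self, x):
        # Quantize x; each input activates C consecutive cells, so nearby
        # inputs share cells -- the local generalization the abstract describes.
        q = int((x - self.x_min) / (self.x_max - self.x_min) * (self.n - self.C))
        return np.arange(q, q + self.C)

    def predict(self, x):
        return self.w[self._cells(x)].sum()

    def train(self, x, target, beta=0.5):
        # Classic LMS-style update: spread the output error over the C active cells.
        cells = self._cells(x)
        self.w[cells] += beta * (target - self.w[cells].sum()) / self.C

# Usage: train at three widely spaced points; between them the response
# changes in steps as receptive fields enter and leave, producing the
# "spiky" behavior and wildly varying derivative noted above.
net = CMAC1D()
for _ in range(50):
    for x in (0.1, 0.5, 0.9):
        net.train(x, float(np.sin(2 * np.pi * x)))
```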
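The stability analysis itself is not reproduced in the abstract, but because the CMAC is linear in its weights, the general shape of such an argument is standard. The following is a hedged sketch under that linear-in-weights view, not the dissertation's actual derivation:

```latex
% Hedged sketch: CMAC output is linear in the weights, with a binary
% association vector a(x) having exactly C active cells.
\begin{align*}
  y(x) &= a(x)^{\top} w, \qquad a(x) \in \{0,1\}^{N}, \quad a(x)^{\top} a(x) = C,\\
  \tilde{w}_k &= w_k - w^{*}, \qquad e_k = a_k^{\top} \tilde{w}_k
      \quad \text{(error against the target model } w^{*}\text{)},\\
  w_{k+1} &= w_k - \frac{\beta}{C}\, a_k\, e_k
      \quad \text{(LMS update over the active cells)},\\
  V_k &= \tilde{w}_k^{\top} \tilde{w}_k
      \;\Longrightarrow\;
      \Delta V_k = -\frac{\beta}{C}(2-\beta)\, e_k^{2} \le 0
      \quad \text{for } 0 < \beta < 2.
\end{align*}
```

The quadratic weight-error function V is the Lyapunov candidate; its monotone decrease for 0 < beta < 2 drives the trained output toward the target model, which is the model-reference framing the abstract describes.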
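The weight-smoothing rule itself is derived in the dissertation; the snippet below only illustrates the stated setup, a set of simultaneous training equations A w = y solved subject to a weight penalty w^T P w. The first-difference penalty used here is an assumed stand-in for the dissertation's weight penalty matrix:

```python
import numpy as np

def smooth_weights(A, y, lam=1e-3):
    """Illustrative weight-smoothing solve (assumed penalty, not the
    dissertation's derivation).

    A : (m, n) binary matrix of receptive-field activations, one row per
        training sample; y : (m,) training targets.  Assumes A has full
        row rank (distinct training points).
    """
    n = A.shape[1]
    D = np.eye(n - 1, n) - np.eye(n - 1, n, k=1)  # first-difference operator
    P = lam * np.eye(n) + D.T @ D                 # roughness penalty matrix
    # Minimum-penalty weights consistent with the training equations:
    #   minimize w^T P w  subject to  A w = y
    #   =>  w = P^{-1} A^T (A P^{-1} A^T)^{-1} y
    Pinv_At = np.linalg.solve(P, A.T)
    return Pinv_At @ np.linalg.solve(A @ Pinv_At, y)
```

Penalizing differences between adjacent weights spreads trained information across the table rather than concentrating it in the few cells a sparse training set touches, which is the intuition behind the improved interpolation and smoother partial derivatives claimed in the abstract.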
