Date of Award

Spring 1992

Project Type


Program or Major


Degree Name

Doctor of Philosophy

First Advisor

Michael J Carter


Artificial neural networks are networks of very simple processing elements based on an approximate model of the biological neuron. It is widely believed that, because biological neural networks tolerate the loss of individual neurons and because there is a strong analogy between biological and artificial neural networks, artificial neural networks must also be inherently fault tolerant. This is, unfortunately, simply not true.

Results reported in this dissertation show that, in the task of function approximation, the multilayer perceptron is highly intolerant of faults: the loss of a single network parameter can ruin the learned approximation. This dissertation proposes a method for quantitatively evaluating network fault tolerance that is applicable to a wide range of artificial neural network architectures. Using this method, it can be shown that the generalized radial basis function network is far more fault tolerant than the multilayer perceptron; it also learns more rapidly.
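The abstract does not give the dissertation's exact evaluation procedure, but the general idea of quantifying fault tolerance by single-parameter ablation can be sketched as follows. Everything here is illustrative: a stuck-at-zero fault model, a toy linear model standing in for a trained network, and mean squared error as the approximation measure are all assumptions, not the dissertation's specific choices.

```python
import numpy as np

def fault_sensitivity(forward, params, X, y):
    """Zero each parameter in turn (a stuck-at-zero fault) and record
    the resulting approximation error.  Returns the fault-free error
    and a list of per-parameter faulted errors."""
    def mse(p):
        return float(np.mean((forward(p, X) - y) ** 2))
    base = mse(params)
    faulted = []
    for i in range(params.size):
        p = params.copy()
        p[i] = 0.0            # simulate the loss of one network parameter
        faulted.append(mse(p))
    return base, faulted

# Toy model y = w0*x + w1 as a stand-in for a trained network.
def forward(p, X):
    return p[0] * X + p[1]

X = np.linspace(-1.0, 1.0, 50)
y = 2.0 * X + 0.5
params = np.array([2.0, 0.5])        # exact fit, so fault-free error is zero
base, faulted = fault_sensitivity(forward, params, X, y)
worst = max(faulted)                 # largest single-fault degradation
```

Comparing `worst` (or the full distribution of faulted errors) across architectures trained on the same task gives a simple quantitative basis for statements like "network A is more fault tolerant than network B."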

These findings can be explained using spectral methods. When the spectral content of a network's activation function differs substantially from that of the function to be learned, learning becomes very difficult and the learned solution is highly sensitive to the loss or alteration of network parameters.
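One simple way to make this comparison concrete is to compare magnitude spectra of a sampled activation function and a sampled target function. The sketch below is only a rough illustration of the idea, not the dissertation's actual spectral analysis: the cosine-similarity measure, the particular Gaussian target, and the choice of a Gaussian versus a sigmoid activation are all assumptions made here for the example.

```python
import numpy as np

def magnitude_spectrum(f):
    """Magnitude spectrum of a sampled signal, with the mean removed."""
    return np.abs(np.fft.rfft(f - f.mean()))

def spectral_similarity(a, b):
    """Cosine similarity between two magnitude spectra (1 = identical shape)."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

x = np.linspace(-5.0, 5.0, 512)
target = np.exp(-x**2)                     # smooth, localized target function
gaussian = np.exp(-(x - 0.5)**2)           # RBF-style local activation
sigmoid = 1.0 / (1.0 + np.exp(-4.0 * x))   # MLP-style global activation

s_rbf = spectral_similarity(magnitude_spectrum(target), magnitude_spectrum(gaussian))
s_mlp = spectral_similarity(magnitude_spectrum(target), magnitude_spectrum(sigmoid))
```

Here the shifted Gaussian has essentially the same magnitude spectrum as the target (a shift changes only the phase), while the step-like sigmoid spreads its energy quite differently, so `s_rbf` exceeds `s_mlp`.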

Numerous methods for improving the fault tolerance of artificial neural networks are presented and discussed. Interestingly, experimental results show that a well-chosen set of initial conditions can measurably improve fault tolerance, and that training with random intermittent faults can improve it significantly. In the generalized radial basis function network, the improvement was such that the loss of any single weight either improved performance or left it unchanged relative to the fault-free network. Finally, this dissertation presents guidelines for intelligently choosing network parameters for good learning as well as improved fault tolerance.
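Training with random intermittent faults can be sketched as randomly knocking out weights during each training step, so that the learned solution cannot depend critically on any single parameter. This is only a minimal illustration under assumed choices: a Gaussian-basis toy model, a 10% per-step fault probability, stuck-at-zero faults, and plain gradient descent are all assumptions made for the example, not the dissertation's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task and a linear-in-parameters Gaussian-basis model.
X = np.linspace(-2.0, 2.0, 80)
y = np.sin(X)
centers = np.linspace(-2.0, 2.0, 10)
Phi = np.exp(-(X[:, None] - centers[None, :]) ** 2)   # design matrix

w = np.zeros(10)
lr = 0.05
for step in range(2000):
    # Intermittent faults: each weight is knocked out with probability 0.1
    # on every step, pushing the solution to spread across many weights.
    mask = (rng.random(10) > 0.1).astype(float)
    pred = Phi @ (w * mask)
    grad = Phi.T @ (pred - y) / len(X) * mask
    w -= lr * grad

fault_free_err = float(np.mean((Phi @ w - y) ** 2))
# Worst-case error over all single-weight (stuck-at-zero) faults.
worst = max(float(np.mean((Phi @ np.where(np.arange(10) == i, 0.0, w) - y) ** 2))
            for i in range(10))
```

Because every weight is intermittently absent during training, the optimizer effectively minimizes the expected error under faults, so the worst single-fault degradation after training stays modest compared with a network trained fault-free.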