Date of Award

Fall 2023

Project Type

Program or Major

Computer Science

Degree Name

Doctor of Philosophy

First Advisor

Momotaz Begum

Second Advisor

Laura Dietz

Third Advisor

Marek Petrik

Abstract

This research is about learning high-level policies for multi-step sequential (MSS) tasks, such as activities of daily living, from demonstrations in a sample-efficient manner. It does not assume access to a simulator or to an expert who can provide additional demonstrations. Learning a task policy in such a setting with state-of-the-art end-to-end approaches is sample inefficient because those approaches rely on deep learning frameworks, which are known to require large amounts of data. Moreover, most imitation learning frameworks in robotics assume that a domain expert's demonstrations always show a correct way of performing the task. Despite its theoretical convenience, this assumption has limited practical value in real-world imitation learning: there are many reasons why an expert might provide demonstrations that contain incorrect or potentially unsafe ways of doing a task. To that end, my work proposes a novel behavior cloning framework for imitation learning that can autonomously detect and remove incorrect demonstrations while learning the task policy. The proposed framework, which we term Robust Maximum Entropy behavior cloning (R-MaxEnt-BC), learns a stochastic model that maps states to actions. In doing so, R-MaxEnt-BC solves a min-max problem that leverages the entropy of the model to assign weights to demonstrations, giving low weights to incorrect ones. Our empirical results show that R-MaxEnt-BC outperforms existing imitation learning approaches on real and simulated robotics tasks.
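The idea of weighting demonstrations by how well a stochastic policy explains them can be illustrated with a small sketch. The code below is a hypothetical, simplified illustration, not the dissertation's actual R-MaxEnt-BC algorithm: a tabular softmax policy is alternately fit to weighted state-action demonstrations, and demonstrations whose actions the current policy finds unlikely are exponentially down-weighted. All names, the toy expert rule, and the alternating update scheme are assumptions made for illustration.

```python
import numpy as np

# Hypothetical sketch of demonstration re-weighting in behavior cloning.
# Three consistent demonstrations plus one corrupted (random-action)
# demonstration simulate an expert who sometimes demonstrates a task
# incorrectly.
rng = np.random.default_rng(0)
n_states, n_actions = 4, 3

def make_demo(correct=True, length=20):
    states = rng.integers(0, n_states, size=length)
    if correct:
        actions = states % n_actions           # a consistent expert rule
    else:
        actions = rng.integers(0, n_actions, size=length)
    return states, actions

demos = [make_demo(True) for _ in range(3)] + [make_demo(False)]
weights = np.ones(len(demos)) / len(demos)     # per-demonstration weights
logits = np.zeros((n_states, n_actions))       # tabular policy parameters

def policy(logits):
    z = np.exp(logits - logits.max(axis=1, keepdims=True))
    return z / z.sum(axis=1, keepdims=True)

for _ in range(50):
    # (1) gradient step on the weighted negative log-likelihood
    probs = policy(logits)
    grad = np.zeros_like(logits)
    for w, (states, actions) in zip(weights, demos):
        for s, a in zip(states, actions):
            g = -probs[s].copy()
            g[a] += 1.0
            grad[s] += w * g
    logits += 0.5 * grad

    # (2) re-weight: demonstrations with higher average log-likelihood
    #     under the current policy receive exponentially more weight
    probs = policy(logits)
    scores = np.array([
        np.mean(np.log(probs[states, actions] + 1e-12))
        for states, actions in demos
    ])
    weights = np.exp(5.0 * scores)
    weights /= weights.sum()

# The corrupted demonstration (the last one) ends up with the
# smallest weight, so it barely influences the learned policy.
```

This alternation between fitting the policy and re-scoring the demonstrations mirrors, at a toy scale, how a min-max formulation can suppress incorrect demonstrations without a human in the loop.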