Date of Award

Winter 2010

Project Type


Program or Major

Electrical Engineering

Degree Name

Master of Science

First Advisor

Andrew L Kun


Tagging media, particularly digital photographs, has become a popular and efficient way to organize material on the internet and on personal computers. Tagging, however, is normally done long after the images have been captured, possibly at the expense of in-the-moment information. Although some digital cameras have begun to automatically populate fields of a photograph's metadata, these generic labels often lack the descriptiveness of annotations supplied by the user, underscoring the need for a user-driven input method. However, most mobile annotation applications demand a great number of keystrokes to tag photographs and thereby focus the user's attention inward: users must take their eyes off the environment while typing in tags. We hypothesize that we can shift the user's focus away from the mobile device and back to the environment by creating a mobile annotation application that accepts voice commands. In other words, our major hypothesis is that voice commands are a convenient way to tag digital photographs.