Date of Award

Spring 2009

Degree Name

Doctor of Philosophy

First Advisor

John Limber


Complex objects have been found to take up more visual working memory---as measured by lowered change-detection accuracy with such stimuli---than simple colored shapes (Treisman, 2006; Xu, 2002). While verbal working memory studies have similarly shown reduced apparent capacity for longer words (Baddeley, 2007), other research has demonstrated that features contributing to object categorization and recognizability can enhance visual working memory capacity (Olsson & Poom, 2005; Alvarez & Cavanagh, 2004). Until very recently, no measures of crossmodal working memory capacity had been proposed, even though crossmodal associations are part of the fabric of learning, from classical conditioning to calculus. The working memory load of a range of complex crossmodal (visual--auditory) objects was measured here in a sequence of experiments adapting classic visual change-detection procedures (Vogel et al., 2001). The adapted method involves rapid sequential presentation of objects, each comprising a sound and an image, with a test object appearing after a 1-second delay. Application of this method shed light on the working memory impact of two sources of complexity: featural detail and object meaningfulness. Displaying the test object in a previously unused location---in this case, the center of the screen---resulted in lower change-detection performance than placement in its original location. Test location interacted with image type (gray and colored shapes, drawings, and photos): image type showed no consistent influence on working memory capacity when test objects appeared in their original locations, but when they were shown in an alternate location, crossmodal associations involving more-detailed images were more accurately recalled. Independent of test location, more-complex animal sounds supported better crossmodal change-detection performance than abstract tones.
An association measure showed consistently higher numbers of associations for representational images than for abstract ones. Observers' response bias was lower for meaningful images, but their change-detection accuracy did not differ by image meaningfulness. The results obtained with this novel crossmodal working memory measure demonstrate that perceptual detail contributes to effective crossmodal working memory capacity for sounds and for both abstract and realistic images.