This is the first book on the topic of spatial representation and crossmodal attention, providing a unique source of reference for this important topic in cognitive neuroscience. The book is exceptionally broad in scope, offering perspectives from specialists in cognitive psychology, computational modelling, single-cell neurophysiology, and neuroimaging, and helping to bring together the scattered and diverse literature in this area.
Includes chapters from an international collection of authorities, fully integrated and cross-referenced, to provide a coherent, up-to-date overview of the field.
Many organisms possess multiple sensory systems, such as vision, hearing, touch, smell, and taste. Possessing multiple ways of sensing the world offers many benefits. These benefits arise not only because each modality can sense different aspects of the environment, but also because different senses can respond jointly to the same external object or event, thus enriching the overall experience, as when looking at an individual while listening to them speak. However, combining information from different senses also poses many challenges for the nervous system.
In recent years there has been dramatic progress in understanding how information from different sensory modalities is integrated to construct useful representations of external space, and how such multimodal representations constrain spatial attention. This progress has involved numerous disciplines, including neurophysiology, experimental psychology, neurological work with brain-damaged patients, neuroimaging, and computational modelling.
This volume brings together leading researchers from all these approaches to present the first integrative overview of this central topic in cognitive neuroscience.
Readership: Cognitive neuroscientists and cognitive psychologists; Neuroscientists; Philosophers of mind