Cognitive Science Colloquium
All meetings take place on Thursdays, 3:30-5:30 pm, in Bioscience Research Building 1103, unless otherwise indicated.
September 24 — Susana Martinez-Conde (Neuroscience, SUNY Downstate Medical Center).
Title: From exploration to fixation: how eye movements determine what we see
Abstract: Vision depends on motion: we see things either because they move or because our eyes do. What may be more surprising is that large and miniature eye motions help us examine the world in similar ways, largely at the same time. In this presentation, I will discuss recent research from my lab and others suggesting that exploration and gaze fixation are closely related processes in the brain. Our eyes scan visual scenes with the same general strategy whether the images are huge or tiny, and even when we try to fix our gaze. These findings indicate that exploration and fixation are not fundamentally different behaviors, but rather two ends of the same visual scanning continuum. They also imply that the same brain systems control our eye movements when we explore and when we fixate, an insight that may ultimately offer clues to understanding both normal oculomotor function in the healthy brain and oculomotor dysfunction in neurological disease.
October 15 — Erin Hannon (Psychology, University of Nevada, Las Vegas).
Title: Learning to listen: Development of music-specific melody and rhythm perception
Abstract: Music and language share important features; however, these shared features have distinct functions in each domain. For example, although melody is a fundamental organizing structure in a song, it is far less important than the lexical and syntactic structure of a spoken sentence. From an early age, listeners are exposed to both music and language, and they must eventually acquire specific knowledge about the rules that govern sound structure in each domain. My research program examines the perceptual and cognitive processes that characterize music-specific melody and rhythm processing in experienced adult listeners, and compares these abilities with those of younger listeners and of listeners with contrasting cultural or musical backgrounds. Part of acquiring musical and linguistic knowledge may include learning to weight acoustic features differently depending on the musical or linguistic context.
October 22 — Cristine Legare (Psychology, University of Texas at Austin).
Title: The ontogeny of cultural learning
Abstract: Humans are a social species and much of what we know we learn from others. To be effective and efficient learners, children must be selective about when to innovate, when to imitate, and to what degree. In a systematic program of interdisciplinary, mixed-methodological, and cross-cultural research, my objective is to develop an ontogenetic account of how children flexibly use imitation and innovation as dual engines of cultural learning. Imitation is multifunctional; it is used to learn both instrumental skills and cultural conventions such as rituals. I propose that the psychological system supporting the acquisition of instrumental skills and cultural conventions is driven by two modes of interpretation: an instrumental stance (i.e., interpretation based on physical causation) and a ritual stance (i.e., interpretation based on social convention). What distinguishes instrumental from conventional practices often cannot be determined directly from the action alone but requires interpretation by the learner based on social cues and contextual information. I will present evidence for the kinds of information children use to guide flexible imitation. I will also discuss cross-cultural research in the U.S. and Vanuatu (a Melanesian archipelago) on the interplay of imitation and innovation in early childhood.
October 29 — Thomas Bever (Cognitive Science, University of Arizona).
Title: Laws of form in Perception: Aesthetic theory, the Golden Ratio and Depth Perception
Abstract: Recent investigations in language and cognition have revived the notion that natural formal laws play a role in cognition and language. In this talk, I discuss the impact of the golden ratio on aesthetic preferences, and its implications for the perception of depth. The golden ratio, the limit of the ratios of successive Fibonacci numbers, appears throughout nature, in both biological and physical systems. Two Aristotelian aesthetic theories can explain the preference for the golden ratio frame: by virtue of its recursiveness, it exhibits optimal complexity in the initial stages of vision, and it creates and resolves representational conflicts. I first review these classic aesthetic theories, conflict resolution and optimal complexity, and briefly review the arguments that such processes may play a critical role in language acquisition. I then show how both principles explain the historically attested preference for the golden section, and how they predict that a golden ratio frame for paintings and scenes should enhance the illusory perception of depth within them. Various experimental studies confirm those predictions. The result is an unexpected verification both of the general theory of aesthetics and of its application to the golden section; equally unexpected from prior theories is the emergent discovery that the golden ratio frame enhances depth perception. Finally, I discuss some new studies, in collaboration with several artists, creating new artworks designed to explore the interaction of frame shape with different kinds of scenes.
November 19 — Jeff Lidz (Linguistics, Maryland); DISTINGUISHED SCHOLAR-TEACHER lecture. Time: 4 pm. Room: Chemistry 1402.
Title: Are Children Human? – The View from Language Acquisition
Abstract: Wherever we find communities of human beings, we also find language. Moreover, cats, dogs and houseplants, despite living in the very same environment, all fail to display linguistic behavior. These basic observations suggest that language is unique to and definitional of our species. However, there is one population of ostensibly human creatures that is curiously silent when it comes to language, namely human infants. Might this mean that this distinctively human characteristic is absent from this population and hence that we shouldn’t think of children as human until they have acquired a language? In this talk, I discuss specific features of the human capacity for language and identify ways in which linguistic structure comes from the human mind. I further show that this structure plays a causal role in language acquisition throughout development and hence provides the basis of our humanity at all stages of life.
December 3 — Dan Jurafsky (Linguistics, Stanford University).
Title: The computational linguistics of food, innovation, and community
Abstract: How do language and ideas propagate through communities? We use computational linguistics to extract social meaning from language to help understand this crucial link between individual cognition and social groups. I'll discuss the way economic, social, and psychological variables are reflected in the language we use to talk about food. I'll introduce the "ketchup theory of innovation," about the crucial role that interdisciplinarity plays in the history of innovation and how it can be discovered via language. Finally, I'll show how computational methods can address the mystery of why linguistic innovation changes sharply across people's lifespans.
December 10 — Cindy Moss (Psychological and Brain Sciences, Johns Hopkins University).
Title: Scene representation by echolocation in bats
Abstract: Bat echolocation is an active and adaptive system: its success depends upon a tight coupling between the animal’s actions and perception. Bats produce ultrasonic vocalizations and extract 3-D spatial information from the environment by processing echoes from objects in the path of the sound beam. In cluttered environments, each sonar vocalization results in a cascade of echoes from objects distributed in direction and distance, which the bat must perceptually organize to represent the positions and features of obstacles and prey. Bats encounter an additional challenge when they forage in the presence of conspecifics, namely sorting the echoes of their own signals from those produced by neighboring bats. Representing complex echo scenes demands high-resolution acoustic signal processing, which is enhanced by the bat’s adaptive control over the spectral, temporal, and directional features of its sonar calls.