Cognitive Science Colloquium

Spring 2025

All meetings take place on Thursdays, 3:30-5:30 pm, in HJ Patterson Building (HJP) room 2124, unless otherwise indicated.


February 6 Anna Papafragou (Psychology, Penn).

Title: How the human mind represents events
Abstract: Humans are surprisingly adept at interpreting what is happening around them, even from a single glance. Beginning in infancy, we are able to recognize dynamic events, the roles that various entities play in these events, and the temporal and causal components that make up events. Furthermore, we use language to describe the events that we experience. But what, exactly, is an event? In this talk, I propose a theory of eventhood that combines insights from logico-philosophical analysis, cognitive psychology, and linguistic theory, and thus places the nature of events at the heart of modern cognitive science. On this theory, the representational units of events in cognition rely on abstract underlying structure, including temporal boundaries. In that sense, events are similar to objects (since objects also involve abstract structure, including spatial boundaries). This proposal predicts systematic patterns in the way people spontaneously perceive unfolding events. It also explains otherwise mysterious similarities in how events and objects behave as cognitive entities. Finally, this proposal naturally accounts for the existence of a homology between the cognitive and linguistic structure of events. This framework opens up exciting possibilities for future research on how people represent, remember, and talk about what happens.

February 20 Andrew Begel (Computer Science, Carnegie Mellon). POSTPONED

Title: Facilitating Shared Awareness in Mixed Sensory Ability Collaborative Software Development

Abstract: Remote software development increases the need for tightly-coupled synchronous collaboration, but existing tools and practices impose high coordination overhead. For example, it is difficult to determine what your collaborator is looking at. This impedes a pair’s ability to communicate efficiently and collaborate effectively, compromising their agency and limiting their contribution. In two studies, we explored novel technical solutions to ease the coordination burden between sighted remote pair programmers and between sighted and blind and visually impaired (BVI) development pairs. For the former, we designed a novel gaze visualization that shows where in the code a partner is currently looking and changes color when both partners look at the same reference. For the latter, we conveyed sighted partners’ navigation and edit actions to the BVI partner via sound effects and speech. We evaluated both designs with mixed sensory ability teams in within-subjects experiments focusing on code refactoring tasks. Our results show that both solutions streamline the dialogue required to refer to locations in the shared workspace, enabling partners to spend more time contributing to coding tasks. Both solutions enabled pairs to spend a greater proportion of time concurrently looking at the same code locations. Pairs communicated using a larger ratio of implicit to explicit linguistic references, and were faster and more successful at responding to those references. Our designs offer paths towards enabling remote and mixed sensory ability development teams to collaborate on more equal terms.

 
March 6 Rebecca Saxe (Psychology, MIT).
 
Title: What people learn from punishment

Abstract: In human society, punishment can sometimes teach and enforce social norms of behavior, but at other times it backfires and undermines the authority’s legitimacy. These seemingly contradictory effects of punishment can only be understood by considering the cognitive processes in the minds of human observers of punishment. One challenge is that in real situations, participants bring strong priors about every element of a punitive setting. Our experiments therefore use vignettes about hypothetical societies to measure what adults and children learn from observing punishment, with experimental control over all of the priors. A formal cognitive model, derived from a standard model of how people make sense of one another’s actions (Inverse planning for Theory of Mind), precisely predicts people’s judgements. Our results show that polarized interpretations of punishment arise rationally. We also measured and modeled the effects of ideological authoritarianism on interpretations of punishment; the model predicts that individual differences in authoritarianism may persist rationally and even deepen as people observe authorities using punishment. Our model illuminates a central tension faced by any authority, from university leaders to parents of toddlers: how the same punitive choice can communicate social norms to some people, yet cause loss of legitimacy in the eyes of others.

March 13 Tanya Luhrmann (Anthropology, Stanford).

Title: Voices
Abstract: Voices (auditory hallucinations) are experiences in which someone has a thought that they feel is not their own. This work draws on data from extensive fieldwork and from hundreds of interviews conducted across multiple countries to examine the prevalence and variability of these experiences. I will discuss what we know about the difference between the voices found in psychosis and those in the general population, and the evidence that three factors (porosity, absorption, and training) facilitate voices in the general population. Most fundamentally, I will argue that voices teach us something about consciousness more generally: we have contradictory intuitions about our own thoughts, local culture elaborates or ignores these intuitions, and these intuitions facilitate the felt disavowal of thought.

April 10 Eldar Shafir (Psychology, Princeton).

Title: TBA
Abstract: TBA

April 24 Michael Inzlicht (Psychology, University of Toronto).

Title: TBA
Abstract: TBA