Cognitive Science Colloquium
Fall 2023
All meetings take place on Thursdays, 3:30–5:30 pm in HJ Patterson Building (HJP) room 2124, unless otherwise noted.
Sept 7 — Tomer Ullman (Psychology, Harvard).
Title: The Physical Basis of Imagery and Imagination
Abstract: People seem to have an early understanding of the world around them, and the people in it. Before children can reliably say "ball", "wall", or "Saul", they expect balls to not go through walls, and for Saul to go right for a ball (if there's no wall). There are different proposals out there for the cognitive computations that underlie this basic commonsense reasoning. I'll focus on one proposal in particular, and suggest that a "rough rendering and de-rendering" approach can explain basic expectations about object solidity, cohesion, and permanence. From there, I will expand the proposal to more recent work on imagery and imagination, including non-commitment in imagery and the importance of physical properties in visual pretense.
Sept 21 — Nicole Holliday (Linguistics & Cognitive Science, Pomona College).
Title: Sociolinguistic Challenges for Emerging Speech Technology
Abstract: As speech technology becomes an increasingly integral part of the everyday lives of humans around the world, issues related to language variation and change and algorithmic inequality will come to the forefront for citizens and researchers alike. Indeed, over the past few years, researchers across disciplines such as computer science, communications, and linguistics have begun to approach these concerns from a variety of scholarly perspectives. For sociolinguists who are primarily interested in how social factors influence language use and vice versa, the fact that humans and machines are regularly speaking with one another presents an entirely new area of research interest with major impacts for linguistics and the public. In this talk, I will present the results of recent and ongoing research related to how humans perceive the social qualities of synthesized voices (such as Siri), and how such perceptions may reinforce and reproduce stereotypical perceptions of human voices. I will also present research on how Automatic Speech Recognition systems designed to provide feedback (such as the Amazon Halo) demonstrate systematic bias against non-normative speakers, focusing on issues of racialized and gendered variation in voice quality. Finally, I will discuss large-scale challenges related to speech and algorithmic bias, as well as the pitfalls that language researchers need to be aware of when designing and evaluating new TTS and ASR systems.
Oct 5 — Dwight Kravitz (Cognitive Neuroscience, George Washington University).
Title: Towards Cognitive Neuroscience: Closing the loop between neurophysiology and human behavior
Abstract: A core goal of human cognitive neuroscience must be the use of physiological observation to clarify the mechanisms that shape behavior. At this point, we do not want for such observations: a wealth of data from animal models and neuroimaging investigations in humans provides a multimodal quantification of the responses evoked by a wide variety of stimuli and task contexts. The consequences of these observations for our understanding of behavioral mechanisms are less clear. Here, we explore those consequences by testing a novel set of behavioral predictions that arise from direct observations of the physiology. Across a series of studies, the behavioral implications of several aspects of neurophysiology will be explored: 1) the co-localization of function (e.g., visual working memory and perceptual processing), 2) implicit learning in the context of distributed synaptic plasticity, and 3) attention and inhibition in integrated circuits. These results suggest that top-down processes directly and routinely recruit processing within perceptual areas, altering their patterns of connectivity and providing an explanation for the stereotypical pattern of selectivity seen for even recently created visual categories.
Oct 19 — Chaz Firestone (Psychological and Brain Sciences, Johns Hopkins).
Title: The Perception of Silence
Abstract: What do we hear? An intuitive and canonical answer is that we hear sounds — a friend’s voice, a clap of thunder, a minor chord, and so on. But can we also perceive the absence of sound? When we pause for a moment of silence, attend to the interval between thunderclaps, or sit with a piece of music that has ended, do we positively hear silence? Or do we simply fail to hear, and only know or judge that silence has occurred? Philosophers have long held that our encounter with silence is cognitive, not perceptual, hewing to the traditional view that sounds and their properties (e.g., pitch, loudness, timbre) are the only objects of auditory perception. However, the apparent prevalence of silence in ordinary experience has led some philosophers to challenge tradition, arguing for silence perception through thought experiments and new theoretical perspectives. Despite such theorizing, silence perception has not been subject to direct empirical investigation. Here, I present the first empirical studies of the hypothesis that silence is genuinely perceived. Across multiple case studies, I'll show (both through experimental results and also through subjectively appreciable demonstrations) that silence can 'substitute' for sound in illusions of auditory eventhood — and thus that silences can serve as the objects of auditory perception. This work also paves the way for new empirical approaches to absence perception more generally, with consequences for broader questions about the objects of perception, representations of negative properties, and other foundational issues at the intersection of the philosophy and psychology of perception.
Nov 2 — Hyo Gweon (Psychology, Stanford).
Title: Thinking, Learning, and Communicating about the World and about the Self
Abstract: Humans are not the only species that learns from others, but only humans learn and communicate in rich, diverse social contexts. What makes human social learning so distinctive, powerful, and smart? I will first introduce the idea that human social learning is inferential at its core; even young children learn from others by drawing rich inferences from others’ behaviors, and help others learn by generating evidence tailored to others’ goals and knowledge. I will then present more recent work that extends this idea to understand how young children think, learn, and communicate about the self. Going beyond the idea that young children are like scientists who explore and learn about the external world, these results demonstrate how early-emerging social intelligence supports thinking, learning, and communicating about the inner world.
Nov 16 — Felipe de Brigard (Philosophy, Psychology & Neuroscience, Duke).
Title: Episodic counterfactual thinking and the emotional reappraisal of autobiographical memories
Abstract: Thinking about alternative ways in which past personal events could have occurred is ubiquitous. It has been suggested that these “episodic counterfactual thoughts” serve a preparatory role: they help us to mentally simulate alternative scenarios in order to hedge against future uncertainty. In this talk, I will offer a complementary view according to which episodic counterfactual thoughts can also play a mnemonic role. Specifically, I will suggest that we often employ them to emotionally reappraise autobiographical memories in order to mollify our affective reactions toward past negative events. Possible clinical implications of this approach will be discussed.