Cognitive Science Colloquium

Fall 2019

All meetings take place on Thursdays, 3:30-5:30 pm, in Bioscience Research Building 1103, unless otherwise indicated.

September 12 — Alejandro Lleras (Psychology, Illinois).

Title: Parallel Peripheral Processing in Visual Search

Abstract: For the most part, cognitive psychologists have been interested in understanding capacity-limited processes in human cognition, and for good reason. However, recent findings from our lab have demonstrated that there is much to learn about cognition and the brain by focusing on the temporal dynamics of unlimited-capacity processes. In this talk, I will present our work on the characterization of one such type of processing: parallel peripheral processing and its role in goal-directed search. Our mathematical and computational approach to this topic has allowed us to uncover new laws that govern visual search behavior, including the finding that "efficient" visual search performance is a logarithmic function of set size (not a linear one), that heterogeneous search performance can be predicted from homogeneous search performance, and, most recently, a law stating how color contrast and shape contrast combine to determine search efficiency when a visual target differs from the distractors in the display along both color and shape.
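As a rough illustration of the logarithmic law mentioned in the abstract (an illustrative parameterization only; the symbols a, D, and b are introduced here for exposition and are not taken from the talk), reaction time in "efficient" search grows with the logarithm of the number of display items N rather than linearly, with a slope reflecting how similar the distractors are to the target:

\[
  \mathrm{RT}(N) \approx a + D \log(N + 1)
  \qquad \text{rather than} \qquad
  \mathrm{RT}(N) \approx a + b N .
\]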

 

September 26 — Anna Papafragou (Linguistics, Penn).

Title: Events in Language and Cognition

Abstract: A standard assumption within psycholinguistics is that the act of speaking begins with the preverbal, conceptual apprehension of an event or state of affairs that the speaker intends to talk about. Nevertheless, the way conceptual representations are formed prior to speaking is not well understood. In this talk I present results from a long-standing, interdisciplinary research program that addresses the nature of conceptual representations, their interface with linguistic semantics and pragmatics, and their role during language production in both children and adults. Focusing on the domain of events, I show that both the representational units of event cognition and the way they combine reveal sensitivity to abstract underlying structure that is often homologous to the structure of events in language. This abstract event structure can predict otherwise unexplained similarities in the way children and adults across language communities apprehend and process events in non-linguistic tasks. I also show that conceptualizing an event during speech planning further depends both on language-specific semantic factors and on language-general pragmatic assessments of the needs and knowledge of the speaker’s conversational partner. I conclude by sketching implications of this framework for future research on how preverbal thought is transformed into language.

 
October 17 — Nausicaa Pouscoulous (Linguistics, University College London).

Title: Children’s Understanding of Pragmatic Inferences

Abstract: Human communication – pragmatic theories tell us – requires impressive inferential abilities and mind-reading skills (such as recognising communicative intentions and taking into account common ground). To learn how to speak and become competent communicators, children need both. Yet theories are divided concerning the breadth of mind-reading skills in young communicators. Research is also divided on how good young children’s pragmatic abilities are. On the one hand, much evidence suggests that pragmatics plays a grounding role in the development of communication and language acquisition. On the other hand, linguistic pragmatic inferences such as metaphors and implicatures seem to develop later than other linguistic abilities. Indeed, some maintain that there are two separate systems for belief reasoning: a simpler one and a more sophisticated one that develops later (Apperly & Butterfill, 2009); along this line of reasoning, we should also expect there to be two separate kinds of pragmatic abilities: an early set using (amongst other things) the simpler theory of mind system, and a second set of pragmatic skills appearing later in childhood and using full-blown theory of mind abilities. I will argue that there is no need to divide pragmatic abilities in this way in order to bridge the gap between the pragmatic inferential skills found in toddlers and the difficulties with pragmatic phenomena observed in preschoolers. I will discuss evidence showing that phenomena such as metaphor and implicatures can be understood by much younger children than previously established, and suggest that several factors – independent of children’s pragmatic abilities per se – may explain children’s apparent struggle with pragmatic inferences. There is an exception, nonetheless: irony. Irony comprehension is consistently found only after school age. I will finish by presenting an account explaining this discrepancy.

October 24 — Diane Brentari (Linguistics, University of Chicago).

Title: Limits and Possibilities of Modality’s Effect on Language: Phonology through the Lens of Sign Languages

Abstract: There have been numerous discussions about what the communication modality brings to the task of building units in signed and spoken languages. In this talk I discuss three kinds of phonological phenomena and ask to what extent they function identically in both types of language, and to what extent their realization is tailored to the communication modality. None of these examples is an individual constraint or rule; rather, each is an abstract mechanism that describes how phonology coheres as a system.
The first example is that of sonority. This is a clear case of a specific effect of modality. While both signed and spoken languages employ sonority in building syllables, I will argue that sign languages have no sonority sequencing principle, for reasons of modality. The second example concerns the organization of features in the phonological space, employing the principles of Dispersion and Feature Economy; in addition to modality, we will consider possible social effects, particularly the size and type of the community. Evidence from emerging sign languages shows that the use of this set of self-organizing principles is one of the first indices of a system becoming phonological, but the rate of emergence differs due to social circumstances. The third and final example comes from the idea of morphophonological packaging. It is quite commonly assumed that there is more simultaneous morphology in sign than in speech, and more abundant sequential morphology in speech than in sign; however, I will argue that there are limits on simultaneous morphology in sign languages, and that, despite the articulatory possibilities, cognitive load may limit simultaneity. Examining how phonology functions in signed and spoken languages provides a unique window on what we consider to be phonological universals.

 

November 14 — Justin Wood (Psychology, Indiana).

Title: Reverse Engineering the Origins of Intelligence

Abstract: One of the great unsolved mysteries in science concerns the origins of intelligence. What are the learning mechanisms in newborn brains? What role does experience play in the development of knowledge? To address these questions, my lab uses a two-pronged approach. First, we perform controlled-rearing experiments, using newborn chicks as a model system. We raise chicks in strictly controlled virtual worlds and record their behavior 24/7 as they learn to perceive and understand their environment. Because these worlds are built with interactive video game engines, we can explore how foundational abilities (e.g., object perception) emerge in newborn brains. Second, we perform parallel experiments on autonomous artificial agents, using virtual controlled-rearing chambers. We raise newborn animals and artificial agents in the same environments and test whether they develop the same abilities when given the same experiences. The agents’ brains can be equipped with different biologically inspired learning mechanisms (e.g., deep reinforcement learning, curiosity-driven learning), so by comparing the animals and agents, we can test which learning mechanisms are needed to model the development of intelligence. By linking psychology to artificial intelligence, we aim to reverse engineer the origins of intelligence and build machines that learn like newborn animals.

 

November 21 — Rick Lewis (Psychology & Linguistics, Michigan).

Title: Boundedly Rational Language, Choice and Action, and the Prospects for Theoretical Cognitive Science in the Age of Deep Learning

Abstract: Across the cognitive and behavioral sciences, a distinction is drawn between how we should choose or behave (according to a normative or rational analysis), and how we actually choose or behave (as observed in experiments, and as described by cognitive or neural mechanism theories). This talk presents models based on an alternative perspective that incorporates cognitive bounds into definitions of optimal decision and control, and that explains behavior as a rational adaptation to these bounds. The models offer novel explanations of phenomena in the domains of choice, action, attention, and language processing, including information-theoretic effects on eye movements in reading, and choice phenomena that seem to be clear violations of rational decision theory (preference reversals). The kinds of rational explanations offered, and the structure of the models (grounded in state estimation and control and reinforcement learning), both point to a way of using deep reinforcement learning with potentially profound theoretical implications: deep RL can be used to derive cognitive faculties and aspects of mental architecture. To illustrate this potential, we will consider recent deep RL results in language emergence and optimal reward discovery.
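As a schematic sketch of the bounded-optimality idea underlying these models (the notation below is illustrative and not taken from the talk), behavior is explained as the best policy available within the agent's cognitive bounds rather than the globally optimal one:

\[
  \pi^{*} \;=\; \arg\max_{\pi \in \Pi_{\text{bounded}}} \; \mathbb{E}\!\left[ \sum_{t} r_{t} \;\middle|\; \pi \right],
\]

where \(\Pi_{\text{bounded}}\) stands for the set of policies realizable given the agent's architectural, informational, and computational limits, and \(r_{t}\) is the reward at time t; all symbols are introduced here only for exposition.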