Cognitive Science Colloquium

Fall 2013

All meetings take place on Thursdays, 3:30-5:30 pm, in Bioscience Research Building 1103, unless otherwise indicated.


September 12 — Andrei Cimpian (Psychology, Illinois).

Title: Introducing the Inherence Heuristic
Abstract: In this talk, I will first introduce the proposal that human reasoning relies on an inherence heuristic, an implicit cognitive process that leads people to explain the patterns observed in the world in terms of the inherent features of their constituents. I will then provide evidence for this proposal, evidence that suggests the inherence heuristic is an automatic process that exerts a ubiquitous influence on how we make sense of the world. Its influence is detectable even in the first few years of life, as indicated by the developmental studies I will present. In the second part of the talk, I will argue that the inherence heuristic may be at the root of several other phenomena of great interest to cognitive and social scientists. In particular, I will highlight, and provide evidence for, the links between the inherence heuristic and (1) psychological essentialism (the common belief that natural and social categories are underlain by hidden, causally powerful “essences”) and (2) system justification (the tendency to believe that one’s sociopolitical system is fair, natural, and legitimate). In sum, this talk will illuminate a cognitive process that emerges early in life and has profound effects on many aspects of human psychology.

September 26 — Sharon Thompson-Schill (Psychology, Penn).

Title: Costs and benefits of cognitive control for language processing.
Abstract: There is no doubt that cognitive control and language processing are intertwined: prefrontal cortical regions that support the ability to resolve competition between multiple, incompatible representations are recruited for both language production and language comprehension. In this talk, I will explore a somewhat less intuitive hypothesis, namely that cognitive control has both benefits and costs for language processing. After introducing the motivation for this hypothesis, I will provide evidence from three experiments in which we manipulated frontally mediated cognitive control processes using noninvasive brain stimulation (transcranial direct current stimulation, tDCS) and observed the consequences for different aspects of language processing. I will present results from one experiment that shows a benefit of cognitive control (a categorization task), a second that shows a cost of cognitive control (a different categorization task), and a third that shows both costs and benefits (a word production task).

October 10 — Scott Johnson (Psychology, UCLA).

Title: Constraints on Statistical Learning in Infancy
Abstract: Statistical learning is the process of identifying patterns of probabilistic co-occurrence among stimulus features, a process essential to our ability to perceive the world as predictable and stable. Research on auditory statistical learning has revealed that infants use statistical properties of linguistic input to discover structure, including sound patterns, words, and the beginnings of grammar, that may facilitate language acquisition. Previous research on visual statistical learning revealed abilities to discriminate probabilities in visual patterns, leading to claims of a domain-general learning device that is available early in life, perhaps at birth. More recent research, however, challenges this view. Visual statistical learning appears to be constrained by limits in infants' attention and memory, raising the possibility that statistical learning, like rule learning, may be best characterized as domain-specific. Implications for theories of cognitive development will be discussed.

October 24 — Tania Lombrozo (Psychology, Berkeley).

Title: Explanation: The Good, the Bad, and the Beautiful
Abstract: Children and adults are often motivated to explain the world around them and have strong intuitions about what makes something a good (or beautiful) explanation. Why are we so driven to explain, and what accounts for our explanatory preferences? In this talk I’ll present evidence that both children and adults prefer explanations that are simple and have broad scope, consistent with many accounts of explanation from philosophy of science. The good news is that a preference for simple and broad explanations can sometimes improve learning; the bad news is that under some conditions, a preference for simplicity can lead people to systematically misremember observations, and a preference for broad scope can encourage errors of overgeneralization. An important take-home lesson is that seeking, generating, and evaluating explanations plays a central role in human judgment and serves as a valuable window onto core cognitive processes such as learning and inference.

November 7 — Fei Xu (Psychology, Berkeley).

Title: Towards a Rational Constructivist Approach to Cognitive Development
Abstract: The study of cognitive development has often been framed in terms of the nativist/empiricist debate. Here I present a new approach to cognitive development – rational constructivism. I will argue that learners take into account both prior knowledge and biases (learned or unlearned) and statistical information in the input; prior knowledge and statistical information are combined in a rational manner (as is often captured in Bayesian models of cognition). Furthermore, there may be a set of domain-general learning mechanisms that give rise to domain-specific knowledge. I will present evidence supporting the idea that early learning is rational, statistical, and inferential, and that infants and young children are rational, constructivist learners.

November 13, 14, 15 — The Baggett Lectures, Department of Linguistics — Stanislas Dehaene (Psychology, Collège de France, Paris).


November 21 — Gary Dell (Psychology & Linguistics, University of Illinois). POSTPONED TO FEBRUARY 13, 2014.

Title: What Freud Got Right About Speech Errors
Abstract: Most people associate Sigmund Freud with the assertion that speech errors reveal repressed thoughts, a claim that does not have a great deal of support. I will introduce some other things that Freud said about slips, showing that these, in contrast to the repression notion, do fit well with modern theories of language production. I will illustrate using an interactive two-step theory of lexical access during production, which has been used to understand aphasic speech error patterns.

December 5 — Marina Bedny (Psychological and Brain Sciences, Johns Hopkins).

Title: Nature & Nurture in Human Cognition: Evidence from Studies of Blindness.
Abstract: How do genes and experience interact to produce human cognition? I will discuss insights into this puzzle from studies of blindness. The first half of the talk will focus on how first-person sensory experience contributes to concepts. What do congenitally blind people know about seeing and light? One source of evidence comes from studies of “visual” verbs. Congenitally blind and sighted people made semantic similarity judgments on pairs of visual verbs (e.g. to glimpse) and non-visual verbs (e.g. to touch). We find that blind adults distinguish seeing from perception through other sensory modalities (e.g. to touch) and from amodal knowledge acquisition (e.g. to notice). Like sighted individuals, they make fine-grained spatiotemporal distinctions among verbs of seeing (e.g. to peek vs. to stare). Blind adults also distinguish among verbs of light emission along dimensions of intensity (glow vs. blaze) and temporal continuity (blaze vs. flash). This knowledge about seeing is not limited to the meanings of words: blind people make inferences about how others feel based on visual experience, and these inferences depend on the same neural mechanisms as in sighted individuals. Together these data suggest that first-person sensory experience is not required to develop rich conceptual representations. The second half of the talk will focus on effects of experience on the neurobiology of language. Language processing typically relies on fronto-temporal cortices. I argue that “visual” areas of the occipital cortex are added to the language system in congenitally blind individuals. This language-related plasticity occurs during development: it is observed in congenitally blind but not late-blind adults, and it emerges in blind children by 4 years of age. These findings suggest that brain regions that did not evolve for language can nevertheless acquire language processing capacities, and that during development brain regions acquire cognitive functions through a constrained process of self-organization.