Cognitive Science Colloquium
All meetings take place on Thursdays, 3.30-5.30 pm in Bioscience Research Building 1103 (except April 23).
February 5 — Paul Pietroski (Linguistics & Philosophy, University of Maryland).
Title: Constructing Human Concepts: what I-meanings are good for
Abstract: How are words related to the concepts that children lexicalize in the course of acquiring words? According to one simple and familiar picture, a verb like 'kick' is (among other things) an instruction to fetch the concept lexicalized with that verb. On this view, if the concept lexicalized is dyadic--say, KICK(x, y)--then the verb 'kick' is semantically dyadic, requiring two arguments like 'Brutus' and 'Caesar' to form a complete thought-expression, as in 'Brutus kicked Caesar'. This proposal faces a host of well-known difficulties, some of which I'll review. According to a less familiar picture that I want to explore, every word of a naturally acquirable human language is (among other things) an instruction to fetch a monadic concept--like KICK(e), a concept of events. As we'll see, this second view is not only defensible; it has a lot going for it, empirically and theoretically. But on this view, lexicalization is a partly creative process, in which formally new (and perhaps distinctively human) concepts are abstracted from prior concepts that humans may well share with other animals. This in turn invites a nonstandard but in many ways attractive conception of what makes human language distinctive: lexicalization--i.e., the capacity to create formally new analogs of prior concepts--may have been the important new twist. In particular, the basic operations of semantic composition--like conjunction of monadic concepts, and the introduction of a few "thematic" concepts like AGENT(e, x) and PATIENT(e, y)--may be computationally simple and widely available in nonhuman cognition. From this perspective, words do not merely label diverse concepts that can (somehow) be systematically and recursively combined, given suitably powerful combination operations. The idea is rather that words let us use prior concepts to create monadic concepts that can be systematically and recursively combined via natural computational operations. 
Put another way: in human languages, semantic composition may be dumb, while lexicalization is just clever enough to make very good use of dumb composition.
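The composition scheme the abstract describes, in which monadic event concepts like KICK(e) are combined by simple conjunction with thematic concepts like AGENT(e, x) and PATIENT(e, y), can be sketched in code. The following is only an illustrative toy, not Pietroski's formalism: the dictionary-based event representation and all function names are my own assumptions.

```python
# Toy sketch of conjunctive event semantics: each "concept" is a
# monadic predicate of events, and composition is plain conjunction.

def KICK(e):
    # Monadic concept of kicking events.
    return e.get("type") == "kick"

def AGENT(x):
    # Thematic concept: true of events whose agent is x.
    return lambda e: e.get("agent") == x

def PATIENT(y):
    # Thematic concept: true of events whose patient is y.
    return lambda e: e.get("patient") == y

def conjoin(*concepts):
    # "Dumb" semantic composition: conjunction of monadic predicates.
    return lambda e: all(c(e) for c in concepts)

# 'Brutus kicked Caesar' as a conjunction of monadic event concepts:
# an event e such that KICK(e) & AGENT(e, Brutus) & PATIENT(e, Caesar).
sentence = conjoin(KICK, AGENT("Brutus"), PATIENT("Caesar"))

event = {"type": "kick", "agent": "Brutus", "patient": "Caesar"}
assert sentence(event)
```

On this toy rendering, all the combinatorial work is done by one simple operation (conjunction); the interesting step is turning lexical material into monadic predicates in the first place, which mirrors the abstract's point that lexicalization, not composition, carries the load.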
February 19 — Dan Sperber (Centre National de la Recherche Scientifique [CNRS], Paris).
Title: Epistemic Vigilance
Abstract: I present evolutionary arguments, linguistic-pragmatic considerations, and experimental evidence on the development of trust and mistrust to argue that there exists in humans a specific cognitive disposition, 'epistemic vigilance,' aimed at the risk of being misled by others. I suggest a reinterpretation of the standard false belief task and of reasoning biases in terms of epistemic vigilance.
March 5 — Karen Wynn (Psychology, Yale University).
Title: Early Understanding of the Social World: Social Evaluations and Strategies in Infants
Abstract: All social animals benefit from the capacities to identify conspecifics, differentially evaluate distinct individuals, and adopt appropriate interaction strategies with different social partners. Humans must be able to evaluate the actions, intentions and affiliations of the people around them, so as to make accurate decisions about who is friend and who may be foe, who is an appropriate social partner and who is not. We must also determine, for each individual we encounter, the optimal interactive styles and strategies to adopt. In this talk I will present evidence that even within the first 3 to 6 months of life, these capacities are vigorously operative in human infants. First, I'll describe research conducted in our lab asking what factors influence infants' social preferences. I'll present evidence that infants (a) prefer those who are psychologically similar to them over those who are not; and (b) take into account an individual's actions towards others in evaluating that individual: Across a range of social scenarios, infants prefer prosocial over antisocial individuals. Second, I'll describe research suggesting that infants evaluate certain categories of adult humans as more potentially threatening than others, and will discuss two adaptive strategies that infants employ in their own face-to-face interactions with these individuals.
March 26 — John Mikhail (Law, Georgetown University).
Title: Moral Grammar and Intuitive Jurisprudence: A Formal Model of Unconscious Moral and Legal Knowledge
Abstract: Could a computer be programmed to make moral judgments about cases of intentional harm and unreasonable risk that match those judgments people already make intuitively? If the human moral sense is an unconscious computational mechanism of some sort, as many cognitive scientists have suggested, then the answer should be yes. So too if the search for reflective equilibrium is a sound enterprise, since achieving this state of affairs requires demarcating a set of considered judgments, stating them as explanandum sentences, and formulating a set of algorithms from which they can be derived. The same is true for theories that emphasize the role of emotions or heuristics in moral cognition, since they ultimately depend on intuitive appraisals of the stimulus that accomplish essentially the same tasks. Drawing on deontic logic, action theory, moral philosophy, and the common law of crime and tort, particularly Terry's five-variable calculus of risk, I outline a formal model of moral grammar and intuitive jurisprudence along the foregoing lines. The model defines the abstract properties of the relevant mapping and demonstrates their descriptive adequacy with respect to a range of common moral intuitions that experimental studies have suggested may be universal or nearly so. Framing effects, protected values, and implications for the neuroscience of moral intuition are also discussed.
April 16 — Elizabeth Spelke (Psychology, Harvard University). NOTE THE ALTERED DATE
Title: Core Knowledge of the Social World
April 23 — Jeff Lidz (Linguistics, University of Maryland). NOTE: in BIO-PSYCH 1208
Title: The role of statistical distributions in a selective theory of language acquisition
Abstract: While research in the acquisition of syntax has largely focused on the necessity of abstract representations and the poverty of the stimulus with respect to these representations, very little research has asked how learners use the input to identify these representations. At the same time, research showing that infants are highly sensitive to the statistical structure of the input is often silent about the nature of the acquired representations. I present several experiments illustrating the role of statistical learning in a selective theory of syntax acquisition. I show (a) that infants can use statistical information to identify hierarchical phrase structure in an artificial grammar, (b) that the acquired representations allow for generalization to unobserved sentence structures, and (c) that statistical generalizations to be found in the input have consequences for morphosyntax that go beyond what can be inferred simply from the distributions. Hence, to the extent that learners use statistical information in learning syntax, they are doing so by comparing that information against the predictions of precise alternative syntactic representations.
May 7 — Laurie Santos (Psychology, Yale University).
Title: The Evolution of Irrationality: Insights from Non-Human Primates
Abstract: My talk will explore the origins of our judgment and decision-making heuristics. Specifically, I will explore the possibility that some aspects of adult human irrational decision-making might be shared with non-human primates and human children. I will then attempt to use a comparative-developmental approach to directly address the origins of several classic human irrationalities, such as the anchoring bias, cognitive dissonance, loss aversion, and reference dependence. I will then discuss why such irrationalities may emerge so early in human development and evolution, with the hope of providing insight into the psychological machinery that drives both accurate and biased decision-making.