******* Language, Cognition, and Computation Lecture Series *******
In everyday learning and reasoning, people routinely draw successful generalizations from very limited evidence. Even young children can infer the meanings of words or the existence of hidden biological properties or causal relations from just one or a few relevant observations -- far outstripping the capabilities of conventional learning machines. How do they do it?
I will argue that the success of people's everyday inductive leaps can be understood as the product of domain-general rational Bayesian inferences constrained by people's implicit theories of the structure of specific domains. This talk will explore the interactions between people's domain theories and their everyday inductive leaps in several different task domains, such as generalizing biological properties and learning word meanings.
I will illustrate how domain theories generate the hypothesis spaces necessary for Bayesian generalization, and how these theories may themselves be acquired as the products of higher-order statistical inferences.
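To give a flavor of what "Bayesian generalization over a theory-generated hypothesis space" means, here is a minimal toy sketch in Python. The hypothesis space, the uniform prior, and the strong-sampling ("size principle") likelihood are illustrative assumptions, not the speaker's actual model:

    # Toy Bayesian concept learning over number concepts (illustrative only).
    # Each hypothesis is a candidate extension of the concept; smaller
    # hypotheses get higher likelihood per consistent example (size principle).
    HYPOTHESES = {
        "even":            {n for n in range(1, 101) if n % 2 == 0},
        "odd":             {n for n in range(1, 101) if n % 2 == 1},
        "powers of 2":     {2 ** k for k in range(1, 7)},   # 2 .. 64
        "multiples of 10": set(range(10, 101, 10)),
    }

    def posterior(examples):
        """P(h | examples) under a uniform prior and likelihood
        P(examples | h) = (1/|h|)^n for hypotheses consistent with the data."""
        scores = {}
        n = len(examples)
        for name, h in HYPOTHESES.items():
            consistent = all(x in h for x in examples)
            scores[name] = (1.0 / len(h)) ** n if consistent else 0.0
        total = sum(scores.values())
        return {name: s / total for name, s in scores.items()} if total else scores

    def prob_in_concept(y, examples):
        """Generalize to a new item y by averaging over the posterior."""
        post = posterior(examples)
        return sum(p for name, p in post.items() if y in HYPOTHESES[name])

    if __name__ == "__main__":
        # After seeing 16, 8, 2, "powers of 2" dominates even though
        # "even" is also consistent -- few examples suffice.
        print(posterior([16, 8, 2]))
        print(prob_in_concept(32, [16, 8, 2]))   # high
        print(prob_in_concept(10, [16, 8, 2]))   # low

The point of the sketch is that sharp generalization from three examples falls out of ordinary Bayesian bookkeeping once the hypothesis space is in place; the talk's claim is that domain theories are what supply that space.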
I will also show how our approach to modeling human learning motivates new machine learning techniques for semi-supervised learning: generalizing from very few labeled examples with the aid of a large sample of unlabeled data.
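For concreteness, here is one standard instance of that idea: graph-based label propagation, where unlabeled points pick up labels from nearby labeled ones. This is a generic illustration of semi-supervised learning from few labels, not necessarily the technique the talk will present; the data, bandwidth sigma, and iteration count are all made up for the example:

    # Semi-supervised label propagation on a similarity graph (illustrative).
    import numpy as np

    def label_propagation(X, y, n_iter=100, sigma=0.5):
        """X: (n, d) points; y: (n,) labels in {0, 1}, with -1 for unlabeled.
        Returns a score per point for belonging to class 1."""
        # Gaussian affinities: nearby points influence each other more.
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        W = np.exp(-d2 / (2 * sigma ** 2))
        np.fill_diagonal(W, 0.0)
        P = W / W.sum(axis=1, keepdims=True)   # row-normalized transitions

        f = np.where(y == 1, 1.0, 0.0)          # initial class-1 scores
        labeled = y != -1
        for _ in range(n_iter):
            f = P @ f                           # diffuse scores along the graph
            f[labeled] = (y[labeled] == 1)      # clamp the labeled points
        return f

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        # Two clusters, one labeled point per cluster; the other 38
        # points are unlabeled.
        X = np.vstack([rng.normal(0, 0.3, (20, 2)),
                       rng.normal(3, 0.3, (20, 2))])
        y = -np.ones(40, dtype=int)
        y[0], y[20] = 0, 1
        scores = label_propagation(X, y)
        print((scores > 0.5).astype(int))  # unlabeled points inherit cluster labels

With just two labeled examples, the cluster structure of the unlabeled data carries the labels to all forty points -- the same moral as the talk's: unlabeled data supplies the structure that makes tiny labeled samples go far.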
Bio:
Josh Tenenbaum received his B.S. in Physics from Yale University in 1993, and his Ph.D. in Brain and Cognitive Sciences from MIT in 1999. After a short stint in Stanford's Psychology Department, he is back home at MIT, where he is an assistant professor in BCS with a cross appointment in CSAIL.
*******************************************************************