Syllabus

Course Meeting Times

Lectures: 2 sessions / week, 1.5 hours / session

Overview

This course is an introduction to computational theories of human cognition. Drawing on formal models from classic and contemporary artificial intelligence, we will explore fundamental issues in human knowledge representation, inductive learning and reasoning. What are the forms that our knowledge of the world takes? What are the inductive principles that allow us to acquire new knowledge from the interaction of prior knowledge with observed data? What kinds of data must be available to human learners, and what kinds of innate knowledge (if any) must they have? Class sessions will comprise a mixture of lectures and discussion. Readings will include seminal and state-of-the-art research papers from the cognitive, AI, and machine learning literatures, as well as textbook chapters and tutorials on technical approaches. Assignments will consist of several problem sets and a final modeling project or paper.

We will cover a range of formal modeling approaches and their applications to understanding core areas of cognition. Cognitive science topics will include:

  • Concept Learning and Categorization
  • Reasoning about Natural Kinds
  • Learning Causal Relations
  • The Structure and Formation of Intuitive Theories of Physical, Biological and Social Systems
  • The Acquisition of Natural Language (syntax and semantics)
  • Theory of Mind: How We Understand the Behavior and Mental States of Other People

Formal modeling topics will include:

  • Bayesian Inference and Hierarchical Bayesian Models (see the brief sketch following this list)
  • Frameworks for Knowledge Representation: First-order Logic, Formal Grammars, Associative Networks, Taxonomic Hierarchies, Relational Schemas
  • Probabilistic and Causal Graphical Models
  • Relational Probabilistic Models
  • Controlling Complexity: Minimum Description Length, Bayesian Occam's Razor, Nonparametric Bayesian Models
  • Inductive Logic Programming
  • Sampling Algorithms for Inference in Complex Probabilistic Models
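
To give a flavor of this modeling style, here is a minimal sketch of Bayesian inference: computing a posterior over a coin's bias from a handful of flips. The beta-binomial setup, the Python code, and all variable names are illustrative choices only, not course material (course assignments use MATLAB® or another high-level language).

    # Minimal sketch of Bayesian inference (illustrative, not course material):
    # posterior over a coin's bias theta given observed flips, with a uniform prior.
    import numpy as np

    heads, tails = 7, 3                      # hypothetical observed data
    theta = np.linspace(0, 1, 101)           # grid of candidate bias values
    prior = np.ones_like(theta)              # uniform (Beta(1,1)) prior over theta
    likelihood = theta**heads * (1 - theta)**tails
    posterior = prior * likelihood
    posterior /= np.trapz(posterior, theta)  # normalize so it integrates to 1

    print("Posterior mean of theta:", np.trapz(theta * posterior, theta))

The later modeling topics extend this basic recipe: hierarchical and nonparametric models enrich the prior, and sampling algorithms replace the grid when exact computation is intractable.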

The syllabus will balance presentations of state-of-the-art material with a broad historical perspective. Depth of presentation will vary across topics, from brief overviews in some areas to more technical and detailed coverage in others.

Prerequisites

The prerequisite is a class in probability or statistics (e.g., 9.07 Statistical Methods in Brain and Cognitive Science; 18.05 Introduction to Probability and Statistics; or 6.041 Probabilistic Systems Analysis and Applied Probability). A class in artificial intelligence or machine learning would be helpful but is not necessary, as the relevant material will be reviewed in this class. Experience in programming (particularly in a high-level language such as MATLAB®) is desirable.

Readings

There is no single required text for this class. Russell, Stuart J., and Peter Norvig. Artificial Intelligence: A Modern Approach (AIMA). 2nd ed. Upper Saddle River, N.J.: Prentice Hall/Pearson Education, 2003. ISBN: 0137903952, is strongly recommended as background reading on relevant formal models. Readings will consist of papers from the cognitive literature, background material from AIMA, and several texts and tutorials in machine learning.

Discussion Board

Short (≈ 1 paragraph) responses to the readings or assigned questions are due by 10 a.m. on the day of class. Post them directly to the class discussion board on the MIT server.

You must submit 20 posts for full credit, with no more than two posts counting in any one week. These can include short responses to other people's posts, as long as your responses are thoughtful and in some way address the assigned readings and questions.

Assignments

  • Four Problem Sets (involving minimal programming in MATLAB® or another high-level programming language)
  • A Final Project or Paper on Cognitive Modeling
    • For Graduate Students: At the level of a short conference paper
    • For Undergraduates: Critically discuss, or implement and extend, an existing model
  • Participation in Class and on the Web-based Discussion Board

Grading

The grade distribution is approximately 40% problem sets, 40% project, and 20% discussion/participation.