On the Cover

In everyday learning, people routinely make successful generalizations from very limited evidence. Even young children can infer the meanings of words, the hidden properties of objects, or the existence of causal relations from just one or a few relevant observations, far outstripping the capabilities of conventional learning machines. How do they do it? And how can we bring machines closer to these human-like learning abilities? These are the questions that Josh Tenenbaum’s lab studies.

The cover image illustrates one experiment conducted by Tenenbaum and graduate student Lauren Schmidt, testing people’s ability to learn words labeling object categories from very few examples. Subjects are first introduced to a world of unfamiliar but natural-looking objects, and then shown several examples of objects that belong to a particular category: for instance, the three objects enclosed in boxes are examples of “tufas.” The task is to pick out all of the other tufas. Given just a few examples, people can confidently judge which other objects belong to this category. The generalization judgments of both adults and children in such tasks can be modeled quantitatively as Bayesian inferences over a tree-structured hypothesis space, with objects organized into a tree based on their similarity along relevant perceptual dimensions.
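To make the modeling idea concrete, here is a minimal sketch of Bayesian generalization over a tree-structured hypothesis space. The object names and the three nested clusters below are invented for illustration; each hypothesis stands for one node of a similarity tree, identified with the set of objects beneath it. The likelihood follows the "size principle": under the assumption that examples are sampled uniformly from the true category, smaller consistent hypotheses are exponentially favored as examples accumulate.

```python
# Toy Bayesian generalization over a tree-structured hypothesis space.
# Hypotheses are nested clusters (nodes of a similarity tree); the object
# labels "a".."h" and the clusters themselves are made up for this sketch.
hypotheses = {
    "tight_cluster": {"a", "b", "c"},                       # e.g. the three "tufas"
    "mid_cluster":   {"a", "b", "c", "d", "e"},             # a broader subtree
    "all_objects":   {"a", "b", "c", "d", "e", "f", "g", "h"},  # the root
}

def posterior(examples, hypotheses):
    """Posterior over hypotheses given a set of example objects,
    assuming a uniform prior and 'strong sampling':
    P(examples | h) = (1/|h|)^n if h contains every example, else 0."""
    weights = {}
    for name, h in hypotheses.items():
        if examples <= h:
            weights[name] = (1.0 / len(h)) ** len(examples)
        else:
            weights[name] = 0.0
    z = sum(weights.values())
    return {name: w / z for name, w in weights.items()}

def prob_in_category(obj, examples, hypotheses):
    """Generalization probability: sum the posterior over every
    hypothesis (tree node) that contains the new object."""
    post = posterior(examples, hypotheses)
    return sum(p for name, p in post.items() if obj in hypotheses[name])

# After seeing three examples from the tight cluster, the smallest
# consistent hypothesis dominates, so generalization stays narrow.
post = posterior({"a", "b", "c"}, hypotheses)
print(post["tight_cluster"])                             # close to 1
print(prob_in_category("d", {"a", "b", "c"}, hypotheses))  # well below 1
```

With only one example the posterior spreads across all three nested hypotheses, so generalization is broad; with three examples from the tight cluster, the size principle concentrates belief on the smallest node, matching the intuition that a few "tufa" examples pick out a narrow category.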