Learning from examples and decision-making under uncertainty are two key components of intelligent systems. Understanding the general mechanisms for achieving these goals is important both for building artificial systems and for searching for biological implementations of these principles. In this talk, we present two recent and unexpected findings in the study of learning and decision-making. First, we show that for certain forms of reinforcement learning, the problem of exploration can be completely decoupled from the problem of reward estimation. This finding supports the conjecture that the required exploration and exploitation computations are performed by two distinct brain regions. The second part of the talk focuses on a deceptively simple question: can a learning method generalize if it has memorized the training data? Contrary to the common wisdom taught in machine learning and statistics courses, we show that a number of learning procedures, including kernel regression and over-parametrized neural networks, can generalize well in the memorization regime.
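The memorization phenomenon mentioned above can be seen in a toy experiment. The sketch below is purely illustrative (not the speakers' construction): it fits ridgeless, interpolating kernel regression with a Laplacian kernel to noisy one-dimensional data. The target function, noise level, bandwidth, and sample sizes are all assumptions chosen for demonstration. The fit drives training error to (numerically) zero, i.e. it memorizes the noisy labels, yet its test error against the clean target remains far below that of a trivial constant predictor.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy target and noisy training labels.
def f(x):
    return np.sin(2 * np.pi * x)

n_train, n_test = 200, 500
x_train = rng.uniform(0, 1, n_train)
y_train = f(x_train) + 0.3 * rng.normal(size=n_train)  # labels corrupted by noise
x_test = rng.uniform(0, 1, n_test)

# Laplacian kernel k(x, z) = exp(-|x - z| / h); ridgeless (zero-regularization)
# solve, so the predictor interpolates every noisy training label exactly.
h = 0.01
K = np.exp(-np.abs(x_train[:, None] - x_train[None, :]) / h)
alpha = np.linalg.solve(K, y_train)

def predict(x):
    Kx = np.exp(-np.abs(x[:, None] - x_train[None, :]) / h)
    return Kx @ alpha

train_mse = np.mean((predict(x_train) - y_train) ** 2)  # ~0: memorizes the noise
test_mse = np.mean((predict(x_test) - f(x_test)) ** 2)  # yet small on clean targets
```

Here `train_mse` is zero up to numerical error despite the label noise, while `test_mse` stays well below the variance of the target (about 0.5 for this sine), illustrating that interpolation and generalization are not mutually exclusive.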