Despite being reflexive, view-invariant object recognition in primates is a computationally complex task. These computations are thought to reside in the ventral visual stream, culminating in inferior temporal (IT) cortex. Recent research in machine learning has made great progress in modeling the computations of the primate ventral visual stream. While current machine learning approaches produce models that are highly predictive of the adult state of the ventral stream, the learning procedures themselves are not biologically plausible, requiring tens of thousands to millions of human-labeled training examples. Understanding primate visual development is therefore not only interesting from the perspective of neuroscience but also has practical value for building more robust learning algorithms capable of functioning in domains where large amounts of human-labeled training data are difficult or impossible to obtain. Better learning algorithms may also produce agents capable of adapting to and behaving in the world much as humans do. This thesis first describes work on predicting visual responses across the human ventral stream using convolutional neural networks (CNNs). We then describe a set of natural image statistics that high-performing CNNs automatically incorporate during supervised training; it is possible that primate development incorporates these or similar natural image statistics into synaptic strengths. Finally, we describe the first large-scale characterization of IT in 19- to 32-week-old macaques. While we find longer response latencies in these younger animals, we do not find any differences in representation between adults and juveniles, suggesting that, at 19-32 weeks of age, IT already supports robust object recognition consistent with that of adults. Our data provide an upper limit on the amount of training data needed to reach adult-level performance: approximately 2,800 hours of waking visual experience.