Bayesian inference is ubiquitous in models and tools across cognitive science and neuroscience. While the mathematical formulation of Bayesian models in terms of a prior and a likelihood is simple, exact Bayesian inference is intractable for most models of interest. In this tutorial, we will cover a range of approximate inference methods, including sampling-based methods (e.g., MCMC, particle filters) and variational inference, and describe how neural networks can be used to speed up these methods. We will also introduce probabilistic programming languages, which provide tools for black-box Bayesian inference in complex models. Hands-on exercises include implementing inference algorithms for simple models and/or implementing complex models in a probabilistic programming language.
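To give a flavor of the sampling-based methods mentioned above, here is a minimal sketch of random-walk Metropolis MCMC for a toy model where it is easy to check the answer: observations drawn from a Gaussian with unknown mean `mu` (unit variance) and a standard-normal prior on `mu`. The function names, step size, and burn-in choice are illustrative assumptions, not part of the tutorial materials.

```python
import math
import random

def log_posterior(mu, data):
    # Unnormalized log posterior: N(0, 1) prior on mu
    # plus N(mu, 1) likelihood for each observation.
    log_prior = -0.5 * mu ** 2
    log_lik = sum(-0.5 * (y - mu) ** 2 for y in data)
    return log_prior + log_lik

def metropolis(data, n_steps=20000, step_size=0.5, seed=0):
    """Random-walk Metropolis sampler for the toy Gaussian model."""
    rng = random.Random(seed)
    mu = 0.0
    samples = []
    for _ in range(n_steps):
        proposal = mu + rng.gauss(0.0, step_size)
        # Accept with probability min(1, posterior ratio),
        # computed in log space for numerical stability.
        log_accept = log_posterior(proposal, data) - log_posterior(mu, data)
        if math.log(rng.random()) < log_accept:
            mu = proposal
        samples.append(mu)
    return samples[n_steps // 2:]  # discard the first half as burn-in

if __name__ == "__main__":
    data = [2.1, 1.8, 2.4, 2.0, 1.9]
    samples = metropolis(data)
    post_mean = sum(samples) / len(samples)
    # For this conjugate model the exact posterior mean is
    # sum(data) / (len(data) + 1) = 1.7, so the estimate should be close.
    print(post_mean)
```

Because this prior and likelihood are conjugate, the exact posterior is available in closed form, which makes the toy model useful for sanity-checking the sampler; the same loop applies unchanged to models where no closed form exists, which is the setting the tutorial targets.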