Abstract: The upcoming generation of cosmological surveys will aim to map the Universe in great detail and on an unprecedented scale, but this involves new and outstanding challenges at every level of the scientific analysis, from pixel-level data reduction to cosmological inference.
As powerful as Deep Learning has proven to be in recent years, in most cases a DL approach alone is insufficient to meet these challenges: it is typically plagued by issues of robustness to covariate shifts, interpretability, and proper uncertainty quantification, which impede its use in a scientific analysis.
In this talk, I will instead advocate for a unified approach merging the robustness and interpretability of physical models, the proper uncertainty quantification provided by a Bayesian framework, and the inference methodologies and computational frameworks brought about by the Deep Learning revolution.
In particular, I will illustrate how Neural Density Estimation methods can open new ways to leverage numerical simulations for Bayesian inference, replacing the need for analytic models in applications ranging from inferring intermediate cosmological fields, such as weak-lensing convergence maps, all the way to cosmological parameters. On another front, I will present our efforts to develop tools for automatically differentiable physical models, from analytic cosmological observables to N-body simulations, opening the door to a range of novel and efficient gradient-based inference techniques and allowing for fast hybrid physical/ML models.
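To make the first idea concrete, here is a minimal toy sketch (not from the talk) of the logic behind density estimation for simulation-based inference: fit a conditional density q(theta | x) to simulated (parameter, data) pairs, so that the posterior is learned from the simulator alone, with no analytic likelihood ever evaluated. The simulator, prior, and the linear-Gaussian form of the estimator below are all illustrative assumptions chosen so the fit stays a simple least-squares problem.

```python
# Toy sketch of simulation-based inference via conditional density
# estimation. A real analysis would use a neural density estimator;
# here the estimator is a conditional Gaussian q(theta | x) = N(w*x + b, s2),
# which is enough to show the mechanics. All names are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Step 1: simulate. Draw parameters from the prior and push them through
# a noisy forward model: theta ~ N(0, 1), x = theta + N(0, 0.5^2).
n = 5000
theta = rng.normal(0.0, 1.0, n)
x = theta + rng.normal(0.0, 0.5, n)

# Step 2: fit the conditional density by maximum likelihood. For a
# linear mean this reduces to least squares; s2 is the residual variance.
w, b = np.polyfit(x, theta, 1)
resid = theta - (w * x + b)
s2 = resid.var()

# For this linear-Gaussian toy problem the exact posterior mean slope is
# 1 / (1 + 0.5^2) = 0.8 and the posterior variance is 0.25 / 1.25 = 0.2,
# so the fitted (w, s2) should land close to (0.8, 0.2).
print(w, s2)
```

The point of the sketch is that step 2 never touches the likelihood of the forward model, only samples from it, which is what makes the approach applicable when the forward model is a numerical simulation.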
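The second idea, gradient-based inference through a differentiable physical model, can likewise be sketched on a toy problem (again, an illustration rather than the talk's actual tools): a simple analytic observable m(t; A, w) = A sin(wt) whose gradients with respect to the parameters are written out by hand, standing in for what an autodiff framework would provide for a full cosmological model.

```python
# Toy sketch of gradient-based parameter inference through a
# differentiable forward model. The model, data, and learning rate are
# illustrative assumptions; in practice the gradients would come from
# automatic differentiation of a physical simulation code.
import numpy as np

t = np.linspace(0.0, 2.0 * np.pi, 200)
A_true, w_true = 1.5, 2.0
rng = np.random.default_rng(1)
data = A_true * np.sin(w_true * t) + rng.normal(0.0, 0.05, t.size)

def model(A, w):
    # Differentiable analytic observable: m(t; A, w) = A * sin(w * t).
    return A * np.sin(w * t)

def loss_and_grad(A, w):
    # Chi-square loss and its analytic gradients w.r.t. (A, w).
    r = model(A, w) - data
    loss = 0.5 * np.sum(r**2)
    dA = np.sum(r * np.sin(w * t))
    dw = np.sum(r * A * t * np.cos(w * t))
    return loss, dA, dw

# Plain gradient descent on the two parameters, starting near the truth.
A, w = 1.0, 1.8
lr = 1e-4
for _ in range(5000):
    _, dA, dw = loss_and_grad(A, w)
    A -= lr * dA
    w -= lr * dw
print(A, w)  # should approach (1.5, 2.0)
```

The same gradients could equally drive Hamiltonian Monte Carlo or variational inference rather than a point estimate, which is the efficiency gain that differentiable models buy in high-dimensional settings.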
Link to the Event Video