Speaker: Jake Snell
Date: Feb 19, 11:45am-12:45pm

Abstract: How can we build AI systems that produce more reliable outputs while consuming less data? In this talk, I will show how to answer this question by combining deep learning with tools from probabilistic modeling. First, I will show how to transfer the favorable generalization properties of probabilistic models to deep neural networks via meta-learning. Second, I will demonstrate how probabilistic inference can enable safe deployment of black-box AI models by producing rich probabilistic guarantees about their performance. I will conclude with future directions that integrate these approaches to build deep neural networks with capabilities that are verifiable by design.

Biographical Sketch: Jake Snell is a postdoctoral researcher at Princeton University working with Tom Griffiths. He earned his Ph.D. in Computer Science from the University of Toronto in 2021, advised by Richard Zemel. His research focuses on the intersection of deep learning and probabilistic modeling to build adaptable and reliable machine learning algorithms. He is a recipient of the Schmidt DataX Postdoctoral Fellowship and was a finalist for the best student paper award at ICIP 2017.

Location and Zoom link: Zoom only at https://fsu.zoom.us/j/97845031119