Program Schedule
All times are CEST. You can check the current CEST time here.
Please also see this link for an interactive calendar containing the schedule.
Registration Desk
The registration desk is open:
- 7:30 - 17:00 on Thu, May 2nd
- 7:00 - 17:00 on Fri, May 3rd and Sat, May 4th
Schedule for Day 1: Thursday, May 2
Time (CEST) | Day 1: Thursday, May 2 |
---|---|
08:45-09:00 | Opening remarks |
09:00-10:00 | Keynote Talk: Matt Hoffman (DeepMind), "Running Many-Chain MCMC on Cheap GPUs". Traditional Markov chain Monte Carlo (MCMC) workflows run a small number of parallel chains for many steps. This workflow made sense in the days when parallel computation was limited to the number of cores on one's CPU, since it lets one amortize the cost of burn-in/warmup (i.e., discarding early samples to eliminate bias) over as many useful samples as possible. But modern commodity GPUs (<$500 retail, or a few dollars per hour to rent in the cloud), in conjunction with modern software frameworks like TensorFlow Probability, make it possible to run 50–100 chains in parallel without paying much of a price in wallclock time. For many Bayesian inference problems, this means that we can get reasonably low-variance estimates of posterior expectations using as little as one sample per chain, dramatically reducing the time we spend waiting for results. In this talk, I will present some of our work on MCMC algorithms and workflows that can take full advantage of GPUs. (An illustrative sketch of many-chain sampling follows this table.) |
10:00-10:30 | Coffee break |
10:30-11:30 | Oral Session 1: Reinforcement Learning & Optimization. Session Chair: Ilija Bogunovic |
11:30-12:30 | Panel: Establishing Your Own Research Path in the Era of Big Compute. Panelists: Matthew D. Hoffman, Aaditya Ramdas, Jennifer Dy, Alexia Jolicoeur-Martineau. Moderator: Yingzhen Li |
12:30-14:00 | Lunch break |
14:00-15:15 | Oral Session 2: Optimization. Session Chair: Ali Vakilian |
15:15-15:45 | Coffee break |
15:45-17:00 | Oral Session 3: Probabilistic Methods. Session Chair: Alexander Terenin |
17:00-19:00 | Poster session 1 |
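For readers curious about the many-chain workflow described in the Day 1 keynote abstract, here is a minimal, illustrative sketch (not the speaker's code) of running many Hamiltonian Monte Carlo chains in parallel with TensorFlow Probability, which the abstract mentions. The standard-normal target, the chain count of 64, and the step-size/leapfrog settings are arbitrary placeholder choices; the point is that a batched `current_state` makes each chain just another batch element on the GPU.

```python
import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions

# Toy target: a standard normal "posterior" (placeholder for a real model).
target = tfd.Normal(loc=0.0, scale=1.0)

num_chains = 64  # each chain is just another batch element on the GPU

kernel = tfp.mcmc.HamiltonianMonteCarlo(
    target_log_prob_fn=target.log_prob,
    step_size=0.1,
    num_leapfrog_steps=3,
)

# A leading batch dimension in `current_state` runs all chains in parallel,
# so a short run per chain can still yield many posterior samples in total.
samples = tfp.mcmc.sample_chain(
    num_results=10,                # few samples per chain...
    num_burnin_steps=500,          # ...after a shared warmup phase
    current_state=tf.zeros([num_chains]),
    kernel=kernel,
    trace_fn=None,
)

posterior_mean = tf.reduce_mean(samples)  # average over steps and chains
print(posterior_mean.numpy())
```

With a real model, `target.log_prob` would be replaced by the model's joint log-density; the batched-state pattern stays the same.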
Schedule for Day 2: Friday, May 3
Time (CEST) | Day 2: Friday, May 3 |
---|---|
08:00-09:00 | Mentoring Event (TBD) |
09:00-10:00 | Keynote Talk: Aaditya Ramdas (CMU), "Conformal Online Model Aggregation". Conformal prediction equips machine learning models with a reasonable notion of uncertainty quantification without making strong distributional assumptions. It wraps around any black-box prediction model and converts point predictions into set predictions that have a predefined marginal coverage guarantee. However, conformal prediction only works if we fix the underlying machine learning model in advance. A relatively unaddressed issue in conformal prediction is that of model selection and/or aggregation: for a given problem, which of the plethora of prediction methods (random forests, neural nets, regularized linear models, etc.) should we conformalize? This talk presents a new approach towards conformal model aggregation in online settings that is based on combining the prediction sets from several algorithms by voting, where weights on the models are adapted over time based on past performance. (An illustrative sketch of conformal prediction sets follows this table.) |
10:00-10:30 | Coffee break |
10:30-11:30 | Oral Session 4: Bandits & Causality. Session Chair: Cem Tekin |
11:30-12:30 | Oral Session 5: Causality. Session Chair: Junpei Komiyama |
12:30-14:00 | Lunch break & Mentoring Event |
14:00-15:00 | Test of Time Award |
15:00-15:30 | Oral Session 6: Applications. Session Chair: Gavin Kerrigan |
15:30-16:00 | Coffee break |
16:00-17:00 | Oral Session 7: General Machine Learning. Session Chair: Guojun Zhang |
17:00-19:00 | Poster session 2 |
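As background for the Day 2 keynote abstract, here is a rough sketch of the basic conformal wrapper it describes (split conformal intervals around a black-box regressor), together with a simple exponential-weights update that stands in for "adapting model weights based on past performance". This is an illustrative simplification, not the speaker's aggregation method; the function names, the absolute-residual score, and the learning rate `eta` are choices made only for this example.

```python
import numpy as np

def split_conformal_interval(cal_pred, cal_y, test_pred, alpha=0.1):
    """Turn a black-box model's point prediction into an interval with
    roughly 1 - alpha marginal coverage, using a held-out calibration set."""
    scores = np.abs(cal_y - cal_pred)                 # nonconformity scores
    n = len(scores)
    k = min(int(np.ceil((n + 1) * (1 - alpha))), n)   # finite-sample rank
    q = np.sort(scores)[k - 1]                        # conformal quantile
    return test_pred - q, test_pred + q

def update_model_weights(weights, miscovered, eta=1.0):
    """Exponential-weights style update: down-weight models whose prediction
    sets missed the realized label (a crude proxy for past performance)."""
    weights = weights * np.exp(-eta * miscovered.astype(float))
    return weights / weights.sum()

# Toy usage with synthetic calibration residuals.
rng = np.random.default_rng(0)
cal_pred, cal_y = rng.normal(size=200), rng.normal(size=200)
lo, hi = split_conformal_interval(cal_pred, cal_y, test_pred=0.3)
weights = update_model_weights(np.ones(3) / 3, miscovered=np.array([0, 1, 0]))
print((lo, hi), weights)
```

The coverage guarantee of the interval holds marginally under exchangeability of calibration and test points; how to combine several such sets online while retaining guarantees is the subject of the talk.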
Schedule for Day 3: Saturday, May 4
Time (CEST) | Day 3: Saturday, May 4 |
---|---|
08:00-09:00 | Mentoring Event (TBD) |
09:00-10:00 | Keynote Talk: Stefanie Jegelka (MIT and TUM), "Learning with Symmetries: Eigenvectors, Graph Representations and Sample Complexity". In many applications, especially in the sciences, data and tasks have known invariances. Encoding such invariances directly into a machine learning model can improve learning outcomes, while it also poses challenges for efficient model design. In the first part of the talk, we will focus on the invariances relevant to eigenvectors and eigenspaces being inputs to a neural network. Such inputs are important, for instance, for graph representation learning, point clouds and graphics. We will discuss targeted architectures that express the relevant invariances or equivariances - sign flips and changes of basis - and their theoretical and empirical benefits in different applications. Second, we will take a broader, theoretical perspective. Empirically, it is known that encoding invariances into the machine learning model can reduce sample complexity. What can we say theoretically? We will look at example results for various settings and models. This talk is based on joint work with Derek Lim, Joshua Robinson, Behrooz Tahmasebi, Thien Le, Hannah Lawrence, Bobak Kiani, Lingxiao Zhao, Tess Smidt, Suvrit Sra, Haggai Maron and Melanie Weber. (An illustrative sketch of a sign-invariant encoder follows this table.) |
10:00-10:30 | Coffee break |
10:30-11:30 | Oral Session 8: Deep Learning. Session Chair: Masashi Sugiyama |
11:30-12:30 | Oral Session 9: Statistics. Session Chair: Aymeric Dieuleveut |
12:30-14:00 | Lunch break |
14:00-15:00 | Oral Session 10: Trustworthy ML. Session Chair: Dennis Wei |
15:00-17:00 | Poster session 3 |
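The Day 3 keynote abstract mentions architectures for eigenvector inputs that respect sign-flip invariance (an eigenvector v and its negation -v describe the same eigenspace). A minimal sketch of that idea, assuming a tiny NumPy MLP `phi` chosen purely for illustration: applying `phi` to both signs and summing makes the embedding sign-invariant by construction.

```python
import numpy as np

rng = np.random.default_rng(0)
d, h, out = 8, 16, 4                       # eigenvector dim, hidden width, output dim
W1 = rng.normal(size=(h, d)) / np.sqrt(d)  # randomly initialized weights
W2 = rng.normal(size=(out, h)) / np.sqrt(h)

def phi(v):
    # A tiny MLP applied to a single eigenvector (illustrative only).
    return W2 @ np.tanh(W1 @ v)

def sign_invariant_embedding(v):
    # Summing over both sign choices makes the output invariant to v -> -v.
    return phi(v) + phi(-v)

v = rng.normal(size=d)
assert np.allclose(sign_invariant_embedding(v), sign_invariant_embedding(-v))
```

Basis invariance for higher-dimensional eigenspaces requires more than this construction; the talk discusses architectures that handle both symmetries.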