Artificial Intelligence and Statistics 2024


Program Schedule

All times are CEST (Central European Summer Time).

An interactive calendar version of the schedule is also available.

Registration Desk

The registration desk is open on:

Schedule for Day 1: Thursday, May 2

Time (CEST) | Day 1: Thursday, May 2
08:45-09:00 Opening remarks
09:00-10:00 Keynote Talk: Matt Hoffman (DeepMind)
Running Many-Chain MCMC on Cheap GPUs
Traditional Markov chain Monte Carlo (MCMC) workflows run a small number of parallel chains for many steps. This workflow made sense in the days when parallel computation was limited to the number of cores on one's CPU, since it lets one amortize the cost of burn-in/warmup (i.e., discarding early samples to eliminate bias) over as many useful samples as possible. But modern commodity GPUs (<$500 retail, or a few dollars per hour to rent in the cloud), in conjunction with modern software frameworks like TensorFlow Probability, make it possible to run 50–100 chains in parallel without paying much of a price in wallclock time. For many Bayesian inference problems, this means that we can get reasonably low-variance estimates of posterior expectations using as few as one sample per chain, dramatically reducing the time we spend waiting for results. In this talk, I will present some of our work on MCMC algorithms and workflows that can take full advantage of GPUs.
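As a rough illustration of the many-chain workflow the abstract describes (a minimal sketch, not code from the talk), TensorFlow Probability's tfp.mcmc API vectorizes a transition kernel over a leading chain dimension, so 100 chains cost little more than one on a GPU. The toy Gaussian target, chain count, and step-size settings below are illustrative assumptions.

```python
import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions

# Toy target: a 10-dimensional standard normal "posterior" (an assumption
# for illustration; any differentiable log-density works the same way).
target = tfd.MultivariateNormalDiag(loc=tf.zeros(10))

num_chains = 100  # many short chains instead of a few long ones

kernel = tfp.mcmc.HamiltonianMonteCarlo(
    target_log_prob_fn=target.log_prob,
    step_size=0.3,
    num_leapfrog_steps=5,
)

# One batched initial state of shape [num_chains, 10]; the kernel runs
# all chains in lockstep over the leading dimension.
init = tf.random.normal([num_chains, 10])

samples = tfp.mcmc.sample_chain(
    num_results=10,        # only a few post-warmup draws per chain
    num_burnin_steps=500,  # warmup cost is shared by all chains
    current_state=init,
    kernel=kernel,
    trace_fn=None,
)

# samples has shape [10, num_chains, 10]; average over draws and chains
# to estimate posterior expectations.
posterior_mean = tf.reduce_mean(samples, axis=[0, 1])
```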
10:00-10:30 Coffee break
10:30-11:30
Oral Session 1 | Reinforcement Learning & Optimization
Session Chair: Ilija Bogunovic
  • Conformal Contextual Robust Optimization
  • Near-Optimal Policy Optimization for Correlated Equilibrium in General-Sum Markov Games
  • Model-based Policy Optimization under Approximate Bayesian Inference
  • Online Learning of Decision Trees with Thompson Sampling
11:30-12:30
Panel | Establishing Your Own Research Path in the Era of Big Compute
Panelists: Matthew D. Hoffman, Aaditya Ramdas, Jennifer Dy, Alexia Jolicoeur-Martineau
Moderator: Yingzhen Li
12:30-14:00 Lunch break
14:00-15:15
Oral Session 2 | Optimization
Session Chair: Ali Vakilian
  • The Sample Complexity of ERM in Euclidean Stochastic Convex Optimization
  • Stochastic Methods in Variational Inequalities: Ergodicity, Bias and Refinements
  • Absence of spurious solutions far from ground truth: A low-rank analysis with high-order losses
  • Learning-Based Algorithms for Graph Searching Problems
  • Graph Partitioning with a Move Budget
15:15-15:45 Coffee break
15:45-17:00
Oral Session 3 | Probabilistic Methods
Session Chair: Alexander Terenin
  • Neural McKean-Vlasov Processes: Distributional Dependence in Diffusion Models
  • Reparameterized Variational Rejection Sampling
  • Intrinsic Gaussian Vector Fields on Manifolds
  • Generative Flow Networks as Entropy-Regularized RL
  • Robust Approximate Sampling via Stochastic Gradient Barker Dynamics
17:00-19:00 Poster session 1

Schedule for Day 2: Friday, May 3

Time (CEST) | Day 2: Friday, May 3
08:00-09:00 Mentoring Event | TBD
09:00-10:00 Keynote Talk: Aaditya Ramdas (CMU)
Conformal Online Model Aggregation
Conformal prediction equips machine learning models with a reasonable notion of uncertainty quantification without making strong distributional assumptions. It wraps around any black-box prediction model and converts point predictions into set predictions that have a predefined marginal coverage guarantee. However, conformal prediction only works if we fix the underlying machine learning model in advance. A relatively unaddressed issue in conformal prediction is that of model selection and/or aggregation: for a given problem, which of the plethora of prediction methods (random forests, neural nets, regularized linear models, etc.) should we conformalize? This talk presents a new approach to conformal model aggregation in online settings, based on combining the prediction sets from several algorithms by voting, with weights on the models adapted over time based on past performance.
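The voting-and-reweighting idea can be sketched in a few lines. The NumPy sketch below is a hypothetical illustration, not the talk's algorithm: prediction sets are encoded as indicator vectors over candidate labels, the helper names vote_set and update_weights are made up, and the 0.5 voting threshold and exponential-weights update rule are assumptions.

```python
import numpy as np

def vote_set(sets, weights, threshold=0.5):
    """Keep a label if the models whose sets include it carry more
    than `threshold` of the total weight (an assumed voting rule)."""
    votes = sum(w * np.asarray(s, dtype=float) for w, s in zip(weights, sets))
    return votes / weights.sum() >= threshold

def update_weights(weights, sets, y_true, eta=0.1):
    """Exponential-weights style update (an assumed rule): down-weight
    models whose prediction set missed the realized label."""
    losses = np.array([0.0 if s[y_true] else 1.0 for s in sets])
    return weights * np.exp(-eta * losses)

# Toy round: 3 conformalized models over 4 candidate labels.
weights = np.ones(3)
sets = [np.array([1, 1, 0, 0]),   # model 1's prediction set
        np.array([0, 1, 1, 0]),   # model 2's
        np.array([1, 1, 1, 0])]   # model 3's
aggregated = vote_set(sets, weights)   # -> [True, True, True, False]
weights = update_weights(weights, sets, y_true=1)
```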
10:00-10:30 Coffee break
10:30-11:30
Oral Session 4 | Bandits & Causality
Session Chair: Cem Tekin
  • Positivity-free Policy Learning with Observational Data
  • Best-of-Both-Worlds Algorithms for Linear Contextual Bandits
  • Learning Policies for Localized Interventions from Observational Data
  • Exploration via Linearly Perturbed Loss Minimisation
11:30-12:30
Oral Session 5 | Causality
Session Chair: Junpei Komiyama
  • Membership Testing in Markov Equivalence Classes via Independence Queries
  • Causal Modeling with Stationary Diffusions
  • On the Misspecification of Linear Assumptions in Synthetic Controls
  • General Identifiability and Achievability for Causal Representation Learning
12:30-14:00 Lunch break & Mentoring Event
14:00-15:00 Test of Time Award
15:00-15:30
Oral Session 6 | Applications
Session Chair: Gavin Kerrigan
  • Equivariant bootstrapping for uncertainty quantification in imaging inverse problems
  • Mixed Models with Multiple Instance Learning
15:30-16:00 Coffee break
16:00-17:00
Oral Session 7 | General Machine Learning
Session Chair: Guojun Zhang
  • End-to-end Feature Selection Approach for Learning Skinny Trees
  • Probabilistic Modeling for Sequences of Sets in Continuous-Time
  • Learning to Defer to a Population: A Meta-Learning Approach
  • An Impossibility Theorem for Node Embedding
17:00-19:00 Poster session 2

Schedule for Day 3: Saturday, May 4

Time (CEST) | Day 3: Saturday, May 4
08:00-09:00 Mentoring Event | TBD
09:00-10:00 Keynote Talk: Stefanie Jegelka (MIT and TUM)
Learning with Symmetries: Eigenvectors, Graph Representations and Sample Complexity
In many applications, especially in the sciences, data and tasks have known invariances. Encoding such invariances directly into a machine learning model can improve learning outcomes, while also posing challenges for efficient model design. In the first part of the talk, we will focus on the invariances relevant to eigenvectors and eigenspaces being inputs to a neural network. Such inputs are important, for instance, for graph representation learning, point clouds and graphics. We will discuss targeted architectures that express the relevant invariances or equivariances - sign flips and changes of basis - and their theoretical and empirical benefits in different applications. Second, we will take a broader, theoretical perspective. Empirically, it is known that encoding invariances into the machine learning model can reduce sample complexity. What can we say theoretically? We will look at example results for various settings and models. This talk is based on joint work with Derek Lim, Joshua Robinson, Behrooz Tahmasebi, Thien Le, Hannah Lawrence, Bobak Kiani, Lingxiao Zhao, Tess Smidt, Suvrit Sra, Haggai Maron and Melanie Weber.
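For the sign-flip invariance mentioned in the abstract, one common construction symmetrizes a learned feature map over the sign group: since an eigenvector v and -v span the same eigenspace, feeding phi(v) + phi(-v) to a readout makes the output independent of the sign choice. The PyTorch sketch below is a minimal illustration of that idea under assumed toy dimensions, not necessarily the architecture from the talk.

```python
import torch
import torch.nn as nn

class SignInvariantEncoder(nn.Module):
    """Sketch of a sign-invariant encoder for eigenvector inputs."""

    def __init__(self, dim_in, dim_hidden, dim_out):
        super().__init__()
        self.phi = nn.Sequential(
            nn.Linear(dim_in, dim_hidden), nn.ReLU(),
            nn.Linear(dim_hidden, dim_hidden),
        )
        self.rho = nn.Linear(dim_hidden, dim_out)

    def forward(self, v):
        # Summing phi over the sign orbit {v, -v} enforces invariance.
        return self.rho(self.phi(v) + self.phi(-v))

enc = SignInvariantEncoder(8, 32, 16)   # assumed toy dimensions
v = torch.randn(4, 8)                   # a batch of eigenvectors
assert torch.allclose(enc(v), enc(-v))  # output ignores sign flips
```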
10:00-10:30 Coffee break
10:30-11:30
Oral Session 8 | Deep Learning
Session Chair: Masashi Sugiyama
  • Mind the GAP: Improving Robustness to Subpopulation Shifts with Group-Aware Priors
  • Functional Flow Matching
  • Deep Classifier Mimicry without Data Access
  • Multi-Resolution Active Learning of Fourier Neural Operators
11:30-12:30
Oral Session 9 | Statistics
Session Chair: Aymeric Dieuleveut
  • Transductive Conformal Inference with Adaptive Scores
  • Approximate Leave-one-out Cross Validation for Regression with l1 Regularizers
  • Failures and Successes of Cross-Validation for Early-Stopped Gradient Descent in High-Dimensional Least Squares
  • Testing Exchangeability by Pairwise Betting
12:30-14:00 Lunch break
14:00-15:00
Oral Session 10 | Trustworthy ML
Session Chair: Dennis Wei
  • Efficient Data Valuation for Weighted Nearest Neighbor Algorithms
  • Operationalizing Counterfactual Metrics: Incentives, Ranking, and Information Asymmetry
  • Joint Selection: Adaptively Incorporating Public Information for Private Synthetic Data
  • Is This Model Reliable for Everyone? Testing for Strong Calibration
15:00-17:00 Poster session 3
Copyright © AISTATS 2024. All rights reserved.