AISTATS-97 Tutorial Program
NOTE: As of November 7th, we have changed the schedule for the Tutorial Program: due to demand, we have rescheduled so that attendees can attend both the Dawid and Jordan tutorials. The Mitchell and West tutorials will still run in parallel. Tutorial fees remain the same: attendees now attend three tutorials instead of two, at no change in cost.
General Information
- Each tutorial will last about 3 hours.
- Your tutorial attendance fee (see the registration form and instructions for registering) covers attendance at both the Dawid and Jordan tutorials and at one of the Mitchell and West tutorials (which will be held in parallel), plus a copy of notes from the tutorial speakers for all four tutorials.
- Date: January 4th, 1997
- Location: Radisson Bahia Mar Beach Resort
Tutorial A: 8:30 to 11:30am
Conditional Independence for Statistics and AI
A. P. Dawid, University College London
The axiomatic theory of Conditional Independence provides a general
language for formulating and answering questions relating to the
intuitive idea of "relevance" in a wide variety of
contexts. This tutorial will describe the basic theory and various
interesting models of it, with special emphasis on its use in
conjunction with modular graphical representations of problems in
Probability, Statistics and Expert Systems.
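For orientation, the core axioms of this theory (the semi-graphoid axioms, in their standard form from the literature; this summary is not an excerpt from the tutorial notes) can be written in LaTeX notation, reading X \perp Y \mid Z as "X is independent of Y given Z":

    X \perp Y \mid Z \implies Y \perp X \mid Z                                        (symmetry)
    X \perp (Y,W) \mid Z \implies X \perp Y \mid Z                                    (decomposition)
    X \perp (Y,W) \mid Z \implies X \perp Y \mid (Z,W)                                (weak union)
    X \perp Y \mid Z \text{ and } X \perp W \mid (Z,Y) \implies X \perp (Y,W) \mid Z  (contraction)

Probabilistic conditional independence satisfies all four, which is what allows separation criteria in graphical representations to stand in for probabilistic "relevance" calculations.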
Tutorial B: 12:30 to 3:30pm
Bayesian Time Series Analysis and Forecasting
Mike West, Duke University
This tutorial will survey the historical development and current status of Bayesian approaches
to time series modelling and analysis. Particular emphasis will be given to the conceptual bases
and foundations in: time-varying parameter models, sequential modelling and adaptive learning,
interventionist ideas and tools, and component model structuring. Developments of specific model
classes will be given with illustrative applications, leading through standard dynamic linear models,
non-linear and mixture models, multivariate models, and others. This will be complemented with
discussion of recent developments, especially in computation and simulation, and current research frontiers.
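As a point of reference, the standard dynamic linear model mentioned above pairs an observation equation with a state-evolution equation (written here in the common West-Harrison convention for orientation, not drawn from the tutorial notes):

    y_t      = F_t' \theta_t + \nu_t,          \nu_t    \sim N(0, V_t)    (observation)
    \theta_t = G_t \theta_{t-1} + \omega_t,    \omega_t \sim N(0, W_t)    (evolution)

The time-varying parameter vector \theta_t is updated sequentially as each observation arrives, which is the basis for the adaptive learning and intervention ideas described above.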
Tutorial C: 12:30 to 3:30pm
Learning in Information Agents
Tom M. Mitchell, Carnegie Mellon University
What kind of software agents should we create for our workstations and
the Internet over the next few years, and what role should be played
by AI and statistical methods in these systems? This tutorial will
examine some recent examples of software agents that learn from and
about users. For example, we will cover a newsreader that
automatically learns users' reading interests, and a web browser that
learns which hyperlinks to suggest based on user interests.
Underlying these applications are interesting new research problems
for machine learning and statistics, such as how best to learn when
the data consists of text, images, and other unstructured formats, how to
learn from multiple sources of information across multiple users, and
how best to adapt to changing environments and users' interests. This
tutorial will be part overview of known approaches, and part
discussion of open research issues.
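As a concrete, if greatly simplified, illustration of learning a user's reading interests from text, the sketch below implements a multinomial naive Bayes classifier in Python. This shows one standard approach; the documents, labels, and names are invented for illustration and do not come from the systems described in the tutorial.

    import math
    from collections import Counter

    # Toy training data: (document text, user judgment) pairs -- invented for illustration.
    docs = [
        ("nasa launches new space telescope", "interesting"),
        ("local team wins baseball game", "boring"),
        ("astronomers detect distant galaxy", "interesting"),
        ("stock market closes slightly lower", "boring"),
    ]

    # Per-class word counts and class frequencies.
    word_counts = {"interesting": Counter(), "boring": Counter()}
    class_counts = Counter()
    for text, label in docs:
        class_counts[label] += 1
        word_counts[label].update(text.split())

    vocab = {w for counts in word_counts.values() for w in counts}

    def score(text, label):
        """Log P(label) plus the sum of log P(word | label), Laplace-smoothed."""
        logp = math.log(class_counts[label] / sum(class_counts.values()))
        total = sum(word_counts[label].values())
        for w in text.split():
            logp += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
        return logp

    new_doc = "telescope images reveal new galaxy"
    print(max(class_counts, key=lambda label: score(new_doc, label)))
    # -> interesting

Scaling this idea up to real newsreaders raises exactly the research problems listed above: unstructured text, many users, and interests that drift over time.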
Tutorial D: 4:00 to 7:00pm
Graphical Models, Neural Networks and Machine Learning Algorithms
Michael Jordan, MIT
What are the commonalities between graphical models, neural
networks and other network-based statistical and machine
learning methods? More importantly, what are the strengths
of the ideas developed thus far by the various research communities
that should be retained as we consider a more unified methodology?
I will present a tutorial on probabilistic learning systems
that aims at a unified perspective on network-based modeling.
I emphasize graphical models as providing the basic formalism,
but I will also emphasize the nonparametric and (particularly)
semiparametric methods characteristic of the neural network
literature. Examples discussed will include Bayesian belief
networks with logistic or noisy-OR nodes, hidden Markov models
(including several "intractable" variations of HMMs), mixture
models, probabilistic decision trees, Helmholtz machines and
variations on Kalman filters and Markov random fields. I will
provide a detailed discussion of algorithms for inference and
learning in these models, including exact probabilistic
calculations, sampling methods and mean field methods.
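As one concrete example from the list above, a noisy-OR node for a binary variable Y with binary parents X_1, ..., X_n is parameterized by per-parent "inhibition" probabilities q_i (a standard textbook formulation, not an excerpt from the tutorial notes):

    P(Y = 0 \mid X_1, \dots, X_n) = \prod_{i : X_i = 1} q_i,
    \qquad
    P(Y = 1 \mid X_1, \dots, X_n) = 1 - \prod_{i : X_i = 1} q_i

The appeal is that the conditional table is specified by n numbers rather than 2^n entries; structure of this kind is what inference and learning algorithms, exact or approximate, can try to exploit.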