"Regret-based Methods for Decision Making
and Preference Elicitation"
Preference elicitation is generally required when making or recommending
decisions on behalf of users whose utility function is not known with
certainty. Although one can engage in elicitation until a utility
function is perfectly known, in practice, this is infeasible. Thus
methods for decision making with imprecise utility functions are needed.
We propose the use of minimax regret as an appropriate decision
criterion in this circumstance, providing the means for determining
robust decisions. We overview recent techniques we have developed for
minimax regret computation in several different settings, with a focus
on methods that exploit graphical utility models. We also describe how
minimax regret can be used to drive the process of eliciting preferences.
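To make the criterion concrete, here is a small illustrative sketch (not code from the talk): utility uncertainty is represented by a finite set of feasible utility functions, and the minimax-regret decision is found by direct enumeration. The decision names and utility values are made up for illustration.

```python
# Minimax regret by enumeration: a toy sketch of the decision criterion.
# Decisions and utility hypotheses below are illustrative assumptions.

def regret(decision, utility, decisions):
    """Loss of `decision` relative to the best decision under `utility`."""
    best = max(utility[d] for d in decisions)
    return best - utility[decision]

def minimax_regret(decisions, feasible_utilities):
    """Decision minimizing worst-case regret over all feasible utilities."""
    def max_regret(d):
        return max(regret(d, u, decisions) for u in feasible_utilities)
    return min(decisions, key=max_regret)

decisions = ["a", "b", "c"]
feasible_utilities = [
    {"a": 10, "b": 6, "c": 7},   # one hypothesis about the user's utilities
    {"a": 2,  "b": 6, "c": 7},   # another, also consistent with elicitation
]
print(minimax_regret(decisions, feasible_utilities))
```

In the settings the talk addresses, the feasible set is typically a continuum (e.g. a polytope) of utility functions rather than a finite list, and graphical utility models are what make the inner maximization tractable; the enumeration above only serves to make the definition concrete.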
Craig Boutilier is a Professor and Chair of the Department of Computer
Science at the University of Toronto. He received his Ph.D. in Computer
Science from the University of Toronto in 1992, and worked as an
Assistant and Associate Professor at the University of British Columbia
from 1991 until his return to Toronto in 1999. Boutilier was a
consulting professor at Stanford University from 1998 to 2000, and has served
on the Technical Advisory Board of CombineNet, Inc. since 2001.
Boutilier's research interests have spanned a wide range of topics, from
knowledge representation, belief revision, default reasoning, and
philosophical logic, to probabilistic reasoning, decision making under
uncertainty, multiagent systems, and machine learning. His current
research efforts focus on various aspects of decision making under
uncertainty: Markov decision processes, game theory and multiagent
decision processes, economic models, reinforcement learning,
probabilistic inference and preference elicitation.
"Probabilistic Models for Identifying Regulation Networks:
Qualitative to Quantitative Models"
Microarray-based hybridization techniques allow one to simultaneously
measure the expression levels of thousands of genes.
Such measurements contain information about many different aspects of
gene regulation and function, and indeed this type of experiment has
become a central tool in biological research. A major computational
challenge is extracting new biological understanding from this wealth of data.
Our goal is to understand the regulatory processes that bring about
the observed expression patterns. This involves uncovering the
structure of the interactions between genes, the function of different
regulators, the mechanisms by which they influence their targets, and the
dynamics of the process. Answers to these questions can come at
different levels of details, depending on the available data, the
modeling assumptions, and prior knowledge. In my talk, I will
describe an ongoing project to use probabilistic graphical models,
such as Bayesian networks and extensions of them, to model and
reverse engineer regulatory networks from expression data.
I will explain the basic foundations of the approach, the choices made
in defining the modeling language and in learning models from
data, and methods to visualize and interpret the learned models to
extract additional biological insight. I will present a progression
of models that capture different aspects of gene regulation, and an
assessment of their performance on several large-scale yeast gene
expression datasets.
This is joint work with Dana Pe'er, Iftach Nachman, Aviv Regev, Eran
Segal, Micha Shapira, David Botstein, and Daphne Koller.
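As a minimal sketch of the learning step described above (with made-up data and gene names, not material from the talk), score-based Bayesian network learning evaluates candidate regulator (parent) sets for a target gene; here each candidate set is scored by BIC on discretized expression samples:

```python
# Scoring candidate regulator sets for one target gene with a BIC score,
# the core computation in score-based Bayesian network structure learning.
# Data, gene names, and discretization are illustrative assumptions.
import math
from collections import Counter
from itertools import combinations

def bic_score(data, target, parents):
    """BIC of `target` given `parents` over discretized samples (dicts)."""
    n = len(data)
    states = {row[target] for row in data}
    joint = Counter((tuple(row[p] for p in parents), row[target]) for row in data)
    marg = Counter(tuple(row[p] for p in parents) for row in data)
    # Maximum-likelihood log-likelihood: sum of counts times log conditional probs.
    loglik = sum(c * math.log(c / marg[cfg]) for (cfg, _), c in joint.items())
    n_params = len(marg) * (len(states) - 1)
    return loglik - 0.5 * n_params * math.log(n)

# Toy discretized expression samples in which gene "Y" tracks regulator "A".
data = [{"A": a, "B": b, "Y": a} for a in (0, 1) for b in (0, 1) for _ in range(5)]

candidates = ["A", "B"]
best = max((subset for r in range(len(candidates) + 1)
            for subset in combinations(candidates, r)),
           key=lambda ps: bic_score(data, "Y", ps))
print(best)   # highest-scoring parent set for "Y"
```

The models in the talk go well beyond this single-gene sketch (richer local distributions, shared regulatory modules, and extensions of plain Bayesian networks), but the score-and-search step has this shape.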
"Information, transfer, and semi-supervised learning"
"Identification and Separation of DNA Mixtures using Peak Area Information"
The lecture will describe and discuss how probabilistic expert systems can
be used to analyse forensic identification problems involving DNA
mixture traces using quantitative peak area information.
Peak area is modelled with conditional Gaussian distributions or with models
based on the gamma distribution. Such systems can be used not only for
ascertaining whether individuals whose profiles have been measured have
contributed to the mixture, but also for predicting the
DNA profiles of unknown contributors by separating the mixture
into its individual components. The potential of this
methodology is illustrated on case data examples and compared
with alternative approaches. The advantages are
that identification and separation issues can be handled in a unified way
within a single network model and the uncertainty associated with the
analysis is quantified. The lecture is almost entirely based upon joint work
with Robert Cowell and Julia Mortera.
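One way to picture the gamma-based peak area model is the following sketch, in which each allelic peak area is gamma-distributed with shape proportional to the amount of DNA carrying that allele. The parameterization, genotypes, mixture proportions, and numbers are illustrative assumptions, not the authors' actual model or casework data.

```python
# Illustrative gamma model for DNA-mixture peak areas: compare the
# likelihood of observed areas under two hypothesized two-person mixtures.
import math

def gamma_logpdf(x, shape, scale):
    """Log density of a Gamma(shape, scale) distribution at x > 0."""
    return ((shape - 1) * math.log(x) - x / scale
            - math.lgamma(shape) - shape * math.log(scale))

def log_likelihood(peak_areas, genotypes, fractions,
                   shape_per_copy=2.0, scale=100.0):
    """Log-likelihood of peak areas given a hypothesized mixture.

    genotypes: one (allele, allele) pair per contributor;
    fractions: each contributor's share of the DNA in the mixture.
    """
    total = 0.0
    for allele, area in peak_areas.items():
        # Effective DNA dose carrying this allele, summed over contributors.
        dose = sum(f * g.count(allele) for g, f in zip(genotypes, fractions))
        total += gamma_logpdf(area, shape_per_copy * dose, scale)
    return total

# Observed areas at one locus under two hypothesized mixtures (made up).
peaks = {"12": 400.0, "14": 180.0, "16": 220.0}
h1 = log_likelihood(peaks, [("12", "12"), ("14", "16")], [0.6, 0.4])
h2 = log_likelihood(peaks, [("12", "14"), ("12", "16")], [0.6, 0.4])
print(h1 - h2)   # log-likelihood ratio comparing the two hypotheses
```

In the probabilistic expert systems the lecture describes, such likelihood terms sit inside a single network model over the unknown genotypes, which is what allows identification and separation to be handled in a unified way.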
Steffen Lauritzen was educated in mathematical statistics at the University
of Copenhagen and held appointments there until 1981, when he moved to the
University of Aalborg to a position as Professor of Mathematics and
Statistics. Recently he took up a post as Professor of Statistics and Fellow of
Jesus College at the University of Oxford, UK. He is best known
for his work on graphical
models and their applications to probabilistic expert systems.
"Some intuitions about message passing"
I will give an intuitive perspective on message passing algorithms,
including loopy belief propagation, generalized belief propagation,
variational message passing, and expectation propagation. I will show
how to view them in a unified way, give a feel for where they work,
and when you should use one method over another. I will also discuss
when message passing is a good way to do inference at all, and what
are the main problems with it.
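As a concrete reference point (a sketch, not the talk's material), here is sum-product message passing on a tiny pairwise binary model. The graph and potentials are made up; on this 3-node chain the parallel schedule converges to the exact marginals, while on loopy graphs the same updates give only approximations.

```python
# Parallel sum-product (loopy) belief propagation on a pairwise binary model.
from math import prod

def normalize(vec):
    """Scale a vector of nonnegative weights to sum to one."""
    s = sum(vec)
    return [v / s for v in vec]

def belief_propagation(unary, pairwise, edges, iters=20):
    """Run parallel sum-product updates; return per-node marginal beliefs."""
    # msgs[(i, j)][x_j] is the message from node i to node j.
    msgs = {(i, j): [1.0, 1.0] for a, b in edges for i, j in ((a, b), (b, a))}
    for _ in range(iters):
        new = {}
        for (i, j) in msgs:
            # Pairwise potential as phi[x_i][x_j], transposing if stored (j, i).
            if (i, j) in pairwise:
                phi = pairwise[(i, j)]
            else:
                phi = [[pairwise[(j, i)][xj][xi] for xj in (0, 1)]
                       for xi in (0, 1)]
            # Unary potential times messages from all neighbours except j.
            inc = [unary[i][xi] * prod(m[xi] for (k, t), m in msgs.items()
                                       if t == i and k != j)
                   for xi in (0, 1)]
            new[(i, j)] = normalize([sum(inc[xi] * phi[xi][xj] for xi in (0, 1))
                                     for xj in (0, 1)])
        msgs = new
    return {i: normalize([unary[i][xi] * prod(m[xi] for (k, t), m in msgs.items()
                                              if t == i)
                          for xi in (0, 1)])
            for i in unary}

# A 3-node chain with attractive pairwise potentials (illustrative numbers).
unary = {0: [0.7, 0.3], 1: [0.5, 0.5], 2: [0.2, 0.8]}
pairwise = {(0, 1): [[2.0, 1.0], [1.0, 2.0]],
            (1, 2): [[2.0, 1.0], [1.0, 2.0]]}
edges = [(0, 1), (1, 2)]
print(belief_propagation(unary, pairwise, edges))
```

The other algorithms the talk surveys (generalized BP, variational message passing, expectation propagation) change what the messages represent and how they are combined, but share this pattern of local updates passed along edges.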
Thomas Minka is a researcher at Microsoft Research Cambridge. From
2001--2003 he was a visiting assistant professor in statistics at
Carnegie Mellon University. He earned the S.B., M.Eng., and Ph.D. in
Electrical Engineering and Computer Science at MIT.
Dr Minka has published papers on methods for Bayesian inference,
document image parsing, document retrieval, and image retrieval. He
won the best paper award at SIGIR'02 and Best Pattern Recognition
Paper of 1997. He introduced the Expectation Propagation algorithm in
his dissertation, entitled "A family of algorithms for approximate
Bayesian inference."