Explaining AdaBoost

CIS Distinguished Lecture Series, Nov 05, 2012, 11:00AM – 12:00PM, Tech Center 111

Robert Schapire, Princeton University

Abstract:
Boosting is an approach to machine learning based on the idea of creating a highly accurate prediction rule by combining many relatively weak and inaccurate rules. AdaBoost, the first practical boosting algorithm, has enjoyed empirical success in a number of fields, and a remarkably rich theory has evolved to try to understand how and why it works, and under what conditions. At various times in its history, AdaBoost has been the subject of controversy for the mystery and paradox it seems to present with regard to this question. This talk will give a high-level review and comparison of the varied attempts that have been made to understand and “explain” AdaBoost. These approaches (time permitting) will include: direct application of the classic theory of Vapnik and Chervonenkis; the margins theory; AdaBoost as a loss-minimization algorithm (possibly implicitly regularized); and AdaBoost as a universally consistent method. Both strengths and weaknesses of each of these will be discussed.

Bio:
Robert Schapire received his ScB in math and computer science from Brown University in 1986, and his SM (1988) and PhD (1991) from MIT under the supervision of Ronald Rivest. After a short post-doc at Harvard, he joined the technical staff at AT&T Labs (formerly AT&T Bell Laboratories) in 1991, where he remained for eleven years. At the end of 2002, he became a Professor of Computer Science at Princeton University. His awards include the 1991 ACM Doctoral Dissertation Award, the 2003 Gödel Prize, and the 2004 Kanellakis Theory and Practice Award (the latter two shared with Yoav Freund). His main research interest is in theoretical and applied machine learning.