Graphical Modeling and Statistical Inference

CIS Colloquium, Feb 17, 2015, 11:00AM – 12:00PM, SERC 306

Dr. Tony Jebara, Columbia University

Abstract:
Graphical models and Markov random fields are fundamental tools in machine learning with broad application in areas such as computer vision, speech recognition, and computational biology. We can cast social networks like Facebook and Epinions as large graphical models where nodes are individuals connected by friendship edges. We can also cast power networks as large graphical models where edges indicate electrical cables between transformer nodes. There are two canonical forms of inference in graphical modeling: maximum a posteriori (MAP) inference, where the most likely configuration is returned; and marginal inference, where a probability distribution over a subset of variables is returned. Both problems are NP-hard when the graphical models have cycles and/or large tree-width. Since learning often reduces to MAP or marginal inference, it is also NP-hard to learn general graphical models. How can we tractably solve these problems in practice? Heuristics like sum-product and max-product propagation often work reasonably well, but formal guarantees have so far been elusive. I will provide formal tractability guarantees by compiling MAP estimation and Bethe marginal inference into a classical problem in combinatorics: finding the maximum weight stable set (MWSS) in a graph. Whenever a graph is perfect, the MWSS problem is solvable in polynomial time. Example problems such as matchings, b-matchings, and submodular models are shown to yield perfect MWSS instances. By making connections to perfect graph theory, new graphical models are shown to compile into perfect graphs and therefore admit tractable inference. Applications include friendship link recommendation, label prediction in social networks, influence estimation in social networks, and failure prioritization of transformers in power networks.
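
To make the two inference tasks concrete, here is a minimal Python sketch (not from the talk; the triangle graph, potentials, and values are illustrative assumptions) that computes both by brute-force enumeration on a three-node binary Markov random field containing a cycle.

# A minimal sketch of the two canonical inference tasks on a tiny pairwise MRF.
# The graph, unary terms, and coupling value are illustrative assumptions.
import itertools
import math

# Nodes 0-2 form a triangle (a cycle); each edge adds a log-potential bonus
# when its two endpoints agree.
edges = [(0, 1), (1, 2), (0, 2)]
unary = [0.5, -0.2, 0.1]   # log-potential for x_i = 1 (vs. 0 for x_i = 0)
coupling = 0.8             # log-potential bonus when x_i == x_j

def score(x):
    """Unnormalized log-probability of the joint configuration x."""
    s = sum(unary[i] * x[i] for i in range(3))
    s += sum(coupling for i, j in edges if x[i] == x[j])
    return s

configs = list(itertools.product([0, 1], repeat=3))

# MAP inference: return the single most likely configuration.
x_map = max(configs, key=score)

# Marginal inference: return P(x_0 = 1) by summing over all configurations.
Z = sum(math.exp(score(x)) for x in configs)
p_x0 = sum(math.exp(score(x)) for x in configs if x[0] == 1) / Z

print("MAP configuration:", x_map)
print("P(x_0 = 1) = %.4f" % p_x0)

Enumeration like this is exponential in the number of variables, which is exactly why the reduction described in the talk, compiling inference into a maximum weight stable set problem on a perfect graph where polynomial-time algorithms exist, is attractive.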

Bio:
Tony Jebara is Associate Professor of Computer Science at Columbia University. He chairs the Center on Foundations of Data Science and directs the Columbia Machine Learning Laboratory. His research intersects computer science and statistics to develop new frameworks for learning from data, with applications in social networks, spatio-temporal data, vision and text. Jebara has founded and advised several startups including Sense Networks (acquired by yp.com), AchieveMint, Agolo, Ufora, and Bookt (acquired by RealPage, NASDAQ: RP). He has published over 100 peer-reviewed papers in conferences, workshops and journals including NIPS, ICML, UAI, COLT, JMLR, CVPR, ICCV, and AISTATS. He is the author of the book Machine Learning: Discriminative and Generative and co-inventor on multiple patents in vision, learning and spatio-temporal modeling.

In 2004, Jebara received a CAREER Award from the National Science Foundation. His work was recognized with a best paper award at the 26th International Conference on Machine Learning, a best student paper award at the 20th International Conference on Machine Learning, and an outstanding contribution award from the Pattern Recognition Society in 2001. Jebara's research has been featured on television (ABC, BBC, New York One, TechTV, etc.) as well as in the popular press (New York Times, Slashdot, Wired, Businessweek, IEEE Spectrum, etc.). He obtained his PhD in 2002 from MIT. Esquire magazine named him one of their Best and Brightest of 2008. Jebara has taught machine learning to well over 1,000 students in in-person classes. He was a Program Chair for the 31st International Conference on Machine Learning (ICML) in 2014, an Action Editor for the Journal of Machine Learning Research from 2009 to 2013, an Associate Editor of Machine Learning from 2007 to 2011, and an Associate Editor of IEEE Transactions on Pattern Analysis and Machine Intelligence from 2010 to 2012. In 2006, he co-founded the NYAS Machine Learning Symposium and has served on its steering committee since then.