Convex Techniques for Learning with Latent Variables

CIS Colloquium, Mar 05, 2008, 03:00PM – 04:00PM, Wachman 447

Dr. Yuhong Guo, University of Alberta

In this talk, I will present my recent work: a novel approach to training probabilistic models in the presence of hidden variables. In particular, I will discuss a convex relaxation of Viterbi EM, a variant of the standard expectation-maximization (EM) algorithm prevalent in the machine learning and statistics literature. I first present a cautionary result: any convex relaxation of latent-variable training must yield trivial results if it retains any direct dependence on the missing values. I then demonstrate how this obstacle can be bypassed by using equivalence relations. In particular, I will demonstrate new algorithms for estimating exponential conditional models that require only equivalence-relation information over the variable values. This reformulation leads to an exact expression for EM variants in a wide range of problems. Finally, a semidefinite relaxation can be derived that yields global training by eliminating local minima.
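As rough intuition for why equivalence relations can sidestep direct dependence on hidden values, consider the following minimal sketch (a hypothetical illustration, not code from the talk): an equivalence-relation matrix over hidden labels records only which items share a label, so it is invariant to how the label values themselves are named.

```python
def equivalence_matrix(labels):
    """M[i][j] = 1 iff items i and j carry the same hidden label.

    The matrix depends only on the induced partition, not on the
    particular label values, so permuting label names leaves it unchanged.
    """
    n = len(labels)
    return [[1 if labels[i] == labels[j] else 0 for j in range(n)]
            for i in range(n)]

y1 = [0, 0, 1, 2, 1]  # one hidden labeling
y2 = [2, 2, 0, 1, 0]  # the same clustering with label names permuted
assert equivalence_matrix(y1) == equivalence_matrix(y2)
```

Because the matrix is symmetric and permutation-invariant, an objective written in terms of it carries no direct dependence on the missing label values, which is the property a nontrivial convex relaxation needs.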

This work is ongoing and is part of my larger research program on learning with latent variables. Time permitting, I will also mention my most recent results on dimensionality reduction using convex techniques.

During my PhD studies, I have also published papers on Bayesian network structure learning, large-margin training for probabilistic models, active learning, and ensemble learning. A particular application interest of mine has been bioinformatics, where I have applied some of my methods to inferring gene regulatory networks from time-series expression data. This motivates a future research agenda focused on fundamental machine learning research with applications to problems in bioinformatics.

Yuhong Guo received her B.S. and M.Eng. in Computer Science from Nankai University, China, in 1998 and 2001, respectively. She received her Ph.D. in Fall 2007 from the University of Alberta, Canada. Since then, she has been a postdoctoral fellow at the University of Alberta. She has published extensively in machine learning, and recently in bioinformatics, and she won the Distinguished Paper Award at IJCAI 2005.