Solving Association Problems with Convex Co-embedding
Halls department, Hall 3
Date and Time
Wednesday, 27 December 2017
17:00 - 18:00
Co-embedding is the process of mapping elements from multiple sets into a common latent space, where element-wise associations can be inferred from the geometric proximity of the embeddings. Such an approach underlies the state of the art in link prediction, relation learning, multi-label tagging, relevance retrieval, and ranking. In this talk, I first present a unifying view of solving association problems with co-embedding, covering both alignment-based and distance-based models. Although current approaches rely on local training methods applied to non-convex formulations, I demonstrate how general convex formulations can be achieved for co-embedding. Next, the connection between metric learning and co-embedding is investigated: I show that heterogeneous metric learning can be cast as distance-based co-embedding, and I propose a scalable algorithm for solving the training problem globally. I also investigate the relation between the standard non-convex training formulation and the proposed convex reformulation of heterogeneous metric learning. Finally, a constrained form of co-embedding is presented for structured output prediction. A key bottleneck in structured output prediction is the need for inference during both training and testing, usually requiring some form of dynamic programming. Rather than using approximate inference or tailoring a specialized inference method to a particular structure, I instead pre-compile prediction constraints directly into the learned representation.
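To make the two model families in the abstract concrete, here is a minimal sketch of how co-embedding scores associations. The data, set names, and dimensions are hypothetical illustrations, not taken from the talk; the point is only the contrast between distance-based scoring (negative squared distance, larger is a stronger association) and alignment-based scoring (inner product).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: embed 4 queries and 5 documents in a shared
# 3-dimensional latent space. In practice these embeddings would be
# learned from observed associations.
n_queries, n_docs, dim = 4, 5, 3
U = rng.normal(size=(n_queries, dim))  # embeddings of set 1 (queries)
V = rng.normal(size=(n_docs, dim))     # embeddings of set 2 (documents)

def distance_scores(U, V):
    """Distance-based model: score each (i, j) pair by the negative
    squared Euclidean distance between their embeddings."""
    # ||u - v||^2 = ||u||^2 + ||v||^2 - 2 u.v, for all pairs at once
    sq = (U**2).sum(1)[:, None] + (V**2).sum(1)[None, :] - 2 * U @ V.T
    return -sq

def alignment_scores(U, V):
    """Alignment-based model: score each (i, j) pair by inner product."""
    return U @ V.T

S = distance_scores(U, V)          # shape (n_queries, n_docs)
best_doc = S.argmax(axis=1)        # top-ranked document per query
```

Under the distance-based model, predicting an association reduces to a nearest-neighbor lookup in the shared space, which is the geometric picture the unifying view in the talk builds on.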
Farzaneh Mirzazadeh received her Ph.D. in machine learning in 2017 from the Department of Computing Science at the University of Alberta, Canada. She received an M.Sc. from the same department in 2010, and an earlier M.Sc. from the Department of Computer Engineering at Sharif University of Technology in 2008, with a focus on artificial intelligence. Her Ph.D. thesis, on solving association problems with convex co-embedding, was nominated for the outstanding Ph.D. dissertation award by the examining committee. She is the recipient of several scholarships and awards, including the University of Alberta President's Doctoral Prize of Distinction, the Natural Sciences and Engineering Research Council of Canada (NSERC) Postgraduate Scholarship, the Queen Elizabeth II Graduate Scholarship, and the Computing Science GPA Award. Her work has been published in top peer-reviewed machine learning conferences such as NIPS and AAAI. She has served as a program committee member for the AISTATS and AAAI 2017-2018 conferences and as a reviewer for journals and conferences including IEEE Transactions on Knowledge and Data Engineering, IJCAI, and COLT. She taught machine learning as a lecturer at the University of California, Santa Cruz, in January 2017. Starting January 2018, she joins the MIT-IBM Watson AI Lab in Cambridge, MA, to pursue research in artificial intelligence.