ICAI MMLL Causality-inspired ML: what can causality do for ML? The domain adaptation case.

The ICAI Mercury Machine Learning Lab is a collaboration between the University of Amsterdam, Delft University of Technology, and Booking.com. The lab focuses on the development and application of artificial intelligence in the domain of online travel booking and recommendation systems.

The Mercury Machine Learning Lab combines academic expertise in causality, information retrieval, natural language processing, and reinforcement learning with the unique expertise, experience, and availability of big data at Booking.com.


16:00 Introduction by Prof. Frans A. Oliehoek (TUDelft)
16:05 – 16:50 Dr. Sara Magliacane (UvA): “Causality-inspired ML: what can causality do for ML? The domain adaptation case”
16:50 Q&A
17:05 End


More info about this event

Causality-inspired ML: what can causality do for ML? The domain adaptation case.

Applying machine learning to real-world problems often requires methods that are robust to heterogeneity, data that are missing not at random or corrupted, selection bias, non-i.i.d. samples, and so on, and that can generalize across different domains. Moreover, many tasks inherently try to answer causal questions and gather actionable insights, for which correlations are usually not enough. Several of these issues are addressed in the rich causal inference literature. On the other hand, classical causal inference methods often require either complete knowledge of the causal graph or enough experimental data (interventions) to estimate it accurately.

Recently, a new line of research has focused on causality-inspired machine learning, i.e. on applying ideas from causal inference to machine learning methods without necessarily knowing, or even trying to estimate, the complete causal graph. In this talk, I will present an example of this line of research in the unsupervised domain adaptation setting, in which we have labelled data in a set of source domains and unlabelled data in a target domain (“zero-shot”) for which we want to predict the labels. In particular, under certain assumptions, our approach selects a set of provably “stable” features (a separating set) for which the generalization error can be bounded, even in the case of arbitrarily large distribution shifts. As opposed to other works, it also exploits the information in the unlabelled target data, allowing for some shifts that are unseen in the source domains. While using ideas from causal inference, our method never aims at reconstructing the causal graph or even its Markov equivalence class, showing that causal inference ideas can help machine learning even in this more relaxed setting.
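To make the idea of “stable” features concrete, here is a toy sketch (not the talk's actual algorithm, which relies on conditional-independence tests and the unlabelled target data). In the simulated data below, feature `x1` is a cause of `y` with a fixed mechanism across domains, while `x2` is an effect of `y` whose mechanism shifts per domain; a feature is kept only if its relationship with the label is invariant across the source domains. All names and the invariance check are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_domain(n, spurious_coef):
    # x1 causes y with a mechanism shared by all domains;
    # x2 is an effect of y whose mechanism changes per domain.
    x1 = rng.normal(size=n)
    y = 2.0 * x1 + 0.1 * rng.normal(size=n)
    x2 = spurious_coef * y + 0.1 * rng.normal(size=n)
    return np.column_stack([x1, x2]), y

def slope(x, y):
    # univariate least-squares slope of y on x
    return float(np.dot(x, y) / np.dot(x, x))

# two source domains whose spurious mechanism differs
(Xa, ya), (Xb, yb) = make_domain(1000, 1.0), make_domain(1000, -1.0)

# keep features whose slope is (approximately) invariant across
# the source domains -- a crude stand-in for the conditional
# independence tests a real stable-feature method would use
stable = [j for j in range(Xa.shape[1])
          if abs(slope(Xa[:, j], ya) - slope(Xb[:, j], yb)) < 0.1]
print(stable)  # → [0]: the stable cause x1 survives, x2 does not
```

A predictor fit only on the surviving feature would keep working under the shifts in `x2`, which is the intuition behind bounding the generalization error via a stable separating set.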

About Dr. Sara Magliacane
Dr. Sara Magliacane is an assistant professor at the University of Amsterdam in the INDElab, working on causality. She received her PhD from the VU University in the Knowledge Representation and Reasoning group and did a postdoc at UvA in the Causality group at AMLab. Until recently, she was a Research Staff Member at the MIT-IBM Watson AI Lab in Cambridge, Massachusetts, and before that a postdoc at IBM Research in Yorktown Heights, NY.