AI Tech Week: Research

Interested in state-of-the-art research in Amsterdam? Four ICAI Amsterdam Labs will talk about the research they are undertaking in collaboration with academia, industry and government. We cover it all in this afternoon event about AI research.

It’s time for the first ever AI Tech Week! Perspectives from research, entrepreneurship, talent and applications will all be covered during four different afternoon events. Will we see you there?


16:30 Welcome & Introduction
16:35 Talk #1: Algorithmic Fairness in the Wild: A Conversation
16:55 Q&A
17:05 Talk #2: ICAI Elsevier Discovery Lab: Driving Scientific Discovery through Machine Intelligence
17:25 Q&A
17:35 Coffee Break
17:40 Talk #3: Real-World Learning
18:00 Q&A
18:10 Talk #4: Learning from Controlled Sources
18:30 Q&A
18:40 Closing
18:45 End!

Chair: Maarten de Rijke
Professor of Artificial Intelligence and Information Retrieval

Talk #1 by Hinda Haned
Algorithmic Fairness in the Wild: A Conversation:
This talk is a conversation between an industry practitioner (Hinda Haned) and a fair AI PhD researcher (Sara Altamirano) around the question: how can we make AI-driven systems fair in practice? As the AI community devises ever more debiasing and algorithmic fixes, we revisit, from the data scientist’s perspective, what fair AI and debiasing actually entail in day-to-day work. Through use cases, we will illustrate the challenges of ensuring fair AI in practice and show why this issue cannot be fixed through the algorithmic lens alone.

Talk #2 by Michael Cochez
ICAI Elsevier Discovery Lab: Driving Scientific Discovery through Machine Intelligence:
At the Discovery Lab, we study technology, infrastructure and methods for developing intelligent services for researchers, focusing on finding and interpreting scientific literature, formulating hypotheses, and interpreting data. The lab operates at the crossroads of Knowledge Representation, Machine Learning and Natural Language Processing, advancing the ability to construct, use and study large-scale research knowledge graphs that integrate knowledge across heterogeneous scientific content and data.
This allows for a deeper, richer use of content and data across a larger span of domains than has been possible thus far, and provides better recommendations, more contextual question answering, more successful query construction, and the automatic generation of hypotheses. In this talk we will discuss the research we do and how its outcomes are applied at Elsevier.

Talk #3 by Cees Snoek
Real-World Learning:
Progress in artificial intelligence has been astonishing in the past decade. Cars driving themselves on highways, machines beating Go masters, and cameras categorizing images in a pixel-precise fashion are now commonplace, thanks to data-and-label supervised deep learning. Despite these impressive advances, it is becoming increasingly clear that deep learning networks are heavily biased towards their training conditions and become brittle when deployed in real-world situations that differ from those perceived during learning in terms of data, labels and objectives. Simply scaling up along all dimensions at training time seems a dead end, not only because of the compute, storage and ethical expenses, but especially because humans are able to generalize robustly in a data-efficient fashion. Several learning paradigms have been proposed to address the limitations of deep learning under the i.i.d. assumption. Shifting data distributions are attacked by domain adaptation and domain generalization, changing label vocabularies are the topic of interest in zero-shot and open-world learning, while varying objectives are covered by meta-learning and continual learning regimes. However, there is as yet no learning methodology that can dynamically learn to generalize and adapt across domains, labels and tasks simultaneously, and do so in a data-efficient and fair fashion. This is the ambitious long-term goal of ‘real-world learning’, and I will present some initial works from my lab towards achieving it.

Talk #4 by Onno Zoeter
Learning from Controlled Sources:
The classic supervised learning problem that is taught in machine learning courses, and that is the subject of many machine learning competitions, is often too narrow to reflect the problems we face in practice. Historical datasets typically reflect a combination of a source of randomness (for example, customers making browsing and buying decisions) and a controlling mechanism such as a ranker or highlighting heuristics (badges, promotions, etc.). Or there might be a selection mechanism (such as the decision not to accept transactions with high fraud risk) that influences the training data. A straightforward regression approach would not be able to disentangle the influence of the controller from the phenomenon under study. As a result, it risks making incorrect predictions when the controller is changed.
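The pitfall described above can be illustrated with a minimal toy simulation (my own hypothetical sketch, not taken from the talk): a linear relationship is recovered correctly from uncontrolled data, but fitting the same regression on data filtered by a controller, here a threshold rule reminiscent of a fraud filter, yields an attenuated slope, because the controller's selection is entangled with the phenomenon being modelled.

```python
import numpy as np

# Hypothetical toy setup: outcome y depends on feature x with true slope 2.
rng = np.random.default_rng(0)
n = 100_000
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(size=n)

# Regression on the full, uncontrolled data recovers the true slope.
slope_full, _ = np.polyfit(x, y, 1)

# A controlling mechanism (think "reject high-risk transactions")
# only admits records whose outcome falls below a threshold, so the
# historical dataset is a selected subsample.
accepted = y < 1.0

# The same regression on the selected data gives a flatter slope:
# the naive fit cannot disentangle the controller from the phenomenon,
# and its predictions would be wrong once the controller changes.
slope_selected, _ = np.polyfit(x[accepted], y[accepted], 1)
```

Because the selection here depends on the outcome itself, the bias does not vanish with more data; correcting for it requires modelling the controller explicitly rather than fitting a larger regression.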

Register now!