Lunch at ICAI: AI & the Public Sector in NL

This Lunch at ICAI session focuses on AI technology in the public sector. The Cultural AI Lab and the Civic AI Lab will each present their work and discuss recent developments and challenges in this field.

Programme 

12:00: Introduction to the Cultural AI Lab by Jacco van Ossenbruggen (VU)

12:05: Andrei Nesterov (CWI) about ‘Detecting and modelling contentious words in cultural heritage collections’

12:20: Introduction to the Civic AI Lab by Sennay Ghebreab (UvA)

12:25: Emma Beauxis-Aussalet (VU) about ‘Modelling and Explaining AI Error and Bias’

12:40: Discussion: what’s next for AI in the public sector?

13:00: End

 

Talk #1 by Andrei Nesterov

One of the key questions in the Cultural AI Lab is how AI can take the complexity of cultural contexts into account and show different perspectives on digitised artefacts in heritage collections. In short: how can AI be “culturally aware”? As a first step towards answering this question, the lab investigates statistical and symbolic approaches to dealing with outmoded, inaccurate, and offensive words (referred to as contentious words) in heritage collections and their descriptions. For example, in which contexts might using the word ‘exotic’ be problematic?
During the ICAI Lunch, they will talk about how they use crowdsourcing and domain-expert knowledge in two parallel but interconnected projects: (1) detecting contentious terms in cultural heritage collections (historical newspaper articles) and (2) modelling the usage of such terms in different contexts.
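
As a rough illustration of what the detection task involves (not the lab’s actual pipeline, which combines statistical and symbolic approaches), a simple lexicon lookup over a text could look like the sketch below; the term list and text snippet are hypothetical:

```python
# Minimal illustration of lexicon-based contentious-term detection.
# The term list and the text snippet are invented examples, not data
# or methods from the Cultural AI Lab.

import re

CONTENTIOUS_TERMS = {"exotic", "primitive"}  # hypothetical lexicon

def find_contentious(text: str) -> list[tuple[str, int]]:
    """Return (term, character offset) pairs for contentious terms found in text."""
    hits = []
    for match in re.finditer(r"\w+", text.lower()):
        if match.group() in CONTENTIOUS_TERMS:
            hits.append((match.group(), match.start()))
    return hits

print(find_contentious("An exotic mask from the collection."))
# [('exotic', 3)]
```

A plain lookup like this ignores context, which is exactly why the lab’s second project models in which contexts such terms are problematic.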

Talk #2 by Emma Beauxis-Aussalet

Systematic error discrepancies between populations create bias, discrimination, and fairness issues. Such issues are widely discussed in high-level guidelines for ethical computing, yet scalable methods for managing them in practice are still lacking. One line of research at the Civic AI Lab aims to bridge this gap with algorithm-agnostic methods that i) model the patterns of error; ii) explain the features underlying those patterns; iii) account for human bias in ground-truth data; and iv) scale to the broad range of systems used by public institutions. The goal is to develop comprehensive sets of methods that support transparency, non-discrimination, and due process, and that inform policies for responsible AI.
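
To make the idea of error discrepancies concrete, one simple, algorithm-agnostic check is to compare an error rate, such as the false positive rate, across population groups. The sketch below uses invented data and is only an illustration, not the Civic AI Lab’s method:

```python
# Minimal sketch of an algorithm-agnostic error check: comparing
# false positive rates across two population groups. All data below
# is invented for illustration only.

import numpy as np

def false_positive_rate(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """FPR = false positives / all actual negatives."""
    negatives = y_true == 0
    return float((y_pred[negatives] == 1).mean())

# Hypothetical labels, predictions, and group membership
y_true = np.array([0, 0, 1, 0, 1, 0, 0, 1])
y_pred = np.array([1, 0, 1, 0, 1, 1, 0, 0])
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

for g in ("A", "B"):
    mask = group == g
    print(g, false_positive_rate(y_true[mask], y_pred[mask]))

# A systematic gap between the groups' error rates signals a potential
# fairness issue that warrants explanation.
```

Because the check only needs labels, predictions, and group membership, it applies to any classifier regardless of its internals, which is the sense in which such methods are algorithm-agnostic.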