ADS Webinar | Responsible and Ethical use of AI

In this webinar, we explored how we can use AI responsibly and ethically, particularly in initiatives that deal with sensitive data or have a significant impact on people.

Watch the full webinar.

Richard Benjamins is Chief AI & Data Strategist at Telefonica. He is among the 100 most influential people in data-driven business (DataIQ 100, 2018). He is also co-founder and Vice President of the Spanish observatory for ethical and social impacts of AI (OdiseIA). He was Group Chief Data Officer at AXA (Insurance) and before that held executive positions in Big Data and Analytics at Telefonica for 10 years. He is the founder of Telefonica’s Big Data for Social Good department, a member of the B2G data-sharing Expert Group of the EC, and a frequent speaker at Artificial Intelligence events. He holds a PhD in Cognitive Science, has published over 100 scientific articles, and is the author of the books “The myth of the algorithm: tales and truths of artificial intelligence” (Spanish) and “A Data-Driven Company” (forthcoming). He is a strategic advisor to several start-ups, including BigML, Focus360 and Nexus Clips.

In the past few years, several large companies have published ethical principles for AI. National governments, the European Commission, and inter-governmental organisations have come up with requirements to ensure the good use of AI. However, individual organisations that want to join this effort are faced with many unsolved questions. In this talk, Richard proposed guidelines for organisations that are committed to the responsible use of AI but lack the required knowledge and experience. The guidelines consist of two parts: i) helping organisations decide what principles to adopt, and ii) a methodology for implementing those principles in organisational processes.

Aysenur Bilgin is a Data Scientist with an interdisciplinary interest in the design, development and deployment of data-driven solutions. She has a background in computer engineering and adaptive intelligent systems, and holds a PhD in computer science from the University of Essex (2015). She spent 3 years as a postdoc at CWI, where she conducted Responsible AI research in collaboration with Elsevier and the National Library of the Netherlands. Her recent work at VIQTOR DAVIS focuses on tool development for operationalising Responsible AI through quantifying and benchmarking responsible data science practices.

Responsible AI is an umbrella term that brings together a variety of principles and practices for the purpose of making AI understandable, inclusive, safe, acceptable and equally beneficial for all people. Despite the growing awareness and the proliferation of guidelines and technical approaches that aim to operationalise Responsible AI, it remains a challenge to translate these into real-world best practices and to assess such initiatives at scale. In response, Aysenur spoke about the assessment tool that she and her team have designed and developed, which not only quantifies and benchmarks responsibility in AI initiatives but also empowers data (science) teams to identify and perform actionable improvements. She presented the four pillars of Responsible AI, namely Fairness, Accountability, Confidentiality and Transparency (FACT). Building on these, she introduced AI FACT CHECKER, an assessment tool for Responsible AI, and expanded on its current and future capabilities.
