ADS Webinar | Responsible and Ethical use of AI

In this webinar, we explored how we can use AI responsibly and ethically, particularly in initiatives that deal with sensitive data or have a significant impact on people.

Watch the full webinar.

Richard Benjamins is Chief AI & Data Strategist at Telefonica. He is among the 100 most influential people in data-driven business (DataIQ 100, 2018) and is co-founder and Vice President of OdiseIA, the Spanish observatory for the ethical and social impacts of AI. He was Group Chief Data Officer at AXA (Insurance) and before that held executive positions in Big Data and Analytics at Telefonica for 10 years. He is the founder of Telefonica’s Big Data for Social Good department, a member of the European Commission’s B2G data-sharing Expert Group, and a frequent speaker at Artificial Intelligence events. He holds a PhD in Cognitive Science, has published over 100 scientific articles, and is the author of the books “The myth of the algorithm: tales and truths of artificial intelligence” (in Spanish) and “A Data-Driven Company” (forthcoming). He is a strategic advisor to several start-ups, including BigML, Focus360 and Nexus Clips.

In the past few years, several large companies have published ethical principles for AI. National governments, the European Commission, and intergovernmental organisations have come up with requirements to ensure the good use of AI. However, individual organisations that want to join this effort are faced with many unsolved questions. In this talk, Richard proposed guidelines for organisations that are committed to the responsible use of AI but lack the required knowledge and experience. The guidelines consist of two parts: i) helping organisations decide which principles to adopt, and ii) a methodology for implementing those principles in organisational processes.

Aysenur Bilgin is a Data Scientist with an interdisciplinary interest in the design, development and deployment of data-driven solutions. She has a background in computer engineering and adaptive intelligent systems, and holds a PhD in computer science from the University of Essex (2015). She spent three years as a postdoc at CWI, where she conducted Responsible AI research in collaboration with Elsevier and the National Library of the Netherlands. Her recent work at VIQTOR DAVIS focuses on tool development for operationalising Responsible AI through quantifying and benchmarking responsible data science practices.

Responsible AI is an umbrella term that brings together a variety of principles and practices aimed at making AI understandable, inclusive, safe, acceptable and equally beneficial for all people. Despite growing awareness and the proliferation of guidelines as well as technical approaches that aim to operationalise Responsible AI, it remains a challenge to translate these into real-world best practices and to assess such initiatives on a large scale. In response, Aysenur spoke about the assessment tool that she and her colleagues designed and developed, which not only quantifies and benchmarks responsibility in AI initiatives but also empowers data (science) teams to identify and perform actionable improvements. She presented the four pillars of Responsible AI, namely Fairness, Accountability, Confidentiality and Transparency (FACT). Building on these, she introduced AI FACT CHECKER, an assessment tool for Responsible AI, and expanded on its current and future capabilities.
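
To give a concrete feel for what quantifying and benchmarking responsibility across the four FACT pillars might look like, the sketch below scores a hypothetical AI initiative per pillar and flags pillars that fall below a threshold as candidates for improvement. This is purely illustrative and is not the actual AI FACT CHECKER; the class, the equal weighting, and the threshold are assumptions made for the example.

```python
# Hypothetical sketch (not the actual AI FACT CHECKER): score an AI
# initiative on the four FACT pillars and aggregate to a single benchmark.
from dataclasses import dataclass

# The four pillars of Responsible AI mentioned in the talk.
PILLARS = ["fairness", "accountability", "confidentiality", "transparency"]


@dataclass
class Assessment:
    """Assessed scores (0 to 1) per FACT pillar for one AI initiative."""
    fairness: float
    accountability: float
    confidentiality: float
    transparency: float

    def overall(self) -> float:
        """Unweighted mean across the four pillars (an illustrative choice)."""
        return sum(getattr(self, p) for p in PILLARS) / len(PILLARS)

    def improvement_actions(self, threshold: float = 0.6) -> list:
        """Flag pillars scoring below the threshold as actionable improvements."""
        return [p for p in PILLARS if getattr(self, p) < threshold]


if __name__ == "__main__":
    initiative = Assessment(fairness=0.55, accountability=0.80,
                            confidentiality=0.90, transparency=0.50)
    print(f"Overall responsibility score: {initiative.overall():.2f}")
    print("Pillars needing attention:", initiative.improvement_actions())
```

In practice, such scores would come from a much richer questionnaire and be benchmarked against other initiatives rather than computed from four hand-set numbers.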

Read More

  • CWI spin-off DuckDB Labs partners with MotherDuck, which raises $47.5 million

    CWI spin-off company DuckDB Labs helped create the startup MotherDuck, which aims to connect DuckDB to the cloud. MotherDuck sports some big names: its CEO is Jordan Tigani, a founding engineer of BigQuery, Google’s fully managed data analysis platform. A big part of the $47.5 million in funding comes from Andreessen Horowitz, a prominent venture capital firm specialising in technology startups.

  • The start of a new international research community: HHAI Conference 2022

    Breaking ground as the first international conference on Hybrid Human Artificial Intelligence, HHAI22 held its first-ever in-person meeting in Amsterdam in the summer of 2022, establishing the beginnings of an international research community. ADS contributed to the conference by hosting a Meetup around the topic of Hybrid Intelligence.

  • Lectoral speech by Nanda Piersma

    There are legal rules and ethical frameworks, but little or no practical guidance on responsible design. In her inaugural lecture “System error, please restart”, Nanda Piersma discusses what ‘responsible’ means and how we can carry out the (further) development of IT systems in such a way that they also earn our trust.