
Responsible Artificial Intelligence in Practice

  • Pascal Wiggers
AI has created a wealth of opportunities for innovation across many domains. However, along with these opportunities come unexpected and sometimes unwanted consequences. For example, algorithms can discriminate or lead to unfair treatment of groups of people. This calls for a responsible approach to AI.

The ECAAI Responsible AI Lab

The need for a responsible approach to AI has been recognized worldwide, as reflected by the many manifestos and ethical guidelines that have been developed in the last few years [1]. The European Union, for example, calls for Trustworthy AI [2] and defines a number of key requirements, such as the need for human agency and oversight, transparency and accountability. But what does this mean in practice? How can practitioners who want to create trustworthy AI do so? That is the question driving the research of the Responsible AI Lab of the Amsterdam University of Applied Sciences (AUAS).

The Responsible AI Lab is one of seven labs established by the Expertise Centre of Applied AI (ECAAI). The lab researches applied, responsible AI that empowers people and benefits society with a particular focus on the creative industries and the public domain.

Understanding AI in context

Responsible AI means different things to different people. For us, responsible AI starts with the realization that AI systems impact people’s lives in both expected and unexpected ways. This is true for all technology, but what makes AI different is that a system can learn the rules that govern its behaviour and that this behaviour may change over time. In addition, many AI systems have a certain amount of agency to reach conclusions or take actions without human intervention.

To better understand this impact, one needs to study an AI system in context and through experiment. In addition to an understanding of the technology, this requires an understanding of the application domain and the involvement of the (future) users of the technology.

AI is not neutral

There has been much attention to bias, unfairness and discrimination by AI systems; a recent example is the problem with face recognition on Twitter and Zoom. What we see here is that data mirrors culture, including prejudices, conscious and unconscious biases and power structures, and AI systems pick up these cultural biases. So, bias is a fact of life, not just an artifact of some data set.

The same holds for another form of bias, or rather subjectivity, that influences the impact an AI system may have: the many decisions, large and small, taken in the design and development process of such a system. Imagine, for example, a recommendation system for products or services, such as flights. The order in which the results are shown may influence the number of clicks each receives and, in turn, the profits of the competing vendors. Any choice made during the design process will have an effect, however small. Ideally, designers and developers reflect upon such choices during development. That in itself is difficult enough, but for AI systems that learn part of their behaviour from data, this is even more challenging.
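To make this concrete, the following sketch simulates the flight-results example. The vendors, per-position click rates and number of users are illustrative assumptions, but they show how presentation order alone can shift clicks, and thus revenue, between equally relevant offers.

```python
# Hypothetical sketch: how result ordering alone can skew outcomes.
# Vendors and per-position click rates are illustrative assumptions.
import random

random.seed(42)

vendors = ["AirA", "AirB", "AirC"]        # three equally relevant offers
position_ctr = [0.30, 0.15, 0.05]         # assumed click rate per rank position

def simulate(order, n_users=10_000):
    """Count clicks per vendor for one fixed presentation order."""
    clicks = {v: 0 for v in vendors}
    for _ in range(n_users):
        for vendor, ctr in zip(order, position_ctr):
            if random.random() < ctr:
                clicks[vendor] += 1
                break                     # a user clicks at most one result
    return clicks

print(simulate(["AirA", "AirB", "AirC"]))  # the top-ranked AirA gets most clicks
print(simulate(["AirC", "AirB", "AirA"]))  # same offers, reversed fortunes
```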

Tools for Responsible AI

To develop responsible AI systems worthy of our trust, practitioners need tools to [4]:

  1. Understand and deal with the preexisting cultural bias that a system may pick up
  2. Reflect upon and deal with the bias introduced in the development process
  3. Anticipate and assess the impact an AI system has during deployment

Tools can take several forms. They include responsible algorithms, such as algorithms that provide an explanation of the choices made by an AI system, or algorithms that optimize, among other things, for fairness to ensure that the outcomes will not benefit one group of people more than others. Tools may also take the form of assessment or auditing tools that test AI algorithms for particular forms of bias. Such tools can be used during development and deployment to check whether changes to the system result in unwanted outcomes.
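As a minimal sketch of what such an assessment tool might check, the snippet below computes a demographic parity gap: the difference in positive-outcome rates between groups. The data, group labels and the idea of flagging gaps above a chosen bound are illustrative assumptions, not a prescribed audit procedure.

```python
# Minimal sketch of a bias audit: demographic parity gap between groups.
# The predictions, group labels and threshold idea are illustrative assumptions.
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Return the spread in positive-outcome rates across groups, plus the rates."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Example: model decisions (1 = selected) for candidates from two groups.
preds  = [1, 1, 1, 1, 0, 1, 0, 0, 0, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap, rates = demographic_parity_gap(preds, groups)
print(rates)                       # {'A': 0.8, 'B': 0.2}
print(f"parity gap: {gap:.2f}")    # flag for review if the gap exceeds a chosen bound
```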

Both types of tools can help in achieving responsible AI, but technology alone can take us only so far in dealing with bias. As bias reflects culture, it takes human understanding to make informed choices. Therefore, responsible AI tools also include best practices, design patterns [5] and, in particular, design methodologies. These range from co-creation workshop formats to prototyping methods and checklists that help to make explicit the values that are now implicitly embedded in technology. These methodologies help practitioners to reflect critically upon those values and to design and implement AI starting from desired values, while giving end users a voice throughout the development and deployment of AI applications.

Responsible AI research now

At ECAAI, and in particular in the Responsible AI Lab, we are doing research with practitioners from different domains to develop and evaluate all three types of tools: responsible AI algorithms, automated assessment tools and AI design methodologies. We want to ensure that the AI that surrounds us will be the AI we want to live with. For example, together with the Dutch broadcasting organisations NPO and RTL and the Universities of Applied Sciences of Rotterdam and Utrecht, we are developing design tools for pluriform recommendation systems and for inclusive language processing. Furthermore, we are working with the City of Amsterdam to research how to guarantee inclusion and diversity in AI systems for recruitment.

If you are interested in collaborating with the Responsible AI Lab, please contact: appliedai@hva.nl or p.wiggers@hva.nl.

References

[1] See: https://aiethicslab.com/big-picture/ for an overview of AI principles developed worldwide.
[2] Independent High-Level Expert Group on Artificial Intelligence (2019). Ethics Guidelines for Trustworthy AI.
[4] The three sources of bias described here are based on: Batya Friedman & Helen Nissenbaum (1996). Bias in Computer Systems. ACM Transactions on Information Systems, Vol. 14(3), pp. 330-347.
[5] For examples of design patterns and assessment tools, see the projects of the MozFest ‘Building Trustworthy AI’ working group: https://www.mozillafestival.org/en/get-involved/building-trustworthy-ai-working-group/
