Humane Conversations: understanding and mitigating bias in automated AI systems
The AI community has focused on developing fixes for harmful bias and discrimination through so-called 'debiasing algorithms', which either attempt to correct data for known or expected biases, or constrain the outputs of a given predictive model to produce 'fair' outcomes.
Sennay and Hinda argue that creating more AI solutions to fix harmful biases in data is not the only approach that should be pursued. A fundamental question we face as researchers and practitioners is not how to fix harmful bias in AI with new algorithms, but rather whether we should be designing and deploying such potentially biased systems in the first place.
Register via email to firstname.lastname@example.org. A webinar link will be sent after registration.