LunCH Lecture Computer Science: Algorithm Audit

AI fairness: where the quantitative and qualitative reasoning paradigms meet

Bias in machine learning models can have far-reaching and detrimental effects. With the increasing use of AI to automate or support decision-making in a wide variety of fields, there is a pressing need for bias assessment methods that address both the statistical detection and the social impact of bias in a context-sensitive way. This is why the NGO Algorithm Audit has developed a comprehensive, two-pronged approach to addressing algorithmic bias. In this presentation, we elaborate on two aspects of assessing AI fairness: (1) our quantitative bias detection tool, and (2) our qualitative, deliberative assessment to interpret quantitative metrics. We discuss a case study – selected as a finalist for Stanford’s AI Audit Competition 2023 – to show how the scalable, data-driven benefits of machine learning work in tandem with the normative, context-sensitive judgment of human experts to determine fair AI in a concrete way.
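To give a flavour of what a quantitative bias metric looks like, the sketch below computes the demographic parity difference (the gap in positive-outcome rates between groups) on toy data. This is a generic illustration of the kind of statistic such tools report, not Algorithm Audit's actual detection method; the function name and data are hypothetical.

```python
def demographic_parity_difference(outcomes, groups):
    """Gap between the highest and lowest positive-outcome rate
    across groups (0 = parity). Generic illustration only."""
    rates = {}
    for g in set(groups):
        group_outcomes = [o for o, grp in zip(outcomes, groups) if grp == g]
        rates[g] = sum(group_outcomes) / len(group_outcomes)
    return max(rates.values()) - min(rates.values())

# Toy data: binary decisions (1 = favourable) for two groups "a" and "b".
outcomes = [1, 0, 1, 1, 0, 1, 0, 0]
groups   = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(outcomes, groups))  # 0.5 (0.75 vs 0.25)
```

A nonzero value flags a statistical disparity; whether that disparity is unfair in context is exactly the question the qualitative, deliberative assessment is meant to answer.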

Register now