Auditing algorithms for bias

By Rumman Chowdhury & Narendra Mulani

In 1971 the philosopher John Rawls proposed a thought experiment to understand the idea of fairness: the “veil of ignorance.” What if, he asked, we could erase our recollections so we had no memory of who we were—our race, our income level, anything that may influence our opinion? Can artificial intelligence provide the veil of ignorance that would lead us to objective and ideal outcomes?

The field of AI ethics draws an interdisciplinary group of lawyers, philosophers, social scientists, programmers and others. Influenced by this community, Accenture Applied Intelligence has developed a fairness tool to understand and address bias in both the data and the algorithmic models that are at the core of AI systems. (An early prototype of the fairness tool was developed at a data study group at the Alan Turing Institute. Accenture thanks the institute and the participating academics for their role.)

In its current form, the fairness evaluation tool works for classification models, which are used, for example, to determine whether or not to grant a loan to an applicant. Classification models group people or items by similar characteristics. The tool helps a user determine whether this grouping occurs in an unfair manner, and provides methods of correction.
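
To make the setting concrete, the sketch below trains a toy loan-approval classifier in Python. The data, the column names and the choice of a scikit-learn logistic regression are invented for illustration; they are not part of Accenture’s tool.

    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Hypothetical loan-application data; every column and value is synthetic.
    rng = np.random.default_rng(0)
    n = 1000
    data = pd.DataFrame({
        "income": rng.normal(50_000, 15_000, n),
        "debt_ratio": rng.uniform(0, 1, n),
        "gender": rng.integers(0, 2, n),  # the user-defined "sensitive" variable
    })
    # Synthetic label: approval loosely tied to income and debt ratio.
    data["approved"] = ((data["income"] / 100_000 - data["debt_ratio"]
                         + rng.normal(0, 0.2, n)) > 0).astype(int)

    X, y = data[["income", "debt_ratio", "gender"]], data["approved"]
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # The kind of classification model the fairness tool audits.
    model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X_train, y_train)
    print("test accuracy:", model.score(X_test, y_test))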

The tool works in three steps:

The first step examines the data for the hidden influence of user-defined “sensitive” variables on other variables. The tool identifies and quantifies what impact each predictor variable has on the model’s output.
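
Continuing the toy classifier above, the sketch below illustrates this first step: it checks how much of each other predictor can be explained by the sensitive variable, and how strongly each predictor drives the model’s output. The proxy check and the use of permutation importance are illustrative choices for this example, not necessarily the tool’s actual method.

    from sklearn.inspection import permutation_importance
    from sklearn.linear_model import LinearRegression

    sensitive = "gender"
    predictors = [c for c in X_train.columns if c != sensitive]

    # How much of each other predictor can the sensitive variable explain?
    for col in predictors:
        proxy = LinearRegression().fit(X_train[[sensitive]], X_train[col])
        r2 = proxy.score(X_train[[sensitive]], X_train[col])
        print(f"{col}: share of variance explained by {sensitive} = {r2:.3f}")

    # How strongly does each predictor, sensitive or not, drive the model's output?
    imp = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
    for col, score in zip(X_test.columns, imp.importances_mean):
        print(f"{col}: permutation importance = {score:.3f}")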

The second step investigates the distribution of model errors for the different classes of a sensitive variable. Our tool applies statistical distortion to repair the error term so that it becomes more homogeneous across the different groups. The degree of repair is determined by the user.
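
Continuing the same toy example, the sketch below shows one illustrative form such a repair could take: each group’s errors are blended toward the pooled error distribution by a user-chosen repair level. The quantile-blending approach is an assumption made for the example, not a description of the tool’s exact transformation.

    import numpy as np

    # Residual-style errors: predicted probability of approval minus the true label.
    proba = model.predict_proba(X_test)[:, 1]
    errors = proba - y_test.to_numpy()
    groups = X_test["gender"].to_numpy()

    repair_level = 0.8  # user-chosen: 0 leaves errors untouched, 1 repairs fully

    pooled_sorted = np.sort(errors)
    repaired = errors.copy()
    for g in np.unique(groups):
        mask = groups == g
        group_errors = errors[mask]
        # Map each error's rank within its group onto the pooled distribution...
        ranks = group_errors.argsort().argsort() / max(len(group_errors) - 1, 1)
        pooled_values = np.quantile(pooled_sorted, ranks)
        # ...and blend toward that pooled value by the user-chosen repair level.
        repaired[mask] = (1 - repair_level) * group_errors + repair_level * pooled_values

    for g in np.unique(groups):
        mask = groups == g
        print(f"group {g}: mean error {errors[mask].mean():+.3f} -> {repaired[mask].mean():+.3f}")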

The third step focuses on false positives, one particular form of model error: instances where the model said “yes” when the answer should have been “no.” The tool examines the false positive rate for each group and enforces a user-determined, equal rate of false positives across all groups.
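
Continuing the toy example, the sketch below shows one illustrative way to enforce an equal false positive rate: pick a separate decision threshold for each group so that every group lands at a single user-chosen target rate. Per-group thresholding is an assumption for this example, not necessarily the tool’s mechanism.

    import numpy as np

    target_fpr = 0.05  # user-chosen false positive rate to enforce for every group
    proba = model.predict_proba(X_test)[:, 1]
    y_true = y_test.to_numpy()
    groups = X_test["gender"].to_numpy()

    for g in np.unique(groups):
        mask = groups == g
        # Scores of this group's true "no" cases.
        negative_scores = proba[mask][y_true[mask] == 0]
        # Thresholding at the (1 - target_fpr) quantile means roughly target_fpr
        # of the group's true negatives get flagged "yes".
        threshold = np.quantile(negative_scores, 1 - target_fpr)
        preds = (proba[mask] >= threshold).astype(int)
        achieved = preds[y_true[mask] == 0].mean()
        print(f"group {g}: threshold {threshold:.3f}, false positive rate {achieved:.3f}")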

Correcting for fairness may reduce the model’s accuracy, so the tool illustrates any change in accuracy that results. Since the balance between accuracy and fairness depends on context, we rely on the user to determine the trade-off. Depending on the application, ensuring equitable outcomes may be a higher priority than optimizing accuracy.
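
Continuing the previous sketch, the snippet below compares overall accuracy before and after the per-group thresholds, which is the kind of accuracy change a user would weigh against the fairness gain.

    import numpy as np

    # Accuracy with a single 0.5 threshold for everyone.
    baseline = (proba >= 0.5).astype(int)
    print("accuracy before correction:", (baseline == y_true).mean())

    # Accuracy with the per-group thresholds that equalize false positive rates.
    corrected = np.empty_like(y_true)
    for g in np.unique(groups):
        mask = groups == g
        negative_scores = proba[mask][y_true[mask] == 0]
        threshold = np.quantile(negative_scores, 1 - target_fpr)
        corrected[mask] = (proba[mask] >= threshold).astype(int)
    print("accuracy after correction:", (corrected == y_true).mean())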

Our tool does not simply dictate what is fair. Rather, it assesses and corrects bias within the parameters set by its users, who ultimately need to define the sensitive variables, the degree of error repair and the target false positive rate.

Rumman Chowdhury leads Accenture’s Responsible AI practice. Narendra Mulani is the lead for Accenture Applied Intelligence.
