By Greg Satell & Josh Sutton
Algorithms can determine what college we attend, whether we get hired, whether we qualify for a loan and even who goes to prison and for how long. Unlike human decisions, the decisions these mathematical models produce are rarely questioned. They just show up on somebody’s computer screen. In some cases, the errors of algorithms are obvious. Far more insidious and pervasive are the subtle glitches that go unnoticed but have very real effects on people’s lives.
Once you get on the wrong side of an algorithm, your life becomes more difficult. Records of those adverse decisions get fed into new algorithms, and your situation can degrade further. Each step in your descent is documented, measured and evaluated. It is imperative that we begin to take the problem of artificial intelligence (AI) bias seriously and take steps to mitigate its effects.
Bias in AI systems has two major sources: the data sets on which models are trained, and the design of the decision-making models themselves. Because bias can enter through so many channels, we do not think it is realistic to believe we can eliminate it entirely, or even come close. We suggest three practical steps leaders can take to mitigate its effects.
First, AI systems must be subjected to rigorous human review. Second, just as banks are required by law to “know their customer,” engineers who build systems need to know their algorithms. And third, AI systems, and the data sources used to train them, need to be transparent and available for auditing. We wouldn’t find it acceptable for humans to make decisions without any oversight, so there’s no reason we should accept it when machines do.
Perhaps most of all, we need to shift from a culture of automation to augmentation. Artificial intelligence works best not as some sort of magic box you use to replace humans and cut costs, but as a force multiplier that you use to create new value.
Greg Satell is an international keynote speaker, adviser and author. Josh Sutton is the CEO of Agorai.