Trustworthy and ethical AI systems–possible?

I hate to say this: Artificial intelligence (AI) as a technology is maturing.

Far from the stuff of science fiction, AI has moved from the exclusive regimes of theoretical mathematics and advanced hardware to an everyday aspect of life. Over the last several years of exponentially accelerating development and proliferation, our needs and requirements for mature AI systems have begun to crystallize.

Trust is not an internal quality of an AI system like accuracy, or even fairness. Instead, it's a characteristic of the human-machine relationship formed with an AI system. No AI system can come off the shelf with trust baked in. Instead, trust needs to be established between an AI user and the system, in a relationship where humans remain in control.

The highest bar for AI trust can be summed up in the following question: What would it take for you to trust an AI system with your life?

Fostering trust in AI systems is the great obstacle to bringing into reality transformative AI technologies like autonomous vehicles or the large-scale integration of machine intelligence into medicine. To neglect the need for AI trust is also to downplay the influence of the AI systems already embedded in our everyday financial and industrial processes, along with the increasing interweaving of our socioeconomic health and algorithmic decision-making.

AI is far from the first technology required to meet such a high bar. The path to the responsible use of AI has been paved by industries as diverse as aviation, nuclear power, and biomedicine. What we’ve learned from their approaches to accountability, risk, and benefit forms the foundation of a framework for trusted AI.

The challenge now is to translate those guiding principles and aspirations into implementation, and make it accessible, reproducible, and achievable for all who engage with the design and use of AI systems. This is a tall order but far from an insurmountable obstacle.

What do we mean by "Dimensions of Trust"?

We trust an AI system along three main dimensions:

1.) Trust in the performance of your AI/machine learning model.

2.) Trust in the operations of your AI system.

3.) Trust in the ethics of your workflow, both in how the AI system is designed and in how it is used to inform your business process.

It's worth acknowledging that trust in an AI system varies from user to user. For a consumer-facing application, the requirements of trust for the business department that created and owns the AI app are very different from those of the consumer who interacts with it, potentially on their own home devices. Both sets of stakeholders need to know that they can trust their AI system, but the trust signals needed and available for each will be quite different. Trust signals refer to the indicators you can seek out in order to assess the quality of a given AI system along each of these dimensions.

But trust signals are not unique to AI—they are something we all use to evaluate even human-to-human connections. Think about what kinds of trust signals you intentionally seek out when meeting a new business partner. They will vary from person to person, but we all recognize that eye contact is important, especially as a sign that someone is paying attention to you while you speak. For some people, a firm handshake is meaningful, and for others, punctuality is vital; a minute late is a sign of thoughtlessness or disrespect. Reflective language is a powerful way to signal that you are listening. To complicate things, think about how trust signals change when evaluating a new acquaintance as a potential friend compared to a business partner.

How does this relate to an AI system? Depending on its use, an AI system might be comparable to any of these human relationships. An AI that is embedded in your personal banking is one that you need to be able to trust like a business advisor. An AI system that is powering the recommendation algorithm for your streaming television service needs to be trustworthy like a friend who shares your genre interests and knows your taste. A diagnostic algorithm must meet the credentials and criteria you would ask of a medical specialist in the field, and be as open and transparent to your questions, doubts, and concerns.

The trust signals available from an AI system are not eye contact or a diploma on the wall, but they serve the same need. Particular metrics, visualizations, certifications, and tools can enable you to evaluate your system and prove to yourself that it is trustworthy.
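As a minimal sketch of what such machine-readable trust signals might look like, the snippet below computes two illustrative ones for a binary classifier: overall accuracy (a performance signal) and the gap in positive-prediction rates between two groups (a rough fairness signal). The metric choices, names, and toy data are assumptions for illustration, not a standard or a recommendation.

```python
# Hypothetical trust signals for a binary classifier.
# Both metric definitions here are illustrative, not canonical.

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the ground truth."""
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

def positive_rate_gap(y_pred, groups):
    """Absolute difference in positive-prediction rates between two groups."""
    rates = {}
    for g in set(groups):
        preds = [p for p, grp in zip(y_pred, groups) if grp == g]
        rates[g] = sum(preds) / len(preds)
    a, b = rates.values()
    return abs(a - b)

# Toy data: true labels, model predictions, and a group attribute per record.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

acc = accuracy(y_true, y_pred)          # 6 of 8 correct -> 0.75
gap = positive_rate_gap(y_pred, groups) # |0.5 - 0.5| -> 0.0
```

In practice a team might publish signals like these on a dashboard or in model documentation, so that stakeholders can inspect them the way they would inspect a credential.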

In conclusion, yes, AI is part of our future, but it is essential to understand the need for its ethical deployment. It is equally important that humans manage AI and ensure that it does not drive the performance of the organization in the wrong direction.

Responsible AI also supports brand differentiation and an upper hand in employee recruiting and retention. To deal with the potential ethical implications of AI, business leaders need to focus on the minimum requirements expected by the Integrity Initiative.

I look forward to your views on this topic; contact me at hjschumacher59@gmail.com
