While companies are making progress deploying artificial intelligence (AI), more than half of enterprises overestimate their levels of maturity in deploying responsible AI models, according to a report from BCG GAMMA, a research group within Boston Consulting Group. The responsible AI framework evaluates the technology’s potential effects on safety, privacy and society at large.
Businesses often struggle with three areas of responsible AI maturity:
• fairness and equity,
• social and environmental impact mitigation, and
• safeguards for human well-being and human authority.
AI technology underpins critical company systems. It’s becoming the backbone of services across the enterprise, from consumer-facing applications to tools that help coordinate manufacturing or logistics. But leaders have long struggled with the ethical dimensions of the technology, including privacy and bias.
Businesses are working to deploy AI, but for many, its ethical dimensions remain out of reach. According to Gartner data, companies find that security and privacy concerns are an obstacle to AI implementation, alongside challenges in integrating AI solutions with existing architecture and in handling data volume and complexity.
However, the benefits of responsible AI include brand differentiation, an upper hand in employee recruiting and retention, and a culture of responsible innovation.
Customer expectations, risk mitigation, regulatory compliance and social responsibility should also drive business leaders toward responsible AI deployment.
To address the potential ethical implications of AI, business leaders need to focus on the traceability and explainability of AI models, the minimum requirements expected by the Integrity Initiative, and the minimum compliance enforced by the National Privacy Commission.
The ethics of big data and AI have become hot topics, but the foundations of ethical technology stretch beyond the emerging trends.
While all businesses are required to follow certain regulatory and legal requirements, what it means to be ethical varies from company to company—and even from person to person.
Ethics can come up in unusual ways, such as blowback for not supporting social issues or labor practices. “Tech is not immune” to the ethical issues faced across the supply chain, such as whether to support a vendor known for union busting or one contracting with controversial organizations. Amazon and Walmart have started to review the extent of their AI implementations.
IT business decisions bring a whole new set of ethical challenges. Use of data and AI, for example, can present privacy and discrimination issues on top of traditional supply chain ethics. And it falls on IT leaders to account for the ethical dilemmas.
These ethical issues feed into a business’s bottom line, too. Strong ethics sustains internal and external trust, which translates into competitive advantage, and stakeholders across the board increasingly use it as a measure of trust.
But there’s no common definition for what ethical technology looks like and the conversation is ongoing. Instead, CIOs and other members of IT leadership are responsible for figuring out what tech ethics mean for their organizations in the near and long term.
If an organization doesn’t do its ethical due diligence, customers will catch on and trust will erode.
In conclusion, AI is part of our future, but it must be deployed ethically. Humans need to manage AI and ensure that it does not drive the organization’s performance in the wrong direction.
As mentioned above, responsible AI supports brand differentiation and gives companies an edge in employee recruiting and retention, while the Integrity Initiative’s minimum requirements and the National Privacy Commission’s compliance rules offer a starting point for addressing AI’s ethical implications.
I look forward to your views on this topic; contact me at hjschumacher59@gmail.com.