Regulating artificial intelligence (AI) is the “wrong goalpost”; the primary objective in its rollout must be safety and trust in the system, according to the data science and AI expert of a major Philippine bank.
Dr. David Hardoon, UnionBank Senior Adviser for Data & Artificial Intelligence, told participants at the EFMA Sustainability and Regulation Community Best Practice Forum that there are safety nets to mitigate the risks associated with AI.
“Data, and to an extent, the AI as a mechanism and tool which manifests possibilities out of data, is an onion,” Hardoon said.
“And what you find with this onion is that it’s not just about data. It’s not just about application. It’s not just about consumer engagement. It’s also about history. It’s also about our understanding of our own current behavior. It’s essentially opening up an immense view that potentially, previously, we were completely unaware of.”
To simplify the use of AI, Hardoon said it may be broken down into at least three segments. The first is data, which may be historically good or bad, as it shows the issues or errors that happened in the past or may happen moving forward.
The second, Hardoon said, is AI itself, the precise approach of extracting information from the available data. Operationalization is the last phase, which puts that information to work.
“When thinking about operationalizing AI governance, it is imperative to have a broad appreciation of the risk that comes from your available historical data—the potential disadvantages, or errors, or issues, or elements that may result in lack of trust that may come from that.”
He emphasized that trust within an organization remains the most important factor when operationalizing AI, comparing it to how individuals trust their closest friends and family members.
“Our trust in them isn’t that they always are correct or even always tell the truth, but it is in their ability to say ‘I’m sorry, I made a mistake. Allow me to correct myself.’ That is the exact same principle which we need to hold ourselves accountable for when we’re applying new technology, in making sure we’re putting in place safety nets.”
With the emergence of new technologies such as the Internet of Things as well as AI, Hardoon said it remains important to include people as part of the “peeling” process in their operation.
“Not that humans may be any better, but we trust humans so far a bit more right now, until we get to that stage of realizing it’s good. Or perhaps in certain areas, we must simply accept that AI should never play a role, because we want to have the ability of continuous intervention in terms of outcome.”