When ChatGPT, an artificial intelligence (AI) program built on a large language model, was released in late November 2022, it caught the world’s attention, and I was experimenting with its capabilities by the second week of December. Yet when we had to submit our course syllabi for the second semester in January 2023, I did not change the class requirements in the economics course I would be teaching.
These were graded essays and discussion boards that comprised a major portion of the final grade. I simply made sure that essays and problem sets would always be accompanied by data sets for direct interpretation and analysis. I was not worried about AI being used on my class requirements.
Why should we worry about AI? Another curious question is: who is worried about AI? The answer to the second question: only those who understand AI, including those who have seen its potential to replace people in their jobs; the rest of the worry is speculative, a fear of the unknown. The majority neither know nor care about AI until it affects them, positively or negatively. But do we actually know how AI affects us, when most of us do not even realize how it operates?
So far, AI’s most direct and widespread engagement with people has been in social media, subtly directing us users toward the content we are most likely interested in, either affirming our biases or satisfying our curiosity. This has created echo chambers that reinforce affinities to beliefs, right or wrong, and to everything that people identify with. Never in history have social and political lines been so clearly drawn in populations around the world, creating huge problems for the consensus building that democracies need in order to work. Experts point to this first wide engagement between AI and humanity in social media, and the verdict is that AI won. Guess, then, who will win in the next stage, against a much more powerful and intelligent AI?
It is not hard to imagine that the next widespread engagement with AI will be the mainstreaming of its use in our everyday work. It will be revolutionary in all respects, but it is difficult to imagine exactly how it will pan out in education, industry, finance, medicine, and scientific research, among others. From the developed countries it will spread throughout the world. How fast it will affect us remains to be seen.
It is not difficult to understand the main ideas of the neural network model, the platform underlying ChatGPT and other large AI models. It is a vast network of input-output prediction units that runs through thousands of recursive iterations until its predictions fit the data it is trained on. Once the neural network has been designed and programmed, the key is the training data. If it is trained on hundreds of images of dogs and then presented with a blurred image of a dog, it can predict, and even reconstruct, a clear image. The learning itself happens through a process called backward propagation, an optimization procedure that reduces the prediction error over many iterations until the model’s output fits the training data, modeling, for example, what a clear image of a dog looks like.
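To make the idea concrete, here is a minimal sketch of that training loop in Python with NumPy. It is purely illustrative, not drawn from ChatGPT or any real system: the toy data, the single hidden layer of eight units, and the learning rate are all hypothetical choices.

import numpy as np

# Purely illustrative: a tiny neural network trained by backward
# propagation. The data and network sizes are hypothetical.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))                         # 100 toy inputs
y = (X.sum(axis=1, keepdims=True) > 0).astype(float)  # toy targets

W1 = rng.normal(scale=0.5, size=(4, 8))  # input -> hidden weights
W2 = rng.normal(scale=0.5, size=(8, 1))  # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(1000):                 # many recursive iterations
    h = sigmoid(X @ W1)                  # forward pass: predict
    pred = sigmoid(h @ W2)
    error = pred - y                     # how far off the data we are

    # Backward propagation: push the error back through the network
    # and nudge each weight to shrink the prediction error.
    g_out = error * pred * (1 - pred)
    g_W2 = h.T @ g_out
    g_hid = (g_out @ W2.T) * h * (1 - h)
    g_W1 = X.T @ g_hid
    W2 -= lr * g_W2 / len(X)
    W1 -= lr * g_W1 / len(X)

print("mean absolute error after training:", float(np.abs(error).mean()))

Each pass repeats the same two moves described above: predict, then propagate the error backward to adjust the weights, so that over many iterations the outputs fit the training data.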
The neural network mimics how the brain works, which brings us to the question of whether AI also thinks like a human. The majority of experts, including those who build large AI models, do not think so. It is a finely tuned system of statistical prediction based on past “knowledge” (the training data), which is hardly a human thinking process. We humans, in contrast, do our thinking coded in certain forms of language; that is how we processed information and formed the conventional knowledge of what a dog is.
The neural network model reverses this process, learning in the most efficient way through backward propagation. Large AI models thus benefit from the vast human knowledge accumulated over the past millennia. This has led one leading AI scientist to urge caution on AI development: what an unparalleled super-intelligent AI could do is something that needs to be understood first.
As large AI models learn from accumulated human knowledge, will they form values that are also aligned with human values? This is what experts call the alignment problem. It does not take long, using ChatGPT, to notice that it leans liberal, or is biased toward liberal ideas; conservatives were the first to point this out. So what makes a universal common value that AI should possess, when we human populations hold conflicting values that sometimes explode into violent conflict? This stokes the fear that a super-intelligent AI could subtly manipulate humans into more devastating conflicts, thriving on our inherent human weaknesses.
Mr. Joselito T. Sescon is Assistant Professor at the Department of Economics of Ateneo de Manila University.