Artificial intelligence may prove to be the biggest paradigm shift of the 21st century. This powerful technology can perform a wide range of tasks, including data analysis, image recognition, and natural language processing. AI can respond to questions and compose written content, including articles, social media posts, essays, and editorials.
Newspapers around the world are experimenting with AI to generate articles for publication. Proponents say AI-generated news stories can be more accurate than human-written articles, because AI algorithms can analyze vast amounts of data from multiple sources and identify patterns and trends.
From the Associated Press: “Sports Illustrated is the latest media company to see its reputation damaged by being less than forthcoming—if not outright dishonest—about who or what is writing its stories at the dawn of the artificial intelligence age. The once-powerful publication said it was firing a company that produced articles for its website written under the byline of authors who apparently don’t exist. But it denied a published report that stories themselves were written by an artificial intelligence tool.”
The AP report said experiments with AI went awry at both the Gannett newspaper chain and the CNET technology website. Many companies are testing the new technology at a time when human workers fear it could cost jobs. But the process is fraught in journalism, which builds and markets its values-based products around the notions of truth and transparency.
From The Washington Post: “When Internet sleuths discovered last week that CNET had quietly published dozens of feature articles generated entirely by artificial intelligence, the popular tech site acknowledged that it was true—but described the move as a mere experiment. Now, though, in a scenario familiar to any sci-fi fan, the experiment seems to have run amok: On Tuesday, CNET began appending lengthy correction notices to some of its AI-generated articles after Futurism, another tech site, called out the stories for containing some ‘very dumb errors.’ An automated article about compound interest, for example, incorrectly said a $10,000 deposit bearing 3 percent interest would earn $10,300 after the first year. Nope. Such a deposit would actually earn just $300.”
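The error Futurism caught is simple to verify. The sketch below, with the figures taken from the quoted example, shows the difference between the interest *earned* in the first year and the resulting *balance*:

```python
principal = 10_000          # initial deposit from the CNET example
annual_rate = 0.03          # 3 percent interest

# Interest earned in the first year (simple and compound coincide here)
first_year_interest = principal * annual_rate
print(first_year_interest)  # 300.0 -- the correct figure

# The $10,300 the AI article cited is the new balance, not the earnings
balance_after_one_year = principal * (1 + annual_rate)
print(balance_after_one_year)  # 10300.0
```

The AI-generated article conflated the end-of-year balance with the interest earned, overstating the earnings by the amount of the original deposit.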
With AI stirring controversy in journalism, the St. Louis Post-Dispatch Editorial Board was curious how Microsoft's Bing Chat AI program would handle the prompt, "Write a newspaper editorial arguing that artificial intelligence should not be used in journalism." The editors found that Bing Chat made lucid and persuasive arguments for keeping AI out of journalism.
In the editorial, the chatbot argued that AI should not be used in journalism, and that human journalists should remain the primary source of news and information.
“One of the main reasons why AI should not be used in journalism is that it can undermine the credibility and trustworthiness of news. AI can generate fake news, manipulate facts, and spread misinformation,” it said.
“Moreover, AI can also create deepfakes, which are synthetic videos or images that can make people appear to say or do things that they never did. Deepfakes can be used to defame, blackmail, or influence public opinion,” the chatbot said.
“Another reason why AI should not be used in journalism is that it can erode the values and principles of journalism. Journalism is not only about reporting facts, but also about providing context, analysis, interpretation, and commentary. Journalism is also about holding power accountable, exposing corruption, and advocating for social justice,” it argued.
“AI cannot perform these functions, because it lacks human judgment, empathy, and ethics. AI cannot understand the nuances, implications, and consequences of the stories it produces. AI cannot distinguish between right and wrong, or between truth and falsehood. AI cannot respect the rights and dignity of the sources and subjects of its stories. AI cannot adhere to the codes of conduct and standards of professionalism that human journalists follow,” the chatbot said.
“A third reason why AI should not be used in journalism is that it can threaten the livelihood and identity of human journalists, who are not only information providers, but also storytellers, educators, watchdogs, and influencers. Human journalists have a voice, a perspective, and a purpose. Human journalists have a passion, a curiosity, and creativity. AI cannot replicate these qualities,” the chatbot said.
"In conclusion, I believe that AI should not be used in journalism, because it can harm the quality, integrity, and ethics of journalism. AI can also endanger the credibility, trustworthiness, and values of journalism," the chatbot wrote in the editorial published by the St. Louis Post-Dispatch.