By: Rey Anthony Ostria | Sep 24, 2023

In “Minari,” the 2020 A24 film, my favorite line is from Soonja, who says that some things are more dangerous when you don’t see them. Although Soonja wasn’t talking about technology in that scene, the lesson from that short exchange about a snake whose venom could kill large animals and humans applies just as well to how we should approach innovations.

Talk of artificial intelligence inevitably brings about a spectrum of fear and optimism. Those who see its potential for harm understandably and rightly point to the loss of creativity, the danger of pushing more people to the margins (AI learns from data fed to it by people who have their own biases), its effects on truth-telling, and all the jobs that may soon become obsolete.

Just this week, GMA introduced Maia and Marco, two AI-generated sportscasters powered by text-to-speech technology. This use of AI is not just shallow but also disrespectful and dangerous to the profession. GMA does not value the work of reporters enough if it is willing to replace them with avatars. If the network wants to deliver quality reports, it must continue hiring proficient reporters.

AI is already in use in other newsrooms, but not in the way GMA chose to use it. It suggests better headlines, spots stories elsewhere on the internet, mines data from past games or events to surface patterns or deviations, and powers dynamic paywalls that appear when the technology deems a reader likely to pay for the content anyway.

These ways of incorporating AI into news production serve newsrooms and reporters rather than replace them. GMA could have employed AI to give its courtside reporters automated, real-time data analysis to aid their reporting. These analyses could make predictions based on the teams’ or players’ game histories so reporters can formulate better questions. For example, if the AI predicts that a team’s defense is no match for its opponent, the reporter can ask how the team trained on guarding.

AI can also power a public sphere for a healthier exchange of ideas. A public sphere is a place or platform where anyone is free to join conversations, critique, and share ideas. At the moment, we can treat social media and its affordances as a public sphere, but it has its limitations.

Social media excludes those who have irked the platforms’ owners. Users may be de-platformed through blocking, banning, or shadow-banning. Others choose to stay off these platforms for fear of surveillance by repressive state apparatuses, especially if they know, or even just suspect, that a platform’s ownership has ties to the government. A clear example is Elon Musk’s suspension of the accounts of journalists who were critical of his takeover of Twitter (now called X), along with the account of a citizen journalist who used publicly available data to publish the whereabouts of the billionaire’s jet.

A better alternative to social media is an open-source platform that runs on artificial intelligence. Being open-source allows anyone to build on the technology (it is not locked behind copyright or trademark), and anyone is free to interact with it without fear of being de-platformed. It also benefits the public because they can see how the code behind the platform operates, very much unlike how secretive social media platforms become when it comes to the use of our data during elections.

A social media platform that runs on AI could help reporters find news reports and sources that benefit the news-consuming public, as opposed to a platform that feeds readers information chosen by algorithms that benefit only the owners and the advertisers. It could help journalists, sources, and audiences interact better and without impasse, since AI can be taught to filter out insensitive comments and to surface underrepresented voices without anyone being blocked. And it could warn audiences when they are being exposed to disinformation, misinformation, or any form of simulacra; false information need not be outright banned, but it is important to highlight what is real. In other words, AI technologies can be used to counter consumerism, censorship, and propaganda.

AI can also help in fact-checking (suggesting archival documents and performing image forensics), in helping investigative journalists detect inconsistencies in data, in producing data visualizations, in translating quotes (although these should be verified by real experts), and in analyzing the impact of news stories on social media and other media. There are thousands of ways to use AI that won’t be detrimental to the journalism profession.

After a mini lecture with high school and college educators about the convergence of AI and education, I was approached by a researcher who was planning to build an AI tool that would translate Filipino Sign Language into text. My only advice to him was to hold detailed consultations with members of the PWD community so that he would understand their thoughts on the tool’s potential benefits and dangers. Similarly, each newsroom planning to fuse journalism with artificial intelligence—it doesn’t matter whether it’s artificial narrow, general, or super intelligence—must first consult the people who will use it and the people who are supposed to benefit from it. AI is here and it is here to stay. There is no point throwing rocks at it now. What we must do is welcome it, embrace it, and study it.

If artificial intelligence is feared rather than wielded by those who care for the truth, there is a high chance that its effects will be greater than those of social media on democracies everywhere. Remember in the mid- to late 2000s and early 2010s when we feared that social media like Friendster, Multiply, MySpace, Facebook, and Twitter would destroy communication? That family members would not talk during gatherings? That dating would never be the same again? While we were scared of all that, it never crossed our minds that people would wield the data extracted from all of us to destroy the public’s trust in institutions.

If journalism still thinks this is just a fad that will die a natural death before it becomes a household word, artificial intelligence will become that water moccasin: hidden, and all the more dangerous for it.

Image by Tara Winstead | Pexels





