Working in journalism right now is strange and often a little scary. Misinformation and disinformation are inseparable from online reality, as the internet's ever-growing web of nonsense has become a parallel reality that competes with factual information for your attention and trust. AI-generated content farms are successfully masquerading as real news sites. And some real news organizations (my former employer among them) have seen a tiresome pattern of internal unrest, loss of confidence in leadership, and waves of layoffs.
The impact of this change is now coming into focus. On Wednesday, the Pew Research Center released a new report on how Americans get their news on social media. It's an interesting snapshot not just of who watches news on TikTok, Instagram, Facebook, or X, but also of whom they trust to deliver it.
TikTok users who say they regularly get news on the platform are more likely to get it from influencers there than from media outlets or individual journalists. But they're most likely to get news on TikTok from "other people they don't know personally."
And while majorities of users on all four platforms say they regularly see some form of news-related content, only a fraction of them log on to social media specifically to consume it. X, formerly Twitter, is the only platform where most users say news is a reason they check their feeds, whether a major one (25 percent) or a minor one (40 percent). By contrast, only 15 percent of TikTok users said news was a major reason they scroll the For You page.
The Pew report arrived as I was puzzling over how to answer a big question: how is generative AI going to change media? I think the new data highlights just how complicated the answer is.
There are plenty of ways that generative AI is already changing journalism and the larger information ecosystem. But AI is just one part of an interconnected web of incentives and forces reshaping how people get information and what they do with it. Some of the journalism industry's current problems are more or less own goals, and no amount of fretting about AI or subscription numbers will fix them.
With that said, here are some things to watch for:
AI can make bad information look more legitimate
Fact-checking an endless river of information and commentary is difficult, and rumors spread much faster than verification, especially during a rapidly developing crisis. People turn to the internet in those moments for information, context, and ways to help. That frantic, charged search for the latest update has long been easy for bad actors to manipulate, and generative AI can make manipulating it even easier.
Tools like ChatGPT can mimic the voice of a news article, and the technology has a history of "hallucinating" citations and references that do not exist. Now, people can use an AI-powered chatbot to dress up bad information in all the trappings of verified reporting.
Julia Angwin, founder of Proof News and a longtime technology and information journalist, recently told The Journalist's Resource, "What we're not prepared for is that there are basically these machines that can generate plausible-sounding text that has nothing to do with the truth."
"For a profession that's meant to be factual, all of a sudden you're competing in the marketplace — basically, the marketplace of information — with all these words that sound and feel factual but have nothing to do with accuracy," she noted.
The flood of plausible-sounding text has implications beyond journalism. Even people who are quite adept at judging whether an email or article is credible can have their radar scrambled by AI-generated text. AI-generated phishing emails and sham reference books, not to mention AI-generated photos and videos, are already fooling people.
AI doesn’t understand jokes
It didn't take long for Google's AI Overviews tool, which generates automated answers to search queries on the results page, to start producing some pretty questionable results.
Famously, Google's AI Overviews told searchers to put glue on pizza to make the cheese stick better, drawing from a joke answer on Reddit. Others found the tool instructing searchers to change their blinker fluid, a reference to a running joke on car-maintenance forums (there is no blinker fluid). Another overview encouraged eating rocks, apparently drawing on an Onion article. These glitches are funny, but AI Overviews doesn't only read joke Reddit posts.
Google's response to the overview problems attributes the tool's inability to separate sarcasm from sincere answers partly to "data voids": cases where a particular search term or query doesn't have much serious, well-informed content written about it online, meaning the top results for a related query will likely be less reliable. (I'm familiar with data voids from writing about health misinformation, where they are a real problem.) One solution to data voids is more reliable content about the topic at hand, created and vetted by experts, reporters, and other people and organizations who can provide informed, factual information. But as Google surfaces more answers in its own results rather than pointing people to outside sources, the company is also removing some of the incentives to create that content in the first place.
Why should a non-journalist care?
I worry about this stuff because I'm a reporter who has covered the weaponization of information online for years. That means two things: I know a lot about how misinformation and rumors spread and what their consequences are, and I make a living doing journalism and intend to keep doing so. So sure, you could say I have a stake here. AI may be coming for my job!
I'm a little skeptical of the idea that generative AI, a tool that doesn't do original research and doesn't have a reliable way to verify what it produces, can replace a practice that, at its best, depends on original reporting and on verifying the results. When these tools are used properly and that use is disclosed to readers, I don't think they're useless for researchers and journalists. In the right hands, generative AI is just a tool. What generative AI can do in the hands of bad actors and grifters, or when deployed to maximize profit regardless of the informational pollution it creates, is fill your feed with junky, inaccurate content that sounds like news but isn't. And while AI-generated nonsense may be a threat to the media industry, journalists like me are not its target. You are.