
    Misinformation is winning the war against misinformation


A man looks at the website and Facebook group of the anti-vaccine "pure blood" movement in this illustration photo taken in Los Angeles, January 20, 2023. Vaccine skeptics refusing transfusions for life-saving surgeries, Facebook groups shut down for inciting violence, and a global campaign to match unvaccinated blood donors with recipients: a surge in Covid-19 misinformation has spawned the so-called "pure blood" movement. (Photo by Chris Delmas/AFP via Getty Images)

Misinformation on the internet has never been worse. Or at least, that's what my analysis — based on vibes — tells me.

People on TikTok are eating up videos that say a bunch of wrong things about the dangers of sunscreen, while the platform's in-app shop pushes obscure books containing bogus cancer cures up the Amazon bestseller list. Meanwhile, the Republican nominee for president is fresh off a seemingly successful push to dismantle nonpartisan efforts to counter election misinformation. Also, Google's AI-generated search overviews told people to put glue on pizza.

But all of this is just a hunch. Can I prove it with data? Sadly, no. The information I — or, more accurately, researchers with real expertise in the matter — would need is locked behind the opaque doors of the companies that run the platforms and services hosting the internet's worst content. Assessing the reach of misinformation is, at present, a difficult and indirect process with imperfect results.

For my final newsletter contribution, I wanted to find a way to assess the state of misinformation online. As I've covered this topic time and time again, one question keeps popping up in my head: Do companies like Google, Meta, and TikTok even care about dealing with this problem meaningfully?

The answer to that question is also incomplete. But there are some educated guesses to be made.

    Ways to measure misinformation are disappearing

    One of the most important things a journalist can do when writing about the spread of bad information online is to find a way to measure its reach. For example, there is a huge difference between a YouTube video with 1,000 views and one with 16 million views. But recently, some of the key metrics used to give context to “viral” misinformation have disappeared from public view.

TikTok disabled view counts for popular hashtags earlier this year, shifting to showing only the number of posts created with a given hashtag. Meta is shutting down CrowdTangle — once the go-to tool for researchers and journalists to closely examine how information spreads across social media platforms — in August, just a few months before the 2024 elections. And Elon Musk decided to make likes private on X, a decision that, to be fair, is bad for accountability but may have some benefits for ordinary users of the platform.

Between all of this and decreasing access to platform APIs, researchers are limited in how much they can actually track or speak to about what's going on.

"How do we track things over time, apart from relying on what the platforms themselves say?" said Ananya Sen, an assistant professor of information technology and management at Carnegie Mellon University, whose recent research shows how companies inadvertently fund misinformation-laden sites when they use large ad tech platforms.

Disappearing metrics are basically the opposite of what many experts recommend for countering manipulated information. Transparency and disclosure are "key" elements of reform efforts like the EU's Digital Services Act, said Yacine Jernite, who leads machine learning and society work at Hugging Face, an open-source data science and machine learning platform.

"We've seen that people use [generative AI] services for information about elections, and those can have misleading outputs," Jernite added, "so it's especially important to accurately represent the reliability of these services and avoid overhyping them."

    It’s generally better for the information ecosystem when people learn more about what they’re using and how it works. And while some aspects of this fall under media literacy and information hygiene efforts, part of it has to come from the platforms and their boosters. Hyping an AI chatbot as a next-generation search tool sets expectations that the service itself doesn’t meet.

    Platforms don’t have much incentive to care

Platforms aren't just amplifying bad information; they're making money from it. From TikTok Shop purchases to ad sales, if these companies took meaningful, systematic steps to change how misinformation spreads on their platforms, they could be working against their own business interests.

Social media platforms are designed to show you things you want to engage with and share. AI chatbots are designed to give the illusion of knowledge and research. But neither of these models is great at assessing accuracy, and doing so often requires limiting how a platform works as intended. Slowing down or narrowing how such a platform works means less engagement, which means less growth, which means less money.

"I personally can't imagine that they would be as aggressively interested in tackling this as the rest of us are," said Evan Thornburg, a bioethicist who posts on TikTok as @gaygtownbae. "What they're able to monetize is our attention, our interest, and our purchasing power. Why would they narrow that down?"

Many platforms ramped up anti-disinformation efforts after the 2016 US election and again at the start of the Covid pandemic. But since then, there has been something of a pullback. Meta laid off employees working on content moderation in 2023 and rolled back its Covid-era rules. Maybe they're tired of being held responsible for this stuff at this point. Or maybe they see an opportunity to get ahead of it as the technology changes.

So what do they care about?

Again, it's hard to quantify the big platforms' efforts to prevent misinformation, which once more leaves me leaning on informed vibes. To me, it seems like the major platforms have deprioritized the fight against misinformation and disinformation, and there's a general kind of fatigue about it more broadly. That doesn't mean no one is doing anything.

Prebunking, which essentially involves debunking rumors and falsehoods before they gain traction, is quite promising, especially when applied to election disinformation. Crowdsourced fact-checking is also an interesting approach. And to the platforms' credit, they do continue to update their rules as new issues arise.

Here's where I have some sympathy for the platforms: this work is exhausting, and it's hard to be told repeatedly that you're not doing enough. But pulling back doesn't stop bad information from finding an audience over and over again. While these companies weigh how much care to put into moderating and addressing their platforms' capacity to spread falsehoods, the people targeted by those falsehoods continue to suffer.


