
    What, if anything, is AI search good for?


[Illustration: a 1980s computer worker looking intently at a vintage computer screen, retro-stylized, with wood paneling in the background.]

It’s been a month since Google’s spectacular blunder. Its new AI Overviews feature was supposed to “take the legwork out of searching” by providing easy-to-read answers to our queries based on multiple search results. Instead, it told people to eat rocks and put glue on their pizza. You could ask Google which countries in Africa start with the letter “K” and Google would tell you there are none. In fact, you may still get these wrong answers, because AI search is a mess.

This spring looked like a turning point for AI search, thanks to a few big announcements from major players in the space. One was Google’s AI Overviews rollout, and the other came from Perplexity, an AI search startup that has already been hailed as a worthy alternative to Google. At the end of May, Perplexity launched a new feature called Pages that can create custom web pages full of information on a specific topic, like a smart friend who does your homework for you. Then Perplexity got caught stealing. For AI search to work well, it seems, it needs to cheat a little.

There is plenty of ill will to go around, and critics are piling on over AI search’s mistakes and missteps. A group of online publishers and creators went to Capitol Hill on Wednesday to lobby lawmakers to look into Google’s AI Overviews feature and other AI technologies that pull content from independent creators. That comes just days after the Recording Industry Association of America (RIAA) and a group of major record labels sued two AI companies that generate music from text prompts for copyright infringement. And let’s not forget that several newspapers, including the New York Times, have sued OpenAI and Microsoft for copyright infringement for scraping their content to train the same AI models that power their search tools. (Vox Media, the company that owns this publication, has a licensing agreement with OpenAI that allows its content to be used to train OpenAI’s models and to appear in ChatGPT. Our journalism and editorial decisions remain independent.)

Generative AI technology is supposed to transform the way we search the web. At least that’s the line we’ve been fed since ChatGPT burst onto the scene in late 2022, and now every tech giant is pushing its own brand of AI technology: Microsoft has Copilot, Google has Gemini, Apple has Apple Intelligence, and so on. While AI can do more than just help you find things online, getting people to ditch Google Search still seems to be the holy grail for AI companies. Even OpenAI, the maker of ChatGPT, is reportedly building a search engine to compete directly with Google.

But despite the very public efforts of so many companies, AI search won’t transform the way we find answers online any time soon, according to the people I spoke to.

It’s not just that AI search isn’t ready for primetime because of a few flaws. Those flaws are so deeply woven into how AI search works that it’s unclear whether it can ever be good enough to replace Google.

    “It’s a good addition, and sometimes it’s really great,” Chirag Shah, a professor of information science at the University of Washington, told me. “But I think we’re still going to need traditional search around.”

Without going into all the pitfalls of AI search here, let’s highlight two that were on display in the recent Google and Perplexity kerfuffles. The Google pizza-glue incident shows just how stubborn generative AI’s hallucination problem is. A few days after Google launched AI Overviews, some users noticed that if you asked Google how to keep the cheese from falling off your pizza, it would suggest adding some glue. That particular answer appears to come from an old Reddit thread that Google’s AI, for some reason, treated as an authoritative source, even though it’s quickly obvious the Redditors were joking about eating glue. A few weeks later, The Verge’s Elizabeth Lopatto reported that Google’s AI Overviews feature was still recommending glue on pizza. Google dialed back its AI Overviews feature in May after the viral failures, so it’s now hard to get an AI Overview to appear at all.

The problem isn’t just that the large language models that power AI tools can hallucinate, or make up information, in certain situations. They also can’t tell good information from bad, at least not right now.

    “I don’t think we’ll ever be at a point where we can guarantee that there won’t be hallucinations,” said Yoon Kim, an assistant professor at MIT who studies large language models. “But I think a lot of progress has been made to reduce these hallucinations, and I think we’ll get to a point where they’ll be good enough to use.”

The recent Perplexity drama highlights a different problem with AI search: it accesses and republishes content it isn’t supposed to. Perplexity, whose investors include Jeff Bezos and Nvidia, has made a name for itself by providing detailed answers to search queries and showing its sources. You can ask it a question and it will return a conversational answer, complete with citations from around the web, which you can refine by asking follow-up questions.

When Perplexity launched its Pages feature, however, it became clear that its AI had an uncanny ability to rip off journalism. Perplexity even makes Pages look like a news section of its website. One such page published a summary of an exclusive, paywalled Forbes investigative report on Eric Schmidt’s drone project. Forbes accused Perplexity of plagiarizing its content, and Wired later reported that Perplexity has been scraping content from websites that have blocked its crawlers from doing exactly that. The AI-powered search engine will even generate incorrect answers to queries based on URLs or metadata descriptions. (In an interview with Fast Company last week, Perplexity CEO Aravind Srinivas denied some of the findings of Wired’s investigation and said, “I think there’s a fundamental misunderstanding of how it works.”)

The reasons AI-driven search is so bad at sourcing are both technical and simple, Shah explained. The technical explanation involves something called retrieval-augmented generation (RAG), which works a bit like a professor hiring a research assistant to dig up more information on a particular topic when the professor’s personal library isn’t enough. RAG addresses some of the problems with how the current generation of large language models produces content, namely the frequency of hallucinations, but it also creates a new problem: it cannot tell good sources from bad ones. As it stands, AI lacks good judgment.
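To make the retrieval-augmented generation idea a bit more concrete, here is a minimal sketch of the pipeline in Python. Everything in it is illustrative rather than any company’s actual system: call_llm is a hypothetical stand-in for whatever model a search engine really uses, and the retrieval step is a toy keyword-overlap ranker rather than a production vector index. The point is structural: whatever gets retrieved is pasted into the prompt, whether it came from a newspaper or a joke thread.

```python
# Minimal sketch of retrieval-augmented generation (RAG), assuming a
# hypothetical call_llm() in place of a real language model API and a
# toy keyword-overlap ranker in place of a real vector index.

from collections import Counter

# A toy "web index": in a real system these would be crawled pages
# stored in a vector database, not an in-memory list.
DOCUMENTS = [
    "Official food-safety guidance: non-toxic glue is still not food.",
    "Reddit joke thread: add 1/8 cup of glue to your pizza sauce for tackiness.",
    "Cooking site: let pizza rest a few minutes so the cheese sets.",
]

def score(query: str, doc: str) -> int:
    """Rank a document by crude keyword overlap with the query."""
    q_words = Counter(query.lower().split())
    d_words = Counter(doc.lower().split())
    return sum((q_words & d_words).values())

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents that best match the query."""
    return sorted(DOCUMENTS, key=lambda d: score(query, d), reverse=True)[:k]

def call_llm(prompt: str) -> str:
    """Hypothetical model call; a real system would query an LLM here."""
    return f"(model answer generated from a prompt of {len(prompt)} characters)"

def rag_answer(query: str) -> str:
    # 1. Retrieve: fetch outside sources the model's training data may lack.
    sources = retrieve(query)
    # 2. Augment: stuff the retrieved text into the prompt as context.
    context = "\n".join(f"- {s}" for s in sources)
    prompt = f"Answer using only these sources:\n{context}\n\nQuestion: {query}"
    # 3. Generate: the model answers from that context, good or bad.
    # Note the missing step: nothing here judges whether a source is a
    # newspaper or a joke thread, which is the failure described above.
    return call_llm(prompt)

print(rag_answer("how do I keep cheese from sliding off my pizza"))
```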

When you or I do a Google search, we know that the long list of blue links will include high-quality sources, such as newspaper articles, and low-quality or unverified ones, such as old Reddit threads or SEO-farm garbage. We can tell the good from the bad in a split second, thanks to years of experience honing our own Googling skills.

And then there’s the kind of common knowledge AI doesn’t have, like knowing whether it’s okay to eat rocks and glue.

    “AI-driven search doesn’t have that capability yet,” Shah said.

None of this is to say that you should look away the next time you see an AI Overview. But instead of treating it as an easy way to get an answer, you should think of it as a starting point, like Wikipedia. It’s hard to know how that answer ended up at the top of a Google search, so you might want to check the sources. After all, you’re smarter than the AI.
