
You searched on Google. The AI hallucinated an answer. Who is legally responsible?


    14 May 2024, USA, Mountain View: Google CEO Sundar Pichai speaking at Google I/O. At the developer conference, everything revolved around the topic of artificial intelligence (AI). Photo: Christoph Dernbach/dpa (Photo by Christoph Dernbach/Photo Alliance via Getty Images)

Google’s shift toward using AI to generate a written answer to a user’s search, rather than providing a list of links algorithmically ranked by relevance, was inevitable. Before AI Overviews, which launched last week for US users, there was Google’s knowledge panel, the information box that appears at the top of some searches and encourages users to get their answers directly from Google rather than clicking through to the results.

An AI Overview appears at the top of the results page and summarizes an answer for a portion of queries. The answers are drawn from multiple sources, which are cited in a drop-down gallery beneath the summary. As with any AI-generated response, these answers vary in quality and reliability.

The overviews have told users to change their blinker fluid, which does not exist, apparently because the tool picked up joke responses from forums where users ask their peers for car advice. In a test I ran on Wednesday, Google correctly generated instructions for doing a pushup, drawing heavily from a New York Times article. Less than a week after launching the feature, Google announced that it is testing ways to include ads in its generative responses.

I’ve been writing about bad stuff online for years now, so it’s not a huge surprise that, after getting access to AI Overviews, I started googling a bunch of things that generative search tools could pull from unreliable sources. The results were mixed, and they seemed to depend a lot on the exact phrasing of my question.

When I typed in questions asking about two different people who are heavily involved in promoting questionable natural “cures” for cancer, the generated answer for one of them simply repeated that person’s claims uncritically. For the other name, Google declined to generate a response at all.

Results for basic first aid questions, such as how to clean a wound, drew from reliable sources when I tried them. Questions about “detoxes,” meanwhile, produced answers that repeated unsubstantiated claims and were missing important context.

But rather than trying to get a handle on how reliable these results are overall, here’s another question worth asking: if something goes wrong with an answer in Google’s AI Overviews and that answer hurts someone, who is to blame?

    Who is responsible for AI?

According to Sameer Jain, vice president of policy at the Center for Democracy and Technology, the answer to that question may not be simple. Section 230 of the 1996 Communications Decency Act largely shields companies like Google from liability over third-party content posted on their platforms, because Google is not considered the publisher of the information it hosts.

Jain said it was “less clear” how the law would apply to AI-generated search answers. AI Overviews make Section 230 protections a little messy because it is harder to tell whether the content was created by Google or merely republished by it.

“If you have an AI overview that contains a hallucination, it’s a little hard to see how that hallucination wasn’t at least in part created or developed by Google,” Jain said. But a hallucination is different from merely surfacing bad information. If Google’s AI Overview cites a third party that is itself providing inaccurate information, the protections would still apply.

A bunch of other situations sit in a gray area for now: Google’s generated answers draw from third parties but don’t necessarily quote them directly. So is that original content, or more like the snippets that appear beneath search results?

While generative search tools like AI Overviews are new territory in terms of Section 230 protections, the risks are not speculative. Apps that claim to use AI to identify mushrooms for would-be foragers are already available in app stores, despite evidence that these tools are not very accurate. And even Google’s demo of its new video search feature contained a glaring error, as The Verge noted.

Eating the internet’s source material

Beyond the question of when Section 230 may or may not apply to AI-generated answers, there’s another issue: the incentives that AI Overviews do or don’t create for producing reliable information in the first place. AI Overviews rely on the web continuing to contain plenty of researched, factual information. But the tool seems to make it more difficult for users to click through to those sources.

“Our main concern is about the potential impact on people’s motivation,” Jacob Rogers, associate general counsel for the Wikimedia Foundation, said in an email. “Generative AI tools must include recognition and reciprocity for the human contributions that they are built on, through clear and consistent attribution.”

The Wikimedia Foundation has not, to date, seen a major drop in traffic to Wikipedia or other Wikimedia projects as a direct result of AI chatbots and tools, but Rogers said the foundation is monitoring the situation. Google has in the past relied on Wikipedia to populate its knowledge panels, and has drawn on its work to provide fact-check pop-up boxes on YouTube videos about controversial topics.

There is a central tension here that is worth watching as this technology becomes more prevalent. Google has an incentive to present its AI-generated answers as authoritative. Otherwise, why would you use them?

    “On the other hand,” Jain said, “especially in sensitive areas like health, it would probably want to have some kind of disclaimer or at least some cautionary language.”

Google’s AI Overviews carry a short note below each result clarifying that this is an experimental tool. And, based on my unscientific poking around, I would guess that Google has chosen for now to avoid generating answers on some controversial topics.

The overviews will, with some tweaking, generate answers to questions about their own potential liability. After a couple of tries, I asked Google, “is Google a publisher.”

“Google is not a publisher because it does not create content,” the reply begins. I copied that sentence and pasted it, surrounded by quotation marks, into another search. The search engine found zero results for the exact phrase.
