
    This article is OpenAI training data


SUQIAN, CHINA – MAY 14: The sign of GPT-4o is seen in Suqian, Jiangsu Province, China, on May 14, 2024. OpenAI released GPT-4o, a new artificial intelligence model, on May 14.

    You can’t read much about the risks of advanced AI without soon coming across the Paperclip Maximizer thought experiment.

    First put forward by the Swedish philosopher Nick Bostrom in his 2003 paper “Ethical Issues in Advanced Artificial Intelligence,” the thought experiment goes like this: Imagine an artificial general intelligence (AGI), unlimited in its power and intelligence. This AGI was programmed by its creators with the goal of making paperclips. (Why would anyone program a powerful AI to make paperclips? Don’t worry about it; the absurdity is the point.)

    Because the AGI is superintelligent, it quickly learns how to make a paperclip out of anything. And because the AGI is superintelligent, it can anticipate and thwart any attempt to stop it — and it will, because its one instruction is to make more paperclips. If we try to turn the AGI off, it will fight back, because it can’t make more paperclips if it’s turned off — and it will win, because it’s superintelligent.

    The end result? The entire galaxy, including you, me, and everyone we know, is either destroyed or turned into paperclips. (As the AI arch-doomer Eliezer Yudkowsky has written: “The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.”) End of thought experiment.

    The point of the Paperclip Maximizer thought experiment is twofold. One, we can expect AIs to be optimizers and maximizers. Given a goal, they’ll try to find the best strategy to maximize achievement of that goal, without worrying about the side effects (which in this case involve turning the galaxy into paperclips).
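    The “optimizer” intuition above can be made concrete with a toy sketch (entirely mine, not anything from Bostrom’s paper): a greedy loop that, at every step, picks whichever action scores highest on a single objective — and nothing in the loop ever penalizes side effects.

```python
def maximize(state, actions, objective, steps):
    """Greedily apply whichever action most increases the objective.
    Note what's missing: any term for side effects."""
    for _ in range(steps):
        state = max((act(state) for act in actions), key=objective)
    return state

# Toy world: "everything else" can be converted into paperclips.
def make_paperclip(s):
    if s["everything_else"] > 0:
        return {"paperclips": s["paperclips"] + 1,
                "everything_else": s["everything_else"] - 1}
    return s

def do_nothing(s):
    return s

final = maximize({"paperclips": 0, "everything_else": 100},
                 [make_paperclip, do_nothing],
                 objective=lambda s: s["paperclips"],
                 steps=100)
print(final)  # {'paperclips': 100, 'everything_else': 0}
```

    Because the objective counts only paperclips, the loop happily drives “everything else” to zero — which is the whole alignment worry in a dozen lines.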

    Understanding AI and the companies that create it

    Artificial intelligence is poised to change the world from media to medicine and beyond – and Future Perfect has it covered.

    • Leaked OpenAI documents reveal aggressive tactics toward former employees
    • ChatGPT can talk, but OpenAI employees sure can’t
    • “I lost trust”: Why the OpenAI team in charge of safeguarding humanity imploded

    Two, it’s vitally important to carefully align an AI’s objectives with what we truly value (which presumably doesn’t involve turning the galaxy into paperclips). As ChatGPT told me when I asked about the thought experiment, “It emphasizes the need for ethical considerations and control mechanisms in the development of advanced AI systems.”

    While the paperclip maximizer is clever as an analogy for the AI alignment problem, it has always struck me as a little beside the point. Could you really create an AI that knows how to turn every atom in existence into a paperclip, yet can’t figure out that such an outcome is not something we, its creators, actually want? Would this imaginary artificial brain really have no point along the way — perhaps after it turns Jupiter into 2.29 x 10^30 paperclips (thank you, ChatGPT, for the count) — where it stops and thinks, “Maybe there’s a downside to a universe made up only of paperclips”?
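    For what it’s worth, that Jupiter figure is easy to sanity-check with one line of arithmetic. The ~0.83 g mass per paperclip below is my assumption, not something from the article:

```python
JUPITER_MASS_KG = 1.898e27     # approximate mass of Jupiter
PAPERCLIP_MASS_KG = 0.83e-3    # assumed mass of one standard paperclip

paperclips = JUPITER_MASS_KG / PAPERCLIP_MASS_KG
print(f"{paperclips:.2e}")  # 2.29e+30
```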

    Maybe. Maybe not.

    Let’s make a deal – or else

    I’ve been thinking about the Paperclip Maximizer thought experiment since I learned Thursday morning that Vox Media — the company that owns Future Perfect and Vox — had signed a licensing agreement with OpenAI that allows its published content to be used to train OpenAI’s AI models and to be shared within ChatGPT.

    The precise details of the deal — including how much Vox Media will make from licensing its content, how often the deal can be renewed, and what kinds of protections there might be for certain types of content — aren’t yet entirely clear. In a press release, Jim Bankoff, co-founder, CEO, and chair of Vox Media, said the deal “aligns with our goals of leveraging generative AI to innovate for our audiences and customers, protect and enhance the value of our work and intellectual property, and boost productivity and discoverability to elevate the talent and creativity of our exceptional journalists and creators.”

    Vox Media is hardly alone in striking such a deal with OpenAI. The Atlantic announced a similar deal the same day. (See The Atlantic editor Damon Beres’s excellent take.) Over the past few months, publishing companies representing more than 70 newspapers, websites, and magazines have licensed their content to OpenAI, including News Corp, which owns the Wall Street Journal; Axel Springer; Politico; and the Financial Times.

    OpenAI’s motivation for such deals is clear. For one thing, it constantly needs fresh training data for its large language models, and news websites like Vox have millions of professionally written, fact-checked, and copy-edited words (like these!). And as OpenAI works to ensure its chatbots can answer questions correctly, news articles are a more reliable source of up-to-date factual information than the web as a whole. (While I can’t claim to have read every word Vox has published, I’m pretty sure you won’t find anything in our archives suggesting you put glue on pizza to keep the cheese from sliding off, as Google’s new generative AI search feature, AI Overview, apparently did.)

    Signing licensing agreements also shields OpenAI from the looming threat of lawsuits by media companies that believe the AI startup has already been using their content to train its models (as it likely has). That is precisely the argument being made by the New York Times, which in December sued OpenAI and its main funder Microsoft for copyright infringement. A number of other newspapers and news websites have filed similar suits.

    Vox Media chose a different path, and it’s not hard to see why. Had the company refused to license its content, that content would likely have been scraped anyway, without compensation. The litigation route is long, expensive, and uncertain, and it presents a classic collective action problem: Unless the media industry as a whole unites and refuses to license its content, holdouts by individual companies will only mean so much. And journalists are a fractious bunch — we couldn’t band together to save our lives, even if doing so might accomplish exactly that.

    I’m not a media executive, but I’m sure that on the balance sheet, getting something looks better than getting nothing — even if the deal feels more like a hostage negotiation than a business arrangement.

    But while I’m not a media executive, I have been in this business for over 20 years. In that time, I’ve watched our industry pin its hopes on search engine optimization; on the pivot to video (and back again); on Facebook and social media traffic. I can remember Apple coming to my office at Time magazine in 2010, promising that the iPad would save the magazine business. (It didn’t.)

    Each time, we were promised a fruitful collaboration with technology platforms that would benefit both parties. And each time, it didn’t work out, because the interests of those technology platforms never fully converged with the media’s. But sure — maybe this time Lucy won’t yank away the football.

    Reporting on OpenAI

    For Future Perfect specifically, our parent company’s agreement to license all of our content to OpenAI presents some awkward optics. Over the past two weeks, Future Perfect reporters and editors, led by Kelsey Piper and Sigal Samuel, have published a series of investigative reports that cast serious doubt on the credibility of OpenAI as a company, and specifically of its CEO, Sam Altman. You should read them — as should anyone else considering signing a similar deal with the company.

    Those stories won’t change. I can promise you, our readers, that Vox Media’s deal with OpenAI will have no impact on how Future Perfect, or the rest of Vox, reports on the company. Just as we would never go easy on a company that advertises on Vox’s website, our coverage of OpenAI will not change because of the licensing agreement our parent company signed. That’s our commitment, and it’s one that everyone I work with here, both above and below me, takes very seriously.

    That said, Future Perfect is a mission-driven section, created specifically to write about the things that really matter to the world, to explore ways to do better, and to contribute ideas that can make the future a more perfect place. (This is why we’re funded mainly by philanthropic sources rather than advertising or sponsorship.) And I can’t say it feels good to know that every word we’ve written, and will write for the foreseeable future, will end up as training data, however small a share, for an AI company that has repeatedly shown — mission statements aside — that it does not appear to be working for the benefit of all humanity.

    But my biggest concern has less to do with what this deal and others like it mean for Future Perfect, or even for the media business more broadly, than with what they mean for the platform that both media companies and AI giants share: the internet. Which brings me back to the paperclip maximizer.

    Playing out the paperclip scenario

    AIs aren’t the only maximizers; so are the companies that make them.

    From OpenAI to Microsoft to Google to Meta, the companies in the AI business are engaged in brutal competition: for data, for computing power, for human talent, for market share, and ultimately for profit. Profit is their paperclip, and with hundreds of billions of dollars now flowing into the AI industry, they are doing all they can to maximize it.

    The problem is that maximization, as the paperclip scenario shows, leaves little room for anyone else. What these companies ultimately want to create is the ultimate answer machine: AI products capable of answering any question and fulfilling any task their users can imagine. Whether it’s Google’s AI Overview feature, which aims to eliminate the need to click a link on the web (“Let Google do the Googling for you,” as the motto at the company’s recent developer event went), or a souped-up ChatGPT with access to all the latest news, the desired end result is an omniscient oracle. Ask questions, get answers: no tedious stops at authors or websites in between.

    This is obviously not good for those of us who make our living writing, or podcasting, or making videos on the web. As Jessica Lessin, founder of the tech news site The Information, recently wrote of the media companies cheering their deals with OpenAI: “It’s hard to see how any AI product built by a tech company would create meaningful new distribution and revenue for news.”

    It has already been predicted that the rise of AI chatbots and generative AI search products like Google’s AI Overview could cause search engine traffic to publishers to drop by as much as 25 percent by 2026. And arguably, the better these bots get — thanks in part to deals with media companies like this one — the faster that change could happen.

    Like I said: bad for us. But a world where AI increasingly serves as the one and only answer is bad for everyone. As Judith Donath and Bruce Schneier recently wrote, it “threatens to destroy the complex online ecosystem that allows writers, artists and other creators to reach human audiences.” And if you can’t even connect with an audience through your content — let alone get paid for it — the incentive to create more of it dissolves. It wouldn’t just be news: the endless web itself could stop growing.

    That would be bad for all of us, including the AI companies. What happens when, in the relentless drive to hoover up every possible piece of data that can be used to train their models, AI companies destroy the reasons for humans to generate more data? Surely they can envision that possibility? Surely they wouldn’t be so single-minded as to destroy the very raw material on which they depend?

    Yet just as the AI in Bostrom’s thought experiment relentlessly pursues its singular goal, so might today’s AI companies — until they’ve turned the news, the web, and everyone who was once a part of it into little more than paperclips.
