    How AI’s booms and busts are a distraction


A photo of GPT-4o, seen on May 14, 2024. CG/VCG via Getty Images

What does it mean for AI safety if this whole AI thing is a bit of a bust?

“All this hype and no substance?” It’s a question more people have been asking lately about generative AI, pointing out that model releases have been delayed, that commercial applications have been slow to materialize, that the success of open source models makes it harder to make money off proprietary ones, and that this whole enterprise costs an awful lot of money.

I think a lot of the people declaring an “AI bust” don’t have a firm grasp of the full picture. Some of them have been insisting all along that there’s nothing to generative AI as a technology, a view badly out of step with the many people who actually use these tools and get real value out of them.

And I think some people have a frankly naive view of how fast commercialization should happen. Even for an incredibly valuable and promising technology that will ultimately be transformative, it takes time between invention and when someone delivers a wildly popular consumer product built on it. (Electricity, for example, took decades between invention and truly widespread adoption.) “The killer app for generative AI hasn’t been invented yet” seems true, but that’s not a good reason to assure everyone it won’t be invented any time soon.

But I think there’s a sober “case for a bust” that doesn’t depend on misunderstanding or underestimating the technology. It seems plausible that the next rounds of ultra-expensive models will still fail to solve the hard problems that would make them worth their billions in training costs, and if that happens, we’re likely in for a quieter stretch: more iteration and improvement on existing products, fewer blockbuster new releases, and less breathless coverage.

If so, that would likely have a huge effect on attitudes toward AI safety, even though in principle the case for AI safety doesn’t depend on the AI hype of the last few years.

The basic case for AI safety is one I’ve been writing about since long before ChatGPT and the recent AI frenzy. The simple version is this: there’s no reason to think AI models that can reason as well as humans, and much faster, are impossible, and we know such models would be enormously commercially valuable if developed. And we know it would be very dangerous to develop and release powerful systems that can act independently in the world without the oversight and supervision that we don’t actually know how to provide.

Many of the technologists working on large language models believe that systems powerful enough to take these safety concerns from theory to the real world are just around the corner. They might be right, but they might also be wrong. The take I sympathize with most comes from engineer Alex Irpan: “There’s a low chance the current paradigm [just building bigger language models] gets all the way there. The chance is still higher than I’m comfortable with.”

It’s probably true that the next generation of large language models won’t be powerful enough to be dangerous. But many of the people working on them believe it will be, and given the enormous consequences of uncontrolled powerful AI, the chance isn’t so small that it can be flippantly dismissed, and that makes some oversight warranted.

How AI safety and AI hype intertwine

In reality, even if the next generation of large language models isn’t much better than what we currently have, I expect that AI will still transform our world, just more slowly. Many ill-conceived AI startups will go out of business and many investors will lose money, but people will keep improving our models at a fairly rapid pace, making them cheaper and ironing out their most annoying shortcomings.

Even the most vocal skeptics of generative AI, like Gary Marcus, tend to tell me that superintelligence is possible; they just expect it to require a new technological paradigm, some way of combining the power of large language models with other approaches that counter their deficiencies.

Although Marcus identifies as an AI skeptic, it’s often hard to find significant differences between his views and those of someone like Ajeya Cotra, who thinks that powerful intelligent systems may be language-model powered in a sense analogous to how a car is engine powered, but will have lots of additional processes and systems to convert their outputs into something reliable and usable.

The people I know who worry about AI safety often hope that things will go this way. It would mean a little more time to better understand the systems we’re building, time to see the consequences of deploying them before they become overwhelmingly powerful. AI safety is a suite of hard problems, but not unsolvable ones. Given some time, maybe we’ll solve them all.

But my impression of the public conversation around AI is that many people believe “AI safety” is a specific worldview, one that is inextricable from the AI fever of the last few years. “AI safety,” as they understand it, is the claim that superintelligent systems are going to be here in the next few years, the view that animates Leopold Aschenbrenner’s “Situational Awareness” and that is reasonably common among AI researchers at top companies.

If we don’t get superintelligence in the next few years, I expect to hear a lot of “it turns out we didn’t need AI safety after all.”

    Keep your eye on the big picture

If you’re an investor in AI startups today, it matters deeply whether GPT-5 is delayed six months or whether OpenAI’s next funding round happens at a reduced valuation.

If you’re a policymaker or a concerned citizen, though, I think you ought to keep a bit more distance than that, and to separate the question of whether current investors’ bets will pay off from the question of where we’re headed as a society.

Whether or not GPT-5 is a powerful intelligent system, a powerful intelligent system would be commercially valuable, and there are thousands of people working from many different angles to build one. We should be thinking about how we’ll approach such systems and how we’ll ensure they’re developed safely.

If one company loudly declares it’s going to build a powerful, dangerous system and fails, the takeaway shouldn’t be “I guess we don’t have anything to worry about.” It should be “I’m glad we have a bit more time to figure out the best policy response.”

As long as people are trying to build extremely powerful systems, safety will matter, and the world can’t afford either to be blinded by the hype or to reflexively dismiss it in backlash.
