
    OpenAI as we knew it is dead


A blurry double-exposure photo of two versions of Sam Altman's head.

    OpenAI, the company that brought you ChatGPT, sold you out.

Since its inception in 2015, its leaders have said that their top priority is ensuring artificial intelligence is developed safely and beneficially. They have pointed to the company's unusual corporate structure as proof of the purity of its intentions: OpenAI was a nonprofit controlled not by its CEO or shareholders, but by a board with a single mission — keep humanity safe.

But this week, the news broke that OpenAI will no longer be governed by a nonprofit board. OpenAI is becoming a full-fledged for-profit benefit corporation. Oh, and CEO Sam Altman, who had previously insisted he held no equity in the company, will now receive equity worth billions, in addition to ultimate control over OpenAI.

In an announcement that hardly seems coincidental, Chief Technology Officer Mira Murati said she was leaving the company shortly before the news broke. Employees were so blindsided that many of them reportedly reacted to her abrupt departure with a "WTF" emoji on Slack.

WTF indeed.

The whole point of OpenAI was to be a nonprofit that put safety first. It began to shift away from that vision a few years ago when, in 2019, it created a for-profit arm so it could take in the massive investment it needed from Microsoft as the cost of building advanced AI climbed. But some of its employees and outside admirers still hoped the company would stick to its principles. That hope now looks dead.

“We can say goodbye to the original version of OpenAI that wanted to be unconstrained by financial obligations,” Jeffrey Wu, who joined the company in 2018 and worked on early models like GPT-2 and GPT-3, told me.

“The reorganization around a core for-profit entity formalizes what outsiders have known for some time: that OpenAI is seeking to profit in an industry that has drawn enormous investment over the past few years,” said Sarah Kreps, director of Cornell University’s Tech Policy Institute. The move, she added, is a departure from OpenAI’s “emphasis on safety, transparency, and decentralization of power.”

And if this week’s news is the final death knell for OpenAI’s lofty founding vision, it’s clear who killed it.

    How Sam Altman Became an Existential Risk to OpenAI’s Mission

When OpenAI was co-founded in 2015 by Elon Musk (along with Altman and others), who worried that AI could pose an existential threat to humanity, the fledgling research lab introduced itself to the world with these three sentences:

OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. Since our research is free from financial obligations, we can better focus on a positive human impact.

    All of that is now objectively false.

Since Altman took the helm of OpenAI in 2019, the company has been drifting away from its mission. That year, the company — originally a pure nonprofit — created a for-profit subsidiary so it could pull in the huge investments needed to build cutting-edge AI. But it did something unprecedented in Silicon Valley: it capped how much profit investors could make. They could earn up to 100 times what they put in, but beyond that, the money would flow to the nonprofit, which would use it for the public good. For example, it could finance a universal basic income program to help people adjust to automation-induced joblessness.

Over the next few years, OpenAI increasingly lost its focus on safety as it rushed to commercialize its products. By 2023, the nonprofit board had become so distrustful of Altman that it tried to oust him. But he quickly clawed his way back to power, using his relationship with Microsoft to stack a new board in his favor. And earlier this year, OpenAI’s safety team fell apart as employees lost confidence in Altman and left the company.

Now, Altman has taken the ultimate step to consolidate his power: he has stripped the board of its controlling role entirely. The board will still exist, but it will have no teeth.

    “It seems to me that the original nonprofit organization has been disempowered and its mission has been reinterpreted to align entirely with profit,” Wu said.

Profit could be what Altman thinks the company desperately needs. Despite a very confident blog post published this week, in which he claims AI will help “fix the climate, establish a space colony, and discover all of physics,” OpenAI is actually in a jam. It has struggled to find a clear route to financial success for its models, which cost millions – if not billions – to build. Restructuring the business as a for-profit could help it attract investors.

But the move has some observers — including Musk himself — asking: How can this possibly be legal?

If OpenAI removes the profit cap, it will redirect a huge amount of money — potentially billions of dollars down the line — from the nonprofit to investors. Because the nonprofit is there to represent the public, that effectively means diverting billions away from people like you and me. As some have noted, it looks a lot like theft.

“If OpenAI removed the profit caps from investments retroactively, it would transfer billions in value from a nonprofit to for-profit investors,” said Jacob Hilton, a former OpenAI employee who joined before the company transitioned from a nonprofit to its capped-profit structure. “Unless the nonprofit is properly compensated, it would be a money grab. In my view, this kind of thing would be inconsistent with OpenAI’s charter, which says OpenAI’s primary fiduciary duty is to humanity, and I don’t see how the law could allow it.”

But because OpenAI’s structure is so unprecedented, the legality of such a change may be genuinely murky. And that may be exactly what the company is counting on.

When asked for comment, OpenAI would only point to its statement to Bloomberg, where a company spokesperson said OpenAI remains “focused on building AI that benefits everyone,” adding that “the nonprofit is core to our mission and will continue to exist.”

The takeaway is clear: regulate, regulate, regulate

AI safety advocates argue that we need to pass regulations that would provide some oversight of big AI companies — such as California’s SB 1047, which Gov. Gavin Newsom must either sign into law or veto in the next few days.

    Now, Altman has nicely made their case for them.

    “The general public and regulators should be aware that by default, AI companies will be encouraged to ignore some of the costs and risks of deploying AI — and those risks are likely to be substantial,” Wu said.

Altman is also validating the concerns of former employees who released a proposal to give workers at major AI companies the “right to warn” about advanced AI. As the proposal puts it: “AI companies have strong financial incentives to avoid effective oversight, and we do not believe bespoke structures of corporate governance are sufficient to change this.”

Evidently, they were right: OpenAI’s nonprofit board was supposed to rein in the for-profit arm, and Altman has now overturned exactly that structure.

After years of sweet-talking the press, the public, and policymakers in Congress, assuring them all that OpenAI wants regulation and cares more about safety than money, Altman is no longer even bothering to play the game. He is showing everyone his true colors.

    Governor Newsom, are you watching this?

    Congress, are you watching this?

    World, do you see this?
