In 2020, when Joe Biden won the White House, generative AI still looked like a meaningless toy, not a world-changing new technology. The first major AI image generator, DALL-E, wouldn’t be released until January 2021 — and it certainly wasn’t going to put any artists out of business, as it still struggled to create basic images. The release of ChatGPT, which took AI mainstream overnight, was still more than two years away. AI-based Google search results — which now seem inevitable, like it or not — would have been unimaginable.
In the world of AI, four years is a lifetime. This is one of the things that makes AI policy and regulation so difficult. The gears of policy continue to grind slowly. And every four to eight years, they grind in reverse, when a new administration comes to power with different priorities.
That works tolerably well in, say, food and drug regulation, or in other areas where change is slow and there is more or less bipartisan consensus on policy. But when regulating a technology that is essentially too young to enter kindergarten, policymakers face a daunting challenge. That’s all the more true given the sharp shift in US policymakers that follows Donald Trump’s victory in Tuesday’s presidential election.
This week, I reached out to people to ask: What will AI policy look like under the Trump administration? Their guesses were all over the place, but the overall picture is this: Unlike many other issues, Washington is not yet fully polarized on the question of AI.
Trump’s supporters include members of the accelerationist tech right, led by venture capitalist Marc Andreessen, who fiercely oppose regulation of an exciting new industry.
But also on Trump’s side is Elon Musk, who supported California’s SB 1047 to regulate AI, and who has long worried that AI will spell the end of the human race (a position that’s easy to dismiss as classic Musk zaniness, but is actually quite mainstream).
Trump’s first administration was chaotic, featuring the rise and fall of various chiefs of staff and top advisers. Few of those who were close to him at the start of his first term were still there at the bitter end. Where AI policy goes in his second term may depend on who has his ear at the crucial moment.
Where the new administration stands on AI
In 2023, the Biden administration issued an executive order on AI, which, while generally modest, marked the first government effort to take AI risk seriously. Trump’s campaign platform said this executive order “stifles AI innovation and imposes far-left ideas on the development of this technology,” and vowed to repeal it.
“Biden’s executive order on AI will probably be repealed one day,” Samuel Hammond, a senior economist at the Foundation for American Innovation, told me, adding, “It’s uncertain what will replace it.” The AI Safety Institute created under Biden, Hammond noted, has “broad, bipartisan support” — though it would be Congress’s responsibility to formally authorize it and fund it appropriately, something lawmakers can and should do this winter.
A draft of a proposed replacement executive order circulating in Trump’s orbit would create a “Manhattan Project” for military AI and establish industry-led bodies for model evaluation and safety.
Beyond that, though, it’s challenging to predict what will happen, because the coalition that brought Trump to office is sharply divided on AI.
“How Trump approaches AI policy will provide a window into the tensions on the right,” Hammond said. “You have people like Marc Andreessen who want to slam the gas pedal and people like Tucker Carlson who worry technology is already moving too fast. JD Vance is a realist on these matters, seeing AI and crypto as an opportunity to break Big Tech’s monopoly. Elon Musk wants to accelerate technology in general while taking existential risks from AI seriously. They are all united against ‘woke’ AI, but their positive agenda on how to manage the real-world risks of AI is less clear.”
Trump himself hasn’t commented much on AI, but when he has — as in an interview with Logan Paul earlier this year — he seemed familiar with both the “race to stay ahead of China” perspective and expert fears of doom. “We have to stay ahead,” he said. “It’s going to happen. And if it’s going to happen, we have to take the lead over China.”
On the risk of creating AI that acts independently and seizes control, he said, “You know, there are people who say it takes over from humanity. It’s really powerful stuff, AI. So let’s see how it all works out.”
In one sense, that’s an incredibly irrational attitude to take toward the literal possibility of the end of the human race — you don’t just “see how it all works out” with an existential threat — but in another sense, Trump is pretty much voicing the mainstream view here.
Many AI experts believe that the possibility of AI taking over from humanity is real and could materialize within the next few decades, and they also think we still don’t know enough about the nature of that risk to make effective policy around it. So implicitly, a lot of people hold a position of “It could kill us all, who knows? I guess we’ll see what happens” — and Trump, as he so often does, is unusual mostly for just coming out and saying it.
We cannot afford polarization. Can we avoid it?
There has been plenty of Republican rhetoric dismissing AI equity and bias concerns as “woke” nonsense. But as Hammond observes, there is also a fair bit of bipartisan consensus. No one in Congress wants to see the United States fall behind militarily, or to strangle a promising new technology in its cradle. And no one wants random tech companies developing extremely dangerous weapons without any oversight.
Meta’s chief AI scientist Yann LeCun is an outspoken Trump critic who is also an outspoken critic of AI safety concerns. Musk supported California’s AI regulation bill — which was bipartisan, and was vetoed by a Democratic governor — and of course Musk also enthusiastically endorsed Trump for president. Right now, worry about extremely powerful AI doesn’t map neatly onto the political spectrum.
That is actually a good thing, and it would be disastrous if it changed. With a rapidly evolving technology, Congress needs to be able to make policy flexibly and empower an agency to carry it out. Polarization makes that impossible.
More than any specific item on the agenda, the best sign for the Trump administration’s AI policy will be if it remains bipartisan and focused on things all Americans, Democratic or Republican, agree on — like not wanting everyone to die at the hands of superintelligent AI. And the worst sign would be if AI’s complex policy questions got collapsed into a simplistic “regulation is bad” or “the military is good” framework, which misses the point.
Hammond, for his part, is optimistic that the administration is taking AI seriously. “They’re thinking about the right object-level issues, like the national security implications of AGI, which may be only years away,” he said. Whether that will lead them to the right policy remains to be seen — but that would have been highly uncertain in a Harris administration, too.