
Someone threw a molotov cocktail at Sam Altman’s house this weekend. The house is fine. Altman is fine. Altman’s family is fine. The suspect appears to be an Existential Risk guy — someone who read Eliezer Yudkowsky’s book and took it too seriously.
Political violence is bad and counterproductive. One of the reliable, immediate impacts of political violence is a rush to vocally discredit whatever movement the political violence is associated with. So it matters, in moments like this, what conceptual boundaries we place around this phenomenon.
In a substack post yesterday, tech writer Jasmine Sun cast it in terms that I find deeply frustrating. The post is titled “AI Populism’s Warning Shot.” Here’s the key passage:
I define AI populism as a worldview in which AI is viewed not only as a normal technology but as an elite political project to be resisted. It regards AI as a thing manufactured by out-of-touch billionaires and pushed onto an unwilling public to achieve sinister aims like “capitalist efficiency” (layoffs) and “population management” (surveillance). AI populists don’t really care whether ChatGPT is personally useful, or if Waymos eke out some safety gains: AI’s utility as a tool is immaterial relative to the unwelcome societal change it represents.
Among the public, AI populism shows up as individual attempts to block AI encroachment; for example, data center NIMBYism, AI witchhunts among creatives, and in the extreme, assassination attempts like what happened to Altman this week.
According to this definition, the 20-year-old throwing a molotov cocktail at a mansion gets lumped together with the community activists attending zoning board meetings to protest AI data center construction in their neighborhood. According to this definition, both I and Eliezer Yudkowsky (as people focused on AI’s unwelcome societal changes, rather than ChatGPT’s personal utility) are members of the same movement.
That’s ridiculous. Resist the urge to lump all forms of AI criticism into the same umbrella category. It obscures much more than it reveals.
I would sorely like to nip this in the bud before it becomes a thing.
I saw a similar mistake in 2016-2017, as the research community struggled to make sense of Trump and Brexit. Was Donald Trump better understood as a “populist” or as an authoritarian? He fits both definitions. And the built-in advantage of calling Trump a populist was that it provided cover. There are left-populists, like Bernie Sanders, and there are right-populists, like Donald Trump. Populism is a distinct rhetorical tradition. Researchers can talk about the dangers and benefits of “populism” without worrying that they will sound too partisan or outlandish. The disadvantage of treating Trump as a populist was that it obscured much more than it revealed. Trump and Bernie Sanders aren’t two sides of the same coin. There is safety and comfort in hiding what you mean, but there isn’t much clarity in it. It took years of internal scholarly debate before the field at large got comfortable just saying outright what we meant.
What a pointless waste of time that all was.
Here’s another relevant passage from Sun’s post:
In 2026, the politics of AI has a new meta: “caring a lot about AI” is no longer correlated with “knowing a lot about AI.” AI is rising in salience faster than any other issue among US voters. Politicians gearing up for the 2026 midterms and 2028 primaries won’t lag far behind. That means AI policy is no longer the remit of a few wonky technocrats. From now until forever, most people regulating, protesting, and talking about AI will not be interested in AI per se, but rather how it impacts their preexisting belief systems and political agendas. These forces are stronger, more diffuse, and more volatile than we have seen in AI policy before. And the curve is just about to shoot straight up.
This is, I think, correct on the merits. AI is now a significant chunk of the economy. The industry has succeeded in stuffing AI into every product and every consumer experience. And that means AI policy is no longer the remit of a small, expert policy community. That was an early times phenomenon. The early times are over. This is what success for the industry looks like.
All the more reason to avoid the instinct to lump all forms of AI resistance into the same category. The Yudkowsky fans throwing molotov cocktails are a distinct discursive community, operating far outside the boundaries of what we might call “normal politics.” The data center challenges are mostly coming from what I would label Sierra Club-types. These are people showing up to community board meetings, writing letters to their legislators, and making demands about industrial transparency and reporting, pollution, and energy prices.
These two variants have practically nothing in common. The Yudkowsky crowd literally staged a protest this February in favor of billionaires. The Sierra Club types are deeply distrustful of corporate power, and deeply committed to participatory democracy. Bundling them together under the heading of “AI populism” makes it even harder to reach analytic clarity. The sole thread tying them together is that both are a problem for the AI industry.

Yudkowsky’s friend group. They are all so weird. Photo by Abigail Van Neely, of MissionLocal
Altman responded to the molotov cocktail incident with a heartfelt plea to turn the temperature down. He blamed last week’s Ronan Farrow/Andrew Marantz article, “Sam Altman May Control Our Future — Can He Be Trusted.” This is a textbook strategic communication play: the molotov cocktail-thrower discredits the entire movement he is associated with, so Altman is associating him with absolutely everyone who voices concerns.
Tech writers shouldn’t be in the business of helping him paint with such a broad brush. There’s really no need to do so.
Oh, and by the way, the reason this matters so much, the reason why companies like OpenAI ought to be aggressively in favor of procedural democracy, is that there isn’t a “no backlash of any sort” option available. This isn’t solely the product of too much artificial general intelligence alarmism. This is, for the most part, the necessary byproduct of the size and scale of the industry. You don’t get to bite off this much of the global economy without facing serious questions about what comes out the other end (h/t Henry Farrell).
The options for the AI industry are (a) legitimate backlash or (b) illegitimate backlash. The mass public turns to violence when the avenues for legitimate, orderly contention are unavailable or non-functional. The Luddites smashed machines because it was illegal to form unions.
I suspect the next few years will see an awful lot of anti-data center activism. People are going to raise their voices and say “we don’t want these data centers raising energy prices here. No more giveaways to Musk/Altman/Zuckerberg.” OpenAI’s lobbyists and comms consultants will surely brand it the “new NIMBYism,” and “AI populism.” They’ll treat participatory democracy as a form of damage and try to route around it. I expect they’ll be quite cutthroat in their maneuvers.
That’s a mistake though. The friction of participatory democracy creates a pathway for legitimate resistance. If you do away with that friction, the illegitimate alternative you’re left with is firebombs. As I’ve written elsewhere, democracy is an incredibly good deal for elites, one that they ought to stop taking for granted.
There is a broad sense right now that tech billionaires run the world, entirely unconstrained by the public. This, to a great degree, is because they do. They bought the government, shredded the regulatory constraints, and treated neo-feudalist edgelords as political sages. It was a short-sighted maneuver, destined to fail. Silicon Valley ought to be more appreciative of the social stability provided by democracy. The alternatives are so much worse.