I agree directionally with this but it drives me nuts how much special pleading there is about what high-profile companies like OpenAI do, vs. what low-profile industry giants like Cisco and Oracle have been doing for a whole generation. The analysis can't credibly start and end with what OpenAI is doing.
I missed that the article is talking about Gulf monarchy autocrats instead of U.S. autocrats.
That is very simple: First, dumping graphics cards on trusting Saudi investors seems like a great idea for Nvidia. Second, the Gulf monarchies depend on the U.S. and want to avoid Islamic revolutions. Third, they hopefully use solar cells to power the data centers.
Will they track users? Of course, and GCHQ and the NSA can have intelligence-sharing agreements that circumvent their local laws. There is nothing new here. Just don't trust your thoughts to any SaaS service.
I would have thought it obvious that LLMs' primary usefulness is as force-multipliers of the messaging sent out into a society. Each member of hoi polloi will be absolutely cocooned in thick blankets of near-duplicative communications and interactions most of which are not human. The only way to control the internet, you see, proved to be to drown it out.
Building anything for autocrats probably isn't good for democracy, tbh. If you want democracy to be healthy, you probably want to maximize the amount of wealth of the working class. People who have enough to care for themselves and their families and their communities a little will have enough time and education to meaningfully participate in democracy.
Whether you are building for US autocrats, gulf state autocrats, Russian autocrats, whatever... maybe it's better to not do that? (I know, easier said than done.)
I would say the implication runs in the opposite direction. Any sufficiently powerful technology can turn any government autocratic/totalitarian. Having everything under control is the greatest temptation for people in power, and it gives birth to horrible governance of countries and people.
If you do business with the rest you will just strengthen inhumane regimes, which unfortunately not only kill their own people but also attack their neighbors (see Ukraine, and maybe soon Taiwan).
Remember how the central idea of Orwell's 1984 was that the telescreens in everyone's home were also watching at all times, with someone behind the device actually understanding what they see?
That last part was considered dystopian: there can't possibly be enough people to watch and understand every other person all day long. Plus, who watches the watchers? 1984 remained just a scary fantasy because there was no practical way to implement it.
For the first time in history, the new LLM/GenAI makes that part of 1984 finally realistic. All it takes is a GPU per household for early alerting of "dangerous thoughts", which is already feasible or will soon be.
The fact that one household can be allocated only a small amount of compute, enough to run only basic, narrow intelligence, is actually *perfect*: an AGI could at least theoretically side with the opposition by listening to both sides and researching the big picture of events, but a one-track LLM agent has no ability to do that.
I can find at least six companies, including OpenAI and Apple, reportedly working on always-watching household devices backed by the latest GenAI. Watching your whole recent life is necessary to have enough context to meaningfully assist you from a single phrase. It is also sufficient to know who you'll vote for, which protest you might attend before it's even announced, and what the best way is to intimidate you into staying out. The difference is like that between a nail-driving tool and a murder weapon: both are the same hammer.
During the TikTok-China campaign, there were a bunch of videos of LGBT people reporting how quickly TikTok figured out their sexual orientation: without them liking any videos, following anyone, or giving any traceable profile information at all. Sometimes before the young person had admitted it to themselves. TikTok figures it out simply by seeing how long the user stares at what: spending much more time on boys' gym videos than on girls', or vice versa, is already enough. I think that was used to scare people about how much China can figure out about Americans from app usage alone.
Well, if that scares anyone, how about this: an LLM-backed device can already do much more just by seeing which TV shows you watch, which parts of them make you laugh, and which comments you make to the person next to you. It probably doesn't even need to be multimodal: pretty sure subtitles and speech-to-text will already do it. Your desire to oppose the upcoming authoritarian can be figured out even before you admit it to yourself.
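The dwell-time signal described above is embarrassingly easy to compute. A minimal sketch (the categories, durations, and scoring are invented here purely for illustration; real recommender systems use far richer models):

```python
# Toy sketch: inferring an interest purely from how long a user lingers
# on each item, with no likes, follows, or profile data at all.
from collections import defaultdict

def preference_from_dwell(events):
    """events: list of (category, seconds_watched) pairs from one session."""
    totals = defaultdict(float)
    for category, seconds in events:
        totals[category] += seconds
    total = sum(totals.values())
    # Share of attention per category; the largest share is the inferred interest.
    shares = {c: s / total for c, s in totals.items()}
    return max(shares, key=shares.get), shares

# Hypothetical session log: dwell time alone is already revealing.
session = [
    ("gym_videos_men", 41.0), ("cooking", 3.5), ("gym_videos_women", 2.0),
    ("gym_videos_men", 55.0), ("news", 4.0), ("gym_videos_men", 38.0),
]
inferred, shares = preference_from_dwell(session)
```

Even this crude attention-share heuristic picks out the dominant category after a handful of views; the point is how little data is needed, not the sophistication of the model.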
While Helen Toner (the author) is worried about democracies on the opposite end of the planet, the stronghold of democracy may well be nearing the last two steps toward the first working implementation of an Orwellian society:
1. convince everyone to have such device in their home for our own good (in progress)
2. intimidate/seize the owning company to use said devices for not our own good (TODO)
I spent 6 months in Saudi in 2022 and 2 months in 2023, went to the UAE, and did business with both countries. They've been on this wave like the rest of the world. The UAE mandated AI education in its schools, is acquiring and training talent, and has huge, huge amounts of money invested in development on all fronts, from education to engineering to application, for its 2031 initiative. The AI genie is out of the bottle. OpenAI is a hybrid non-profit/for-profit company, in that it has for-profit LLCs managed by the non-profit, so this gets murky; see OpenAI Holdings, LLC and OpenAI Global, LLC. I don't think you can monopolize a technology such as AI when computing hardware is expanding in capability, cost is going down, and research papers, YouTube videos, books, and all media available on the internet or elsewhere can be read immediately. The Saudi Digital Library is the largest digital library in the Arab world. Is China's AI better than the West's? They've certainly implemented a lot of real-world installations and applications of it, but I am not sure how it compares to the NSA pre- and post-Snowden.
I have a question. In what sense is OpenAI going to assist the UAE in building large-scale data centers suitable for machine learning workloads? Do they have experience and expertise doing that?
This is the great filter upon us more than anything else, even nuclear armageddon.
Virtually every "democracy" has a comprehensive camera monitoring system, taps into comm networks, has access to the full social graph and to whatever you buy, knows all your finances, and if you take measures to hide any of it... knows that you do that.
Previously, the firehose of information being greater than governments' capacity to process it was our saving grace from turnkey totalitarianism.
With AI, it's right there. A simple button push. And unlike nuclear weapons, it can be activated without any immediate klaxon sounding. It can be ratcheted up like a slow boil, if they want to be nice.
Oh did I forget something? Oh right. Drones! Drones everywhere.
Oh wait, did I forget ANOTHER thing? Right, right: everyone has mobile devices tracking their locations, with remotely activatable cameras and microphones.
Better do business with UAE and reap the benefits, than let the benefits eventually go to China.
Trying to suppress the Middle East forever obviously hasn't worked, so this is just realpolitik, with the obvious right choice being what is being done now, imho. The Saudis are going to be autocratic in any case; this is just good Hearts of Iron gameplay in real life.
It's not about democracy in the Middle East at all. China is more democratic than those countries. It's about containing China's rise as it grows into a direct rival to US/Western hegemony. That would be an affront to democracy.
The biggest danger of AI isn't that it will revolt but that it'll allow dictators and other totalitarians complete control over the population.
And I mean total. A sufficiently advanced algorithm will be able to find everything a person has ever posted online (by cross referencing, writing style, etc.) and determine their views and opinions with high accuracy. It'll be able to extrapolate the evolution of a person's opinions.
The government will be able to target dissidents even before they realize they are dissidents, let alone before they have time to organize.
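The cross-referencing-by-writing-style idea above is not speculative; it is basic stylometry. A minimal sketch of the principle (the author names and text samples are made up, and real attribution systems use far richer features than raw character trigrams):

```python
# Toy stylometry: match an anonymous post to a known author by comparing
# character-trigram frequency profiles with cosine similarity.
from collections import Counter
from math import sqrt

def trigram_profile(text):
    text = text.lower()
    return Counter(text[i:i + 3] for i in range(len(text) - 2))

def cosine_similarity(p, q):
    dot = sum(p[t] * q[t] for t in set(p) & set(q))
    norm = sqrt(sum(v * v for v in p.values())) * sqrt(sum(v * v for v in q.values()))
    return dot / norm if norm else 0.0

# Hypothetical corpus of posts with known authors.
known = {
    "alice": "I reckon the whole affair was rather overblown, if you ask me.",
    "bob": "lol yeah no thats totally busted imo, whatever works works",
}
anonymous = "I reckon the committee rather overreacted, if you ask me."

scores = {name: cosine_similarity(trigram_profile(anonymous), trigram_profile(sample))
          for name, sample in known.items()}
best = max(scores, key=scores.get)  # author whose style most resembles the post
```

With two short samples this is a parlor trick, but scaled to millions of posts and modern feature extraction, the same idea is what makes "find everything a person has ever posted" plausible.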
Because of this, I agree that it is a very good thing for the US to have more AI compute, including more AI supercomputers, than China. But if you take this argument seriously at all, the implication of the UAE deal becomes plain: this is a significant power boost for the UAE’s autocratic government.
This is a wonderful exposition of how realpolitik allows one to have one's anti-autocracy cake and eat it too.
The article asks what is the meaning of OpenAI's statement that "the UAE will become the first country in the world to enable ChatGPT nationwide."
My first guess would be that it will be a geo-fenced service, in particular UAE residents will have (subsidised) access to it and perhaps not to the global service, and it will have a system prompt designed and tuned in consultation with the UAE government.
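If that guess is right, the plumbing would be unremarkable. A hypothetical sketch of geo-fencing plus a region-specific system prompt (the region codes, prompt text, and routing logic here are invented for illustration and are not OpenAI's actual design):

```python
# Hypothetical request router for a geo-fenced deployment with a
# regionally tuned system prompt prepended to every conversation.
REGIONAL_SYSTEM_PROMPTS = {
    "AE": "You are a helpful assistant. Follow UAE content regulations ...",
}
DEFAULT_PROMPT = "You are a helpful assistant."
ALLOWED_REGIONS = {"AE"}  # regions where this nationwide deployment is enabled

def route_request(region_code, user_message):
    if region_code not in ALLOWED_REGIONS:
        raise PermissionError(f"Service not enabled in region {region_code}")
    system_prompt = REGIONAL_SYSTEM_PROMPTS.get(region_code, DEFAULT_PROMPT)
    # Downstream, a chat model would be called with this message list.
    return [{"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message}]
```

The interesting policy questions would then all live in that system prompt and in whatever the allow-list encodes, not in the model itself.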
How likely are believers in Super AGI for the good of the human race to be worried about downsides of enabling them for dodgy regimes in the short term?
Sam Altman's lack of scruples notwithstanding, I find the distinction between democracies (USA, Israel, India) that oppress, occupy, and murder hundreds of thousands, and autocratic regimes (Saudis, China, N. Korea) that censor, imprison, or execute a few dozen of their opponents, to be a distinction without a difference.
A sizable portion of the US population voted for Trump to address the issue of illegal immigration. People who keep denying the reality of other people’s views that aren’t aligned with their own cannot be taken seriously, especially when they’re talking about democracy. Your political opponents winning democratically and enacting their agenda is not autocracy.
I do not find her critique of argument #2 compelling [1]. Monetization of AI is key to economic growth. She's focused on the democratic aspects of AI, which frankly aren't pertinent. The real "race" in AI is between economic and financial forces, with huge infrastructure investments requiring a massive return on investment to justify the expense. From this perspective, increasing the customer base and revenue of the company is the objective. Without this success, investment in AI will drop, and with it, company valuations.
The essay attempted to mitigate this by noting OAI is nominally a non-profit. But it's clear the actions of the leadership are firmly aligned with traditional capitalism. That's perhaps the only interesting subtlety of the issue, but the essay missed it entirely. The omission could not have been intentional, because it provides a complete motivation for item #2.
[1] #2 is 'The US is a democracy and China isn’t, so anything that helps the US “win” the AI “race” is good for democracy.'
I appreciate that people are thinking about these things, but I still can't take the idea seriously that transformers represent a threat to democracy. Maybe with a massive enough supercomputer a country could run an AI IDE capable of end-to-end writing a device driver in Rust - but even that's not a given. Certainly, it's almost meaningless in the face of building our lives around a network of personal surveillance devices that we literally never part with. I'm just saying... ChatGPT is the least of our problems.
I would not do business with Kim Jong Un. He is murdering a lot of his own people. Or with Putin. He is murdering a lot of Ukrainians.
But guess what: both North Korea and Russia are under sanctions. You can't do business with them anyway.
But the UAE is not under sanctions, which means that in the opinion of the US Government it is OK to do business with them. Then who is OpenAI to say otherwise? Why should it be any of their concern to determine who is a good guy or a bad guy in the world? Shouldn't there be a division of responsibilities? Let the Department of State determine who is good and who is bad, and let companies do business with those who are not on the sanctions list.
If democracy builds supercomputers (and bombs, propaganda, prisons) for autocrats, of what good is democracy? The evidence points strongly to democracy and autocracy being friends, even playing "good cop, bad cop".
I have a bit of a problem with TFA's implication that Western countries don't do the exact same things, if to a lesser degree. We need to get over this dumb Eurocentric idea that we have the best system of government and the entire world suffers for lack of it. I'm reminded of the "Arab Spring", when journalists and politicians were praising social media as a "democratizing force", only to clamp down on free speech with the "mis/dis/malinformation" slander about 5 years later.
Building supercomputers for autocrats probably isn't good for democracy (helentoner.substack.com)
444 points by rbanffy, 8 June 2025 | 258 comments

Comments
https://en.wikipedia.org/wiki/Chris_Lehane
Fixer par excellence!
But that is not the issue right now.
Right now, the issue is that the power of ubiquitous surveillance crossed with contemporary compute is unprecedented and new.
LA is burning and the National Guard has been called on protestors over the Governor's objections.
The fate of the nation and by extension the first world is what's at stake here.
There is no deeper altruistic move here. It's capitalism at its best or worst, depending on the observer.
You have already killed ten million people and you still do not have enough? How bloodthirsty are you?