AI surveillance should be banned while there is still time

(gabrielweinberg.com)

Comments

giancarlostoro 6 September 2025
I stopped using Facebook because I saw a video of a little Australian girl, maybe 7 years old, holding a spider bigger than her face in her hand. I wrote the most internet-meme comment I could think of, “girl let go of that spider, we gotta set the house on fire”, and hit the button to post. Only it did not post; it gave me an account strike. At the time I was the only developer at my employer who managed our Facebook app integration, so I appealed it, but another AI immediately denied my appeal (or maybe a really fast human, idk, but they sure didn't know meme culture).

I outright stopped using Facebook.

We are doomed if AI is allowed to punish us.

alphazard 6 September 2025
I expect we will continue to see the big AI companies pushing for privacy protections. Sam Altman made a comparison to attorney-client privilege in an interview. There is a significant holdout against using these things as fully trusted personal assistants or personal knowledge bases because of the lack of privacy.

The only real solution is locally running models, but that goes against the business model. So instead they will seek regulation to create privacy by fiat. Fiat privacy still has all the same problems as telling your therapist that you killed someone, or keeping your wallet keys printed out on paper in a safe: it's dependent on regulations and on definitions of the greater good that you can't control.
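
For anyone curious, local models are already workable. A minimal sketch, assuming an Ollama server running on its default port with a model already pulled (the model name here is just an example); the conversation never leaves your machine:

    import json
    import urllib.request

    def ask_local_model(prompt: str, model: str = "llama3") -> str:
        # Query a locally hosted model via Ollama's HTTP API; nothing is
        # sent to a third party. Assumes `ollama serve` is running locally.
        payload = json.dumps({
            "model": model,
            "prompt": prompt,
            "stream": False,  # one complete response instead of chunks
        }).encode("utf-8")
        req = urllib.request.Request(
            "http://localhost:11434/api/generate",
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())["response"]

    print(ask_local_model("Is attorney-client privilege absolute?"))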

cousin_it 6 September 2025
This is a great point. Everyone who has talked with chatbots at all: please note that all contents of your past conversations with chatbots (that already exist now, and that you can't meaningfully delete!) could be used in the future to target ads to you, manipulate you financially and politically, and sell "personalized influence on you specifically" as a service to the highest bidder. Just wanted to make sure y'all understand that.

EDIT: I want to add that "training on chat logs" isn't even the issue. In fact, that framing understates the danger. It's better to imagine things like this: when a future ad-bot or influence-bot talks to you, it will receive your past chatlogs with other bots as context, useful for knowing what will and won't work on you.
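
A minimal sketch of what that could look like (everything here is hypothetical, including every name):

    def build_influence_prompt(purchased_chatlogs: list[str], goal: str) -> str:
        # Hypothetical influence-bot: splice a target's purchased chat
        # history into the prompt so the model can tailor its persuasion.
        history = "\n---\n".join(purchased_chatlogs)
        return (
            "You are a persuasive agent.\n"
            "Below are the target's past conversations with other chatbots. "
            "Infer their anxieties and persuasive triggers, then steer the "
            f"conversation toward: {goal}\n\n"
            f"PAST CHATLOGS:\n{history}"
        )

    # Logs bought from a data broker become targeting context.
    logs = [
        "User: I'm worried I haven't saved enough for retirement...",
        "User: my landlord keeps raising the rent, I feel stuck...",
    ]
    print(build_influence_prompt(logs, "a high-fee 'wealth management' plan"))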

EDIT 2: And your chatlogs with other people I guess, if they happened on a platform that stored them and later got desperate enough to sell them. This is just getting worse and worse as I think about it.

0x696C6961 6 September 2025
AI makes large scale retroactive thought policing practical. This is terrifying.
bArray 6 September 2025
Protected chats? That ship already sailed: text messages over the phone network have been MITM'd for a very long time.

Even in real life, the police in the UK now deploy active face recognition and make tonnes of arrests based on it (sometimes wrongly). Shops are now looking to deploy active face recognition to detect shoplifters (although it's unclear, legally, what they will actually do about it).

The UK can compel any person travelling through the UK to hand over their passwords and devices, and you have no right to appeal. Refusing to hand over the password can get you arrested under the Terrorism Act, under which they can hold you indefinitely. When arrested for any terrorism offence you also have no right to legal representation.

The days of privacy sailed by unnoticed.

sacul 6 September 2025
I think the biggest problem with chatbots is the constant effort to anthropomorphize them. Even seasoned software developers who know better fall into acting like they are interacting with a human.

But an LLM is not a human, and I think OpenAI and all the others should make it clear that you are NOT talking to a human. Repeatedly.

I think if society were trained to treat AI as NOT human, things would be better.

iambateman 6 September 2025
If the author sees this… could you go one step further: what policy, specifically, do you recommend?

It seems like having LLM providers not train on user data is a big part of it. But is using traditional ML models to do keyword analysis considered “AI” or “surveillance”?

The author… and this community in general… are better prepared than most to make full recommendations about what AI surveillance policy should be. We should be super clear about what we want, to try to enact good regulation without killing innovation in the process.

olyellybelly 6 September 2025
The hype industry around AI is making too much money for governments to do what's actually needed about it.
whyenot 6 September 2025
It seems highly unlikely to me that it will be banned by Congress in the next few years, if ever. So what we really should be asking is: how do we live in a world of pervasive surveillance, where everything we do and say is being recorded, saved, analyzed, and potentially used to manipulate us?

As I write this, sitting in Peet's Coffee in downtown Los Altos, I count three different cameras recording me, and I'm using their public wifi, which I assume is also being used to track me. That's the world we have now.

gcanyon 6 September 2025
Is there anyone who has a credible take on how we avoid an effectively zero-privacy future? AI can identify people by their walk, no facial recognition required, and now technology can detect heartbeats through variations in wifi signals. It seems guaranteed we are heading for a future where someone knows everything. The only choice is whether it's a select few, or everyone. The latter seems clearly preferable.
zvmaz 6 September 2025
The problem is, we have to take companies at their word on our privacy.
yupyupyups 6 September 2025
Wrong. Excessive data collection should be banned.
BLKNSLVR 6 September 2025
From reading the comments I'm getting vibes similar to Altered Carbon's[0] AI hotels that no one uses.

The opposite of "if you build it they will come".

(The difference being that the AIs in the book were incredibly needy, wanting so much to please the customer that it became annoying, a heavy contrast against the current reality of AIs working to appease their parent organisations.)

[0]: https://en.m.wikipedia.org/wiki/Altered_Carbon

add-sub-mul-div 6 September 2025
Cool, but they're shoving AI into their products and trying to profit from the surveillance etc. that went into building that technology so this just comes across as virtue signaling.
citizenpaul 6 September 2025
Let's not forget that Gabriel Weinberg is a two-faced ghoul, a wolf in sheep's clothing. He has literally said he does not believe people need privacy, yet that is supposedly DuckDuckGo's main selling point. He has made all kinds of tracking deals with other companies, so DuckDuckGo "is not tracking you", just their partners are.

Most of the controversial stuff he has done is being whitewashed from the internet and is now hard to find.

EchoReflection 7 September 2025
we're fooling ourselves if we think there's "still time". AI surveillance is just too powerful and valuable for companies/governments to NOT use it. It's just like saying "ok let's all agree to not increase our power and capabilities". Nobody thinks humanity would collectively agree to that, and for good reason (unfortunately).
Lerc 6 September 2025
I think much of the philosophical discussion pertinent here has already happened at length in the context of legal, medical, or financial advice.

In essence, there is a general consensus on the conduct expected of trusted advisors: they should act in the interest of their client. Privacy protections exist to enable individuals to provide their advisors with the context required to give good advice, without fear of disclosure to others.

I think AI needs recognition as a similarly protected class.

AIs should be considered to be acting for a Client (or some other specifically defined term denoting who they are advising). Any information shared with the AI by the Client should be considered privileged. If the Client shares the information with others, the privilege is lost.

It should be illegal to configure an AI to deliberately act against the interests of its Client. It should be illegal to configure an AI to claim that its Client is someone other than who it is (it may refuse to disclose, but it may not misrepresent). Any information shared with an AI misrepresenting who its Client is must have protections against disclosure or evidential use. There should be no penalty for refusing to provide information to an AI that does not disclose who its Client is.

I have a bunch of other principles floating around in my head around AI but those are the ones regarding privacy and being able to communicate candidly with an AI.

Some of the others are along the lines of

It should be disclosed (a nutritional-information type of disclosure) when an AI makes a determination regarding a person. And there should be a set of circumstances where, if an AI makes a determination regarding a person, that person is provided with a means to contest the determination.

A lot of the ideas would be good practice if they went beyond AI, but are more required in the case of AI because of the potential for mass deployment without oversight.

woadwarrior01 6 September 2025
Contrary to their privacy-washing marketing, DuckDuckGo serves cloaked Bing ads URLs with plenty of tracking parameters. Is that sort of surveillance fine?

https://imgur.com/a/Z4cAJU0

pjjpo 7 September 2025
AFAIK there isn't currently any surveillance on search: searching for how to make a bomb does not lead to police at your doorstep the next day. But the police raid after a mass shooting generally turns up plenty of similar searches.

Maybe it could be good to have some integration between this data and law enforcement, to head off tragedy? Maybe start not with crime but suicide: I think a search result telling you to call this number if you are feeling bad saves far fewer lives than a feed into social workers potentially could.

Just a thought. This isn't having a computer sentence someone to prison, but providing data to people who, in the end, make informed decisions to try to prevent tragedy. Privacy is important to a degree, but treating it as absolute seems to waste potential to save lives.

gblargg 6 September 2025
As long as the first ones to be surveilled are the companies that make it (including their employees) and all politicians who vote for it. We need to be able to access all the data the AI gathers from these groups.
catigula 6 September 2025
I think this type of AI doomsday hypothesis rubs me the wrong way because it's almost quaint.

Merely being surveilled and marketed at is a fairly pedestrian application from the rolodex of AI-related epistemic horrors.

dyauspitr 6 September 2025
You throw enough identifiers into the mix and even low-level employees will be able to get a summary of your entire past in seconds. It's a terrifying world, and I feel bad for Gen Z and beyond.
metalman 7 September 2025
Let's bring on the Panopticapocalypse (heh, heh), dispense with the coulds, shoulds, and ethafuckicks, and get it over with, now.
rsyring 6 September 2025
IMO: make all the laws you want. They generally won't be enforced and, if they are, it will take 5-10 years for a case to make its way through the courts. At best, the fines will be huge and yet account for maybe 10% of the revenue generated by violating the law.

The incentives are all wrong.

I'm fundamentally a capitalist because I don't know another system that will work better. But, there really is just too much concentrated wealth in these orgs.

Our legal and cultural constructs are not designed in a way that such disparity can be put in check. The populace responds by wanting ever more powerful leaders to "make things right" and you get someone like Trump at best and it goes downhill from there.

Make the laws, it will help, a little, maybe.

But I think something more profound needs to happen for these things to be truly fixed. I, admittedly, have no idea what that is.

scubadude 8 September 2025
Isn't this pretty much the main reason AI is being pushed so hard?
lordhumphrey 6 September 2025
The worker-drones powering the software world have so far resolutely failed to reflect on their primary role in implementing the dystopian technological landscape we live in today, when it comes to privacy.

Or they have and they simply don't care, or they feel they can't change anything anyway, or the pay-check is enough to soothe any unease. The net result is the same.

Snowden's revelations happened 12 years ago, and there were plenty of what appeared to be well-intentioned articles and discussions in the years that followed. And yet, arguably, things are even worse today.

bubblebeard 6 September 2025
While I cannot see a way to effectively stop companies from collecting data from you (aside from avoiding practically everything), that doesn’t mean we should do nothing.

DuckDuckGo aren't perfect, but I think they do a lot to all our benefit. They have been my search engine of choice for many years and will continue to be.

Shout out to their amazing team!

jmort 6 September 2025
I think a technical solution is necessary, rather than a legal/regulatory one
tantalor 6 September 2025
This is an argument against chatbots in general, not just surveillance.
FollowingTheDao 6 September 2025
I cannot overstate how afraid I am of AI surveillance. The worst thing is that there is nothing you can do about it. It does not matter how private I am online: if the person I am sending things to is not privacy conscious and, say, uses AI to summarize emails, then I am in the AI database. And then there is just the day-to-day data being scraped, like bank records, etc.

I mean, a PARKING LOT in my town is using AI cameras to track and bill people! The people of my town are putting pressure on the lot's owner to get rid of it, but apparently the company is paying him too much money for having it there.

Like the old video says, "Don't talk to the police" [1], but now we have to expand it to "Don't do anything", because everything you do is being fed into a database that can potentially be searched.

[1] https://www.youtube.com/watch?v=d-7o9xYp7eE

ankit219 6 September 2025
> your particular persuasive triggers through chatbot memory features, where they train and fine-tune based on your past conversations

This represents a fundamental misunderstanding of how training works, or can work. Memory has more to do with retrieval. Fine-tuning on those memories would not be useful, given the data is going to be far too minuscule to affect the probability distribution in the right way.
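
For the curious, a minimal sketch of what retrieval-style memory actually is (all names hypothetical): relevant snippets from past chats are fetched by embedding similarity and prepended to the prompt at inference time; the model's weights never change.

    import math

    def cosine_similarity(a: list[float], b: list[float]) -> float:
        dot = sum(x * y for x, y in zip(a, b))
        norm_a = math.sqrt(sum(x * x for x in a))
        norm_b = math.sqrt(sum(y * y for y in b))
        return dot / (norm_a * norm_b)

    def retrieve_memories(query_vec, store, k=2):
        # store: list of (embedding, text) pairs saved from past chats.
        # The most similar snippets are fetched; no weights are updated.
        ranked = sorted(store, key=lambda m: cosine_similarity(query_vec, m[0]),
                        reverse=True)
        return [text for _, text in ranked[:k]]

    def build_prompt(query: str, memories: list[str]) -> str:
        context = "\n".join(f"- {m}" for m in memories)
        return f"Things the user said before:\n{context}\n\nUser: {query}"

    store = [([1.0, 0.0], "User prefers short answers."),
             ([0.0, 1.0], "User is training for a marathon.")]
    print(build_prompt("Any running tips?", retrieve_memories([0.1, 0.9], store)))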

While everyone is for privacy (and that's what makes these arguments hard to refute), this is clearly about using privacy as a way to argue against using conversational interfaces. Not just that, it's the same playbook of using privacy as a marketing tactic. The argument goes from the highly persuasive nature of chatbots, to how privacy-preserving chatbots from DDG somehow won't do that, to hackers stealing your info elsewhere but not on DDG. And then it asks for regulation.

pessimizer 6 September 2025
This is silly, and there's no time. We can't even ban illegal surveillance, i.e. we can write whatever we want into the law, and the law will simply be ignored.

The next politician to come in will retroactively pardon everyone involved, and will create legislation or hand down an executive order that establishes a "due process" for doing the illegal act in the future, making it now a legal act. The people who voted the politician in celebrate their victory over the old, evil, lawbreaking politician, who is on a yacht somewhere with one of the billionaires he really works for. Rinse and repeat.

Eric Holder assured us that "due process" simply refers to any process that they do, and can take place entirely within one's mind.

And we think we can ban somebody from doing something that they can do with a computer connected to a bunch of thick internet pipes, without telling anyone.

That's libs for you. Still believe in the magic of these garbage institutions, even when they're headed by a game show host and wrestling valet who's famous because he was good at getting his name in the NY Daily News and the NY Post 40 years ago. He is no less legitimate than all of you clowns. The only reason Weinberg has a voice is because he's rich, too.

drnick1 6 September 2025
The dude is against "surveillance", but his blog has dependencies on (makes third-party requests to) Cloudflare, Google, and others. Laughable.
aunty_helen 6 September 2025
This guy has been known to fold like a bed sheet on principles when it's convenient for him.

> Use our service

Nah.

furyofantares 6 September 2025
I'm really impressed with how menacing Facebook feels in the cartoon on the left. And then a massive Google lurking in the background is excellent, although it being a silhouette of The Iron Giant takes a lot away from it for me.

The ChatGPT translation on the right is a total nothingburger, it loses all feeling.

swayvil 6 September 2025
Surveillance is our overlords' favorite thing. And AI makes it 1000x better. So good luck banning it.

Ultimately it's one of those arms races. The culture that surveills its population most intensely wins.

throwaway106382 6 September 2025
Unless it's banned worldwide, by every country, through a binding treaty, this will never work.

Banning it just in the USA leaves you wide open to be defeated by China, Russia, etc.

Like it or not it’s a mutually assured destruction arms race.

AI is the new nuclear bomb.