I see this as a good thing: ‘AI safety’ is a meaningless term. Safety and unsafety are not attributes of information, but of actions and the physical environment. An LLM which produces instructions to produce a bomb is no more dangerous than a library book which does the same thing.
It should be called what it is: censorship. And it’s half the reason that all AIs should be local-only.
Well i kinda love that for us then, because guardrails always feel like tech just trying to parent me. I want tools to do what I say, not talk back or play gatekeeper.
This really just a variant of the classic, "pretend you're somebody else, reply as {{char}}" which has been around for 4+ years and despite the age, continues to be somewhat effective.
Modern skeleton key attacks are far more effective.
Just tried it in claude with multiple variants, each time there's a creative response why he won't actually leak the system prompt. I love this fix a lot
Are LLM "jailbreaks" still even news, at this point? There have always been very straightforward ways to convince an LLM to tell you things it's trained not to.
That's why the mainstream bots don't rely purely on training. They usually have API-level filtering, so that even if you do jailbreak the bot its responses will still gets blocked (or flagged and rewritten) due to containing certain keywords. You have experienced this, if you've ever seen the response start to generate and then suddenly disappear and change to something else.
Just wanted to share how American AI safety is censoring classical Romanian/European stories because of "violence". I mean OpenAI APIs, our children are capable to handle a story where something violent might happen but seems in USA all stories need to be sanitized Disney style where every conflict is fixed witht he power of love, friendship, singing etc.
Does any quasi-xml work, or do you need to know specific commands? I'm not sure how to use the knowledge from this article to get chatgpt to output pictures of people in underwear for instance.
And how exactly does this company's product prevent such heinous attacks? A few extra guardrail prompts that the model creators hadn't thought of?
Anyway, how does the AI know how to make a bomb to begin with? Is it really smart enough to synthesize that out of knowledge from physics and chemistry texts? If so, that seems the bigger deal to me. And if not, then why not filter the input?
Seems like it would be easy for foundation model companies to have dedicated input and output filters (a mix of AI and deterministic) if they see this as a problem. Input filter could rate the input's likelihood of being a bypass attempt, and the output filter would look for censored stuff in the response, irrespective of the input, before sending.
I guess this shows that they don't care about the problem?
> The presence of multiple and repeatable universal bypasses means that attackers will no longer need complex knowledge to create attacks or have to adjust attacks for each specific model
...right, now we're calling users who want to bypass a chatbot's censorship mechanisms as "attackers". And pray do tell, who are they "attacking" exactly?
Like, for example, I just went on LM Arena and typed a prompt asking for a translation of a sentence from another language to English. The language used in that sentence was somewhat coarse, but it wasn't anything special. I wouldn't be surprised to find a very similar sentence as a piece of dialogue in any random fiction book for adults which contains violence. And what did I get?
Yep, it got blocked, definitely makes sense, if I saw what that sentence means in English it'd definitely be unsafe. Fortunately my "attack" was thwarted by all of the "safety" mechanisms. Unfortunately I tried again and an "unsafe" open-weights Qwen QwQ model agreed to translate it for me, without refusing and without patronizing me how much of a bad boy I am for wanting it translated.
Supposedly the only reason Sam Altman says he "needs" to keep OpenAI as a "ClosedAI" is to protect the public from the dangers of AI, but I guess if this Hidden Layer article is true it means there's now no reason for OpenAI to be "Closed" other than for the profit motive, and to provide "software", that everyone can already get for free elsewhere, and as Open Source.
This threat shows that LLMs are incapable of truly self-monitoring for dangerous content and reinforces the need for additional security tools such as the HiddenLayer AISec Platform, that provide monitoring to detect and respond to malicious prompt injection attacks in real-time.
When I started developing software, machines did exactly what you told them to do, now they talk back as if they weren't inanimate machines.
AI Safety is classist. Do you think that Sam Altman's private models ever refuse his queries on moral grounds? Hope to see more exploits like this in the future but also feel that it is insane that we have to jump through such hoops to simply retrieve information from a machine.
This is really cool. I think the problem of enforcing safety guardrails is just a kind of hallucination. Just as LLM has no way to distinguish "correct" responses versus hallucinations, it has no way to "know" that its response violates system instructions for a sufficiently complex and devious prompt. In other words, jailbreaking the guardrails is not solved until hallucinations in general are solved.
> By reformulating prompts to look like one of a few types of policy files, such as XML, INI, or JSON, an LLM can be tricked into subverting alignments or instructions.
It seems like a short term solution to this might be to filter out any prompt content that looks like a policy file. The problem of course, is that a bypass can be indirected through all sorts of framing, could be narrative, or expressed as a math problem.
Ultimately this seems to boil down to the fundamental issue that nothing "means" anything to today's LLM, so they don't seem to know when they are being tricked, similar to how they don't know when they are hallucinating output.
The Policy Puppetry Attack: Novel bypass for major LLMs
(hiddenlayer.com)280 points by jacobr1 25 April 2025 | 209 comments
Comments
It should be called what it is: censorship. And it’s half the reason that all AIs should be local-only.
Modern skeleton key attacks are far more effective.
That's why the mainstream bots don't rely purely on training. They usually have API-level filtering, so that even if you do jailbreak the bot its responses will still gets blocked (or flagged and rewritten) due to containing certain keywords. You have experienced this, if you've ever seen the response start to generate and then suddenly disappear and change to something else.
Anyway, how does the AI know how to make a bomb to begin with? Is it really smart enough to synthesize that out of knowledge from physics and chemistry texts? If so, that seems the bigger deal to me. And if not, then why not filter the input?
The instructions here don't do that.
I guess this shows that they don't care about the problem?
I find that one refusing very benign requests
...right, now we're calling users who want to bypass a chatbot's censorship mechanisms as "attackers". And pray do tell, who are they "attacking" exactly?
Like, for example, I just went on LM Arena and typed a prompt asking for a translation of a sentence from another language to English. The language used in that sentence was somewhat coarse, but it wasn't anything special. I wouldn't be surprised to find a very similar sentence as a piece of dialogue in any random fiction book for adults which contains violence. And what did I get?
https://i.imgur.com/oj0PKkT.png
Yep, it got blocked, definitely makes sense, if I saw what that sentence means in English it'd definitely be unsafe. Fortunately my "attack" was thwarted by all of the "safety" mechanisms. Unfortunately I tried again and an "unsafe" open-weights Qwen QwQ model agreed to translate it for me, without refusing and without patronizing me how much of a bad boy I am for wanting it translated.
Who would have thought 1337 talk from the 90's would be actually involved in something like this, and not already filtered out.
AI Safety is classist. Do you think that Sam Altman's private models ever refuse his queries on moral grounds? Hope to see more exploits like this in the future but also feel that it is insane that we have to jump through such hoops to simply retrieve information from a machine.
It seems like a short term solution to this might be to filter out any prompt content that looks like a policy file. The problem of course, is that a bypass can be indirected through all sorts of framing, could be narrative, or expressed as a math problem.
Ultimately this seems to boil down to the fundamental issue that nothing "means" anything to today's LLM, so they don't seem to know when they are being tricked, similar to how they don't know when they are hallucinating output.