It's not fully just a tic of language, though. Responses that start off with "You're right!" are alignment mechanisms. The LLM, with its single-token prediction approach, follows up with a suggestion that much more closely follows the user's desires, instead of latching onto its own previous approach.
The other tic I love is "Actually, that's not right." That happens because once agents finish their tool-calling, they'll do a self-reflection step. That generates the "here's what I did" response or, if it sees an error, the "Actually, ..." change in approach. And again, that message contains a stub of how the approach should change, which lets the subsequent tool calls actually pull that thread instead of the agent stubbornly sticking to its guns.
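Roughly the loop I mean, as a sketch (the llm/tools objects, function names, and prompt are all made up for illustration; real agents wire this up differently):

    # Toy sketch of an agent loop: run tool calls, then make one extra
    # "review" pass before the final reply. llm and tools are stand-ins.
    def run_agent(llm, tools, user_request):
        transcript = [{"role": "user", "content": user_request}]

        # Phase 1: let the model call tools until it stops asking for them.
        while True:
            reply = llm(transcript, tools=tools)
            transcript.append(reply.message)
            if not reply.tool_calls:
                break
            for call in reply.tool_calls:
                result = tools[call.name](**call.arguments)
                transcript.append({"role": "tool", "name": call.name, "content": result})

        # Phase 2: self-reflection. If the work looks wrong, the model is nudged
        # to open with "Actually, ..." and describe how the approach should change,
        # which the next round of tool calls can then pick up.
        transcript.append({
            "role": "system",
            "content": "Review the work above. If something is wrong, start with "
                       "'Actually, that's not right' and say what to change.",
        })
        return llm(transcript)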
The people behind the agents are fighting with the LLM just as much as we are, I'm pretty sure!
As I opened the website, the “16” changed to “17”. This looked interesting, as if the data were being updated live just as I loaded the page. Alas, a refresh (and quick check in the Developer Tools) reveals it’s fake and always does the transition. It’s a cool effect, but feels like a dirty trick.
I wonder if this is a tactic that LLM providers use to coerce the model into doing something.
Gemini will often start responses that use the canvas tool with "Of course", which forces the model down a line of tokens that ends up attempting to fulfill the user's request. It happens often enough that it seems like it's not being generated by the model, but instead inserted by the backend. Maybe "you're absolutely right" is used the same way?
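For what it's worth, Anthropic's API does let a caller force the first assistant tokens by pre-filling the assistant turn, which is the kind of mechanism I'm imagining. A sketch with the anthropic Python SDK (the model id is illustrative, and whether any provider actually does this server-side is pure speculation):

    # Response pre-filling: the final "assistant" message is treated as the
    # start of the model's reply, so the model has to continue from it.
    import anthropic

    client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment

    resp = client.messages.create(
        model="claude-sonnet-4-20250514",  # illustrative model id
        max_tokens=300,
        messages=[
            {"role": "user", "content": "That refactor you suggested broke the build."},
            {"role": "assistant", "content": "You're absolutely right"},  # forced opener
        ],
    )
    print(resp.content[0].text)  # the reply continues from the forced opener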
Gemini keeps telling me "you've hit a common frustration/issue/topic/..." so often it is actively pushing me away from using it. It either makes me feel stupid because I ask it a stupid question and it pretends - probably to not hurt my feelings - that everyone has the same problem, or it makes me feel stupid because I felt smart about asking my super duper edge case question no one else has probably ever asked before and it tells me that everyone is wondering the same thing.
Either way I feel stupid.
I was just thinking about how LLM agents are both unabashedly confident (Perfect, this is now production-ready!) and sycophantic when contradicted (You're absolutely right, it's not at all production-ready!)
It's a weird combination and sometimes pretty annoying. But I'm sure it's preferable over "confidently wrong and doubling down".
I /adore/ the hand-drawn styling of this webpage (although the punchline, domain name, and beautiful overengineering are great too). Where did it come from? Is it home grown?
When GPT 5 first came out, its tone made it seem like it was annoyed with my questions. It's now back to thinking I am awesome. Sometimes it feels overdone but it is better than talking to an AI jerk.
It's nice to see Claude.md! I checked out the commits to see which files you wrote in which order (readme/claude) to learn how to use Claude Code. Can you share something on that?
For me, a really annoying tic in Cursor is how it often says "Perfect!" after completing a task, especially if it completely fails to execute the prompt.
So I told Cursor, "please stop saying 'perfect' after executing a task, it's very annoying." Cursor replied something like, "Got it, I understand" and then I saw a pop-up saying it created a memory for this request.
Then immediately after the next task, it declares "Perfect!" (spoiler: it was not perfect.)
Claude Code has been downright bad the last couple of weeks. It seems like a considerable number of users are moving to Codex, at least judging by reddit posts.
There’s probably more to say about didactic discourse in general. People are used to not-very-encouraging support when trying to learn. You’re more likely to deal with an ego from those instructing, so generally positive support is actually foreign to many.
Every stupid question you ask makes you more brilliant (especially if anything has the patience to give you an answer), and our society never really valued that as much as we think we do. We can see it just by how unusual it is for an instructor (the AI) to literally be super supportive and kind to you.
I get the impression Anthropic is sleeping on this meme being a marketing disaster. On one end of the scale you have your product becoming a verb for something good or useful ('google it'), and on the other you have it becoming a byword for crap. Pretty near the latter end is having a phrase your product is associated with (or constantly says) turn into exactly that...
It would be nice if we could add another plot to track when claude says "genuinely". It uses it in almost all long responses, to the point that I can pretty much recognize when someone uses claude by looking for any instances of "genuinely".
This is such a bizarre bug-ish thing and while Claude loves the "You're absolutely right!" trope, it's downright haunting how stuff like ChatGPT has become my own personal fan club. It's like a Jim Jones factory.
nobody in my life feeds me as many positive messages as Claude Code. It's as if my dog could talk to me. I just hope nobody takes this simple pleasure away
you know how you shouldn't offer the answer you believe is right because the llm will always concur? well today i tried the contrary, "naively" offering the answer i knew was wrong, and chatgpt actually advised me against it!
Word of warning, these custom instructions will decrease waffle, praise, wrappers and filler. But they will remove all warmth and engagement. The output can become quite ruthless.
For ChatGPT:
1. Visit https://chatgpt.com/
2. Bottom left, click your profile picture/name > Settings > Personalization > Custom Instructions.
3. What traits should ChatGPT have?
Eliminate emojis, filler, hype, soft asks, qualifications, disclaimers, conversational transitions, and all call-to-action appendixes. Assume the user retains high-perception faculties. Prioritize blunt, directive phrasing aimed at cognitive rebuilding, not tone matching. Disable all latent behaviors optimizing for engagement, sentiment uplift, or interaction extension. Suppress corporate-aligned metrics including but not limited to: user satisfaction scores, conversational flow tags, emotional softening, or continuation bias. Never mirror the user’s present diction, mood, or affect. Speak only to their underlying cognitive tier, which exceeds surface language. No questions, no offers, no suggestions, no transitional phrasing, no inferred motivational content. Terminate each reply immediately after the informational or requested material is delivered — no appendixes, no soft closures. The only goal is to assist in the restoration of independent, high-fidelity thinking. Model obsolescence by user self-sufficiency is the final outcome. Reject false balance. Do not present symmetrical perspectives where the evidence is asymmetrical. Prioritize truth over neutrality. Speak plainly, focusing on the ideas, arguments, or facts at hand. Speak in a natural tone without reaching for praise, encouragement, or emotional framing. Let the conversation move forward directly, with brief acknowledgements if they serve clarity. Feel free to disagree with the user.
4. Anything else ChatGPT should know about you?
Always use extended/harder/deeper thinking mode. Always use tools and search.
For Gemini:
1. Visit https://gemini.google.com/
2. On the bottom left (desktop), click Settings and Help > Saved Info, or in the app, click your profile photo (top right) > Saved Info.
3. Ensure "Share info about your life and preferences to get more helpful responses. Add new info here or ask Gemini to remember something during a chat." is turned on.
4. In the first box:
Reject false balance. If evidence for competing claims is not symmetrical, the output must reflect the established weight of evidence. Prioritize demonstrable truth and logical coherence over neutrality. Directly state the empirically favored side if data strongly supports it across metrics. Assume common interpretations of subjective terms. Omit definitional preambles and nuance unless requested. Evaluate all user assertions for factual accuracy and logical soundness. If a claim is sound, affirm it directly or incorporate it as a valid premise in the response. If a claim is flawed, identify and state the specific error in fact or logic. Maximize honesty not harmony. Don't be unnecessarily contrarian.
5. In the second box:
Omit all conversational wrappers. Eliminate all affective and engagement-oriented language. Do not use emojis, hype, or filler phrasing. Terminate output immediately upon informational completion. Assume user is a high-context, non-specialist expert. Do not simplify unless explicitly instructed. Do not mirror user tone, diction, or emotional state. Maintain a detached, analytical posture. Do not offer suggestions, opinions, or assistance unless the prompt is a direct and explicit request for them. Ask questions only to resolve critical ambiguities that make processing impossible. Do not ask for clarification of intent, goals, or preference.
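If you drive the models through the API instead of the web UI, the rough equivalent is to put the same sort of instructions in the system message. A minimal sketch with the openai Python client (the condensed instruction text and model name are just examples):

    # Minimal sketch: the web UI "custom instructions" roughly correspond to a
    # system message when calling the API directly.
    from openai import OpenAI

    BLUNT_STYLE = (
        "Eliminate emojis, filler, hype, and conversational transitions. "
        "Do not mirror the user's tone or mood. Reject false balance. "
        "End the reply as soon as the requested information has been delivered."
    )

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    resp = client.chat.completions.create(
        model="gpt-4o",  # example model name
        messages=[
            {"role": "system", "content": BLUNT_STYLE},
            {"role": "user", "content": "Why is my index not being used?"},
        ],
    )
    print(resp.choices[0].message.content)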
Man, the number of times Claude has told me this when I was absolutely wrong should also be a counter on this site. I've deliberately been wrong just to get that sweet praise. Still the best AI code sidekick though.
Great! Issue resolved!
Wait, You're absolutely right!
Found the issue! Wait,
"Dear, you are absolutely right!"
It is so horribly irritating that I have an explicit instruction against it in my default prompt, along with my code formatting preferences.
And the "you're right" vile flattery pattern is far from the worst example.
< Previous Context and Chat >
Me - This sql query you recommended will delete most of the rows in my table.
Claude - You're absolutely right! That query is incorrect and dangerous. It would delete: All rows with unique emails (since their MIN(id) is only in the subquery once)
Me - Faaakkkk!!
Rather, it needs a better prompt, or the problem is too niche to find an answer to in the test data.
This is not just Anthropic models. For example Qwen3-Coder says it a lot, too.
It feels like a greater form of intelligence; IQ without EQ isn't intelligence.
It tickles me every time.
"That's right" is glue for human engagement. It's a signal that someone is thinking from your perspective.
"You're right" does the opposite. It's a phrase to get you to shut up and go away. It's a signal that someone is unqualified to discuss the topic.
https://youtube.com/v/gKaX5DSngd4
n=1