It's similarly insulting to read your AI-generated pull request. If I see another "dart-on-target" emoji...
You're telling me I need to use 100% of my brain, reasoning power, and time to go over your code, but you didn't feel the need to hold yourself to the same standard?
I personally don’t think I care if a blog post is AI generated or not. The only thing that matters to me is the content. I use ChatGPT to learn about a variety of different things, so if someone came up with an interesting set of prompts and follow ups and shared a summary of the research ChatGPT did, it could be meaningful content to me.
> No, don't use it to fix your grammar, or for translations, or for whatever else you think you are incapable of doing. Make the mistake. Feel embarrassed. Learn from it. Why? Because that's what makes us human!
It would be more human to handwrite your blog post instead. I don’t see how this is a good argument. The use of tools to help with writing and communication should make it easier to convey your thoughts, and that itself is valuable.
I don't like binary takes on this. I think the best question to ask is whether you own the output of your editing process. Why does this article exist? Does it represent your unique perspective? Is this you at your best, trying to share your insights with the world?
If yes, there's probably value in putting it out. I don't care if you used paper and ink, a text editor, a spell checker, or asked an LLM for help.
On the flip side, if anyone could've asked an LLM for the exact same text, and if you're outsourcing critical thinking to the reader - then yeah, I think you deserve scorn. It's no different from content-farmed SEO spam.
Mind you, I'm what you'd call an old-school content creator. It would be an understatement to say I'm conflicted about gen AI. But I also feel that this is the most principled way to make demands of others: I have no problem getting angry at people for wasting my time or polluting the internet, but I don't think I can get angry at them for producing useful content the wrong way.
I just hit the back button as soon as my "this feels like AI" sense tingles.
Now you could argue that I don't know it was AI, that it could just be really mediocre writing. It could indeed, but I hit the back button there as well, so it's a wash either way.
Recently I had to give one of my vendors a dressing down about LLM use in emails. He was sending me these ridiculous emails where the LLM was going off the rails suggesting all sorts of features etc that were exploding the scope of the project. I told him he needs to just send the bullet notes next time instead of pasting those into ChatGPT and pasting the output into an email.
I don't get all this complaining, TBH. I have been blogging for over 25 years (20+ on the same site), been using em dashes ever since I switched to a Mac (and because the Markdown parser I use converts double dashes to it, which I quite like when I'm banging out text in vim), and have made it a point of running long-form posts through an LLM asking it to critique my text for readability because I have a tendency for very long sentences/passages.
AI is a tool to help you _finish_ stuff, like a wood sander. It's not something you should use as a hacksaw, or as a hammer. As long as you are writing with your own voice, it's just better autocorrect.
I think it is too late. There is non-zero profit from people visiting your content, and close to zero cost to make it. It is the same problem with music; in fact, I search YouTube for music only with before:2022.
I used to fight against it, I thought we should do "proof of humanity", or create rings of trust for humans, but now I think the ship has sailed.
Today a colleague was sharing their screen on Google Docs and a big "USE GEMINI AI TO WRITE THE DOCUMENT" button was front and center. I am fairly certain that by the end of the year most words you read will be tokens.
I am working towards moving my Pi-hole from blacklist to whitelist, and after that just using local indexes with some data hoarding (Squid, Wikipedia, SO, RFCs, libc, kernel.git, etc.).
Maybe in the future we will just exchange local copies of our local "internet" via SD cards, like Cuba's sneakernet[1] El Paquete Semanal[2].
I don't like reading content that has not been generated with care. The use of LLMs is largely orthogonal to that. If a non-native English speaker uses an LLM to craft a response so I can consume it, that's great. As long as there is care, I don't mind the source.
People at work have fed me obviously AI generated documentation and blogposts. I've gotten to the point where I can make fairly accurate guesses as to which model generated it. I've started to just reject them because the alternative is getting told to rewrite them to "not look AI".
I don’t know. As a neurodivergent person I have been insulted for my entire life for lacking “communication skills” so I’m glad there is something for levelling the playing field.
It is similarly insulting to read an ungrammatical blog post full of misspellings. So I do not subscribe to the part of your argument that says "No, don't use it to fix your grammar". Using AI to fix your grammar, if done right, is part of the learning process.
It feels great to use. But it also feels incredibly shitty to have it used on you.
My recommendation: just share the prompt. If your readers want to expand it, they can do so. Don't pollute others' experience by passing the expanded form around. Nobody enjoys that.
No, don't use it to fix your grammar, or for translations, or for whatever else you think you are incapable of doing. Make the mistake. Feel embarrassed. Learn from it. Why? Because that's what makes us human!
I do understand the reasoning behind being original, but why make mistakes when we have tools to avoid them? That sounds like a strange recommendation.
I do like it for taking hour-long audio/video and creating a summary that, even if poorly written, can indicate to me whether I'd like to listen to the hour of media.
This is unavoidable. Individual blogs may not use AI but companies that live on user engagement will absolutely use them and churn out all types of content
> No, don't use it to fix your grammar, or for translations, or for whatever else you think you are incapable of doing. Make the mistake. Feel embarrassed. Learn from it. Why? Because that's what makes us human!
For essays, honestly, I do not feel so bad, because I can see that other than some spaces like HN the quality of the average online writer has dropped so much that I prefer to have some machine-assisted text that can deliver the content.
However, my problem is with AI-generated code.
In most cases, for trivial apps, I think AI-generated code will be OK to good; however, the issue I'm seeing as a code reviewer is that folks whose code design style you know are so heavily reliant on AI-generated code that you can be sure they did not write, and do not understand, the code.
One example: working with some data scientists and researchers, most of them used to write things in Pandas with some trivial for loops and some primitive imperative programming. Now, especially after Claude Code, most of the code is vectorized, with heavily compressed variable naming. Sometimes folks use Cython in data pipeline tasks, or even take functional programming to an extreme.
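The contrast being described can be sketched in a few lines. This is a minimal, hypothetical illustration (the data and column names are invented for the example, not taken from any real codebase): the row-by-row loop style the researchers used to write, next to the vectorized equivalent that reviewers now see everywhere.

```python
import pandas as pd

# Toy data standing in for a real pipeline's input.
df = pd.DataFrame({"price": [10.0, 20.0, 30.0], "qty": [1, 2, 3]})

# Imperative style: iterate over rows one at a time.
totals = []
for _, row in df.iterrows():
    totals.append(row["price"] * row["qty"])
df["total_loop"] = totals

# Vectorized style: one column-wise operation, no explicit loop.
df["total_vec"] = df["price"] * df["qty"]

# Both produce the same result; the vectorized form is faster and terser.
assert df["total_loop"].equals(df["total_vec"])
```

The vectorized form is usually what a reviewer wants to see, but the commenter's point stands: the performance win only helps if the author can still debug it by hand.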
Good performance is great, and leveling up the quality of the codebase is a net positive; however, I wonder whether those folks will be able to fix things when something goes south and/or Claude Code is not available.
It appears inconsiderate—perhaps even dismissive—to present me, a human being with unique thoughts, humor, contradictions, and experiences, with content that reads as though it were assembled by a lexical randomizer. When you rely on automation instead of your own creativity, you deny both of us the richness of genuine human expression.
Isn’t there pride in creating something that is authentically yours? In writing, even imperfectly, and knowing the result carries your voice? That pride is irreplaceable.
Please, do not use artificial systems merely to correct your grammar, translate your ideas, or “improve” what you believe you cannot. Make errors. Feel discomfort. Learn from those experiences. That is, in essence, the human condition.
Human beings are inherently empathetic. We want to help one another. But when you interpose a sterile, mechanized intermediary between yourself and your readers, you block that natural empathy.
Here’s something to remember: most people genuinely want you to succeed. Fear often stops you from seeking help, convincing you that competence means solitude. It doesn’t. Intelligent people know when to ask, when to listen, and when to contribute. They build meaningful, reciprocal relationships.
So, from one human to another—from one consciousness of love, fear, humor, and curiosity to another—I ask: if you must use AI, keep it to the quantitative, to the mundane. Let your thoughts meet the world unfiltered. Let them be challenged, shaped, and strengthened by experience.
After all, the truest ideas are not the ones perfectly written. They’re the ones that have been felt.
Lately, I've been writing more on my blog, and it's been helpful to change the way that I do it.
Now, I take a cue from school, and write the outline first. With an outline, I can use a prompt for the LLM to play the role of a development editor to help me critique the throughline. This is helpful because I tend to meander, if I'm thinking at the level of words and sentences, rather than at the level of an outline.
Once I've edited the outline for a compelling throughline, I can then type out the full essay in my own voice. I've found it much easier to separate the process into these two stages.
Earlier this year, I used AI to help me improve some of my writing on my blog. It just has a better way of phrasing ideas than me. But when I came back to read those same blog posts a couple of months later, after I'd encountered a lot more blog posts that I didn't know were AI-generated at the time, I saw the pattern. It sounds like the exact same author, plus or minus some degree of obligatory humor, writing all over the web with the same voice.
I've found a better approach to using AI for writing. First, if I don't bother writing it, why should you bother reading it? LLMs can be great sounding boards. Treat them as teachers, not assistants. Your teacher is not gonna write your essay for you, but he will teach you how to write and spot the parts that need clarification. I will share my process in the coming days; hopefully it will get some traction.
I recently interviewed a person for a role as senior platform architect. The person was already working for a semi-reputable company. In the first interview, the conversation was okay, but my gut just told me something was strange about this person.
We gave the candidate a case to solve, with a few diagrams, and asked them to prepare a couple of slides to discuss the architecture.
The person came back with 12 diagrams, all AI generated, littered with obvious AI “spelling”/generation mistakes.
And when we questioned the person about why they thought we would gain trust and confidence in them from this obviously AI-generated content, they even became aggressive.
Needless to say it didn’t end well.
The core problem is really how much time is now being wasted in recruiting with people who “cheat” or outright cheat.
We have had to design questions to counter AI cheating, and strategies to avoid wasting time.
I feel like sometimes I write like an LLM, complete with [bad] self-deprecating humor, overly-explained points because I like first principles, random soliloquies, etc. Makes me worry that I'll try and change my style.
That said, when I do try to get LLMs to write something, I can't stand it, and feel like the OP here.
I think it is important to make the distinction between "blog post" and other kinds of published writing. It literally does not matter if your blog post has perfectly correct grammar or misspellings (though you should do a one-pass revision for clarity of thought). Blog posts are best for articulating unfinished thoughts. To that end, you are cheating yourself, the writer, if you use AI to help you write a blog post. It is through the act of writing it that you begin to grok the idea.
But you bet that I'm going to use AI to correct my grammar and spelling for the important proposal I'm about to send. No sense in losing credibility over something that can be corrected algorithmically.
I don't know. Content matters more to me. Many of the articles that I read have so little information density that I find it hard to justify spending time on them. I often use AI to summarise text for me and then look up particular topics in detail if I like.
Skimming was pretty common before AI too. People used to read and share notes instead of entire texts. AI has just made it easier.
Reading long texts is not a problem for me if it's engaging. But often I find they just go on and on without getting to the point. Especially news articles. They are the worst.
This post could easily be generated by AI, no way to tell for sure. I'm more insulted if the title or blog thumbnail is misleading, or if the post is full of obvious nonsense, etc.
If a post contains valuable information that I learn from it, I don't really care if AI wrote it or not. AI is just a tool, like any other tool humans invented.
I'm pretty sure people had the same reaction 50 years ago, when the first PCs started appearing: "It's insulting to see your calculations made by personal electronic devices."
> It seems so rude and careless to make me, a person with thoughts, ideas, humor, contradictions and life experience to read something spit out by the equivalent of a lexical bingo machine because you were too lazy to write it yourself.
Agreed fully. In fact it'd be quite rude to force you to even read something written by another human being!
I'm all for your right to decide what is and isn't worth reading, be it ai or human generated.
> Everyone wants to help each other. And people are far kinder than you may think.
I want to believe that. When I was a student, I built a simple HTML page with a feedback form that emailed me submissions. I received exactly one message. It arrived encoded; I eagerly decoded it and found a profanity-filled rant about how terrible my site was. That taught me that kindness online isn’t the default - it’s a choice. I still aim for it, but I don’t assume it.
The way I view it is that the author is trying to explain their mental model, but there's only so much you can fit into prose. It's my responsibility to fill in the missing assumptions / understand why X implies Y. And all the little things like consistent word choice, tone, and even the mistakes helps with this. But mix in LLMs and now there's another layer / slightly different mental model I have to isolate, digest, and merge with the author's.
Reading an AI blog post (or reddit post, etc) just signals that the author actually just doesn't care that much about the subject.. which makes me care less too.
Not everyone has this same experience of the world. People are harsh, and how much grace they give you has more to do with who you are than what you say.
That aside, the worst problem with LLM-generated text isn’t that it’s less human, it’s that (by default) it’s full of filler, including excessive repetition and contrived analogies.
It’s not that people don’t value creativity and expression. It’s that for 90% of the communication AI is being used for, the slightly worse AI gen version that took 30 min to produce isn’t worse enough to justify spending 4 hours on the hand rolled version. That’s the reality we’re living through right now. People are eating up the productivity boosts like candy.
Is this the case when I put in the effort, spent several hours tuning the LLM to help me in the best possible way, and just use it to answer the question "what is the best way to phrase this in American English?"
I think low-effort LLM use is hilariously bad. The content it produces too. Tuning it, giving it style, safeguards, limits, direction, examples, etc. can improve it significantly.
>No, don't use it to fix your grammar, or for translations, or for whatever else you think you are incapable of doing. Make the mistake. Feel embarrassed. Learn from it. Why? Because that's what makes us human!
Fellas, is it antihuman to use tools to perfect your work?
I can't draw a perfect circle by hand, that's why I use a compass. Do I need to make it bad on purpose and feel embarrassed by the 1000th time just to feel more human? Do I want to make mistakes by doing mental calculations instead of using a calculator, like a normal person? Of course not.
Where does this "I'm proud of my sloppy shit, this is what makes me human" thing come from?
We rose above other species because we learned to use tools, and now we define being "human"... by not using tools? The fuck?
Also, ironically, this entire post smells like AI slop.
I’m very happy that I can post ai generated blog posts from my writing. And I’m now averaging 500 unique daily visitors and quite some repeat visits and subscribers with it too. If it wasn’t for AI, then I’d go back to where it was before AI… 10 visitors per month? I don’t like writing, so I collaborate with AI to write entire blog posts. I don’t have AI “refine it”, I usually tell AI to take what I’m rambling about for 1000 words and rewrite it in my own style, cadence, rhythm and vibe. So I can generate 3-5 blog posts per week. Which surprisingly rank well, get posted on LinkedIn, Twitter and Reddit by others. So the amount of people that enjoy reading AI-generated blog posts likely is starting to outpace those who don’t at this rate.
I've always been bad at grammar, and I wrote a lot of newsletters & blogs for my first startups, which always got great feedback but also lots of grammar complaints. Really happy GPT is so great at catching those nowadays; saves me a lot of grammar support requests ;)
I suspect that the majority of people who are shoveling BS in their blogs aren't doing it because they actually want to think and write and share and learn and be human; but rather, the sole purpose of the blog is for SEO, or to promote the personal brand of someone who doesn't want anything else.
Perhaps the author is speaking to the people who are only temporarily led astray by the pervasive BS online and by the recent wildly popular "cheating on your homework" culture?
I'm not sure if this has been mentioned here yet, and I don't want to be pedantic, but for centuries famous artists, musicians, writers, etc. have used assistants to do their work for them. The list includes (but in no way is this complete): DaVinci, Michelangelo, Rembrandt, Rubens, Raphael, Warhol, Koons, O'Keefe, Hepworth, Hockney, Stephen King, Clancy, Dumas, Patterson, Elvis, Elton John, etc. etc. Further, most scientific, engineering and artistic innovations are made "on the shoulders of giants." As the saying goes: there is nothing new under the sun. Nothing. I suggest that the use of an LLM for writing is just another tool of human creativity to be used freely and often to produce even more interesting and valuable content.
I sometimes share interesting AI conversations with my friends using the "share" button on the AI websites. Often the back-and-forth is more interesting than the final output anyway.
I think some people turn AI conversations into blog posts that they pass off as their own because of SEO considerations. If Twitter didn't discourage people sharing links, perhaps we would see a lot more tweet threads that start with https://chatgpt.com/share/... and https://claude.ai/share/... instead of people trying to pass off AI generated content as their own.
I agree, but if I had to name the single most insulting thing about AI, it is scraping data without consent to train models, so people no longer enjoy blog posting :(
I don't see the objection to using LLMs to check for grammatical mistakes and spelling errors. That strikes me as a reactionary and dogmatic position, not a rational one.
Anyone who has done any serious writing knows that a good editor will always find a dozen or more errors in any essay of reasonable length, and very few people are willing to pay for professional proofreading services on blog posts. On the other side of the coin, readers will wince and stumble over such errors; they will not wonder at the artisanal authenticity of your post, but merely be annoyed. Wabi-sabi is an aesthetic best reserved for decor, not prose.
> Do you not enjoy the pride that comes with attaching your name to something you made on your own? It's great!
This is like saying a photographer shouldn't find the sunset they photographed pretty or be proud of the work, because they didn't personally labor to paint the image of it.
A lot more goes into a blog post than the actual act of typing the content out.
Lazy work is always lazy work, but it's possible to make work you are proud of with AI, in the same way you can create work you are proud of with a camera.
I am not totally sure about this. I think that AI writing is just a progression of current trends. Many things have made writing easier and lower cost - printing presses, typewriters, word processors, grammar/spell checkers, electronic distribution.
This is just a continuation. It does tend to mean there is less effort to produce the output and thus there is a value degradation, but this has been true all along this technology trend.
I don't think we should be a purist as to how writing is produced.
The most thoughtful critique of this post isn’t that AI is inherently bad—but that its use shouldn’t be conflated with laziness or cowardice.
Fact: Professional writers have used grammar tools, style guides, and even assistants for decades. AI simply automates some of these functions faster. Would we say Hemingway was lazy for using a typewriter? No—we’d say he leveraged tools.
AI doesn’t create thoughts; it drafts ideas. The writer still curates, edits, and imbues meaning—just like a journalist editing a reporter’s notes or a designer refining Photoshop output. Tools don’t diminish creativity—they democratize access to it.
That said: if you’re outsourcing your thinking to AI (e.g., asking an LLM to write your thesis without engaging), then yes, you’ve lost something. But complaining about AI itself misunderstands the problem.
TL;DR: Typewriters spit out prose too—but no one blames writers for using them.
I agree with the author. If I detect that the article is written by an AI, I bounce off.
I similarly dislike other trickery as well, like ghostwriters, PR articles in journalism, lip-syncing at concerts, and so on. Fuck off, be genuine.
The reason people are upset about AI is that AI can be used to easily generate a lot of text, but its usage is rarely disclosed. So when someone discovers AI usage, there is no telling for the reader how much of the article is signal and how much is noise. Without AI, it would hinge on the expertise or experience of the author, but with AI involved, all bets are off.
The other thing is that reading someone's text involves forming a little bit of a connection with them. But then discovering that AI (or someone else) has written the text feels like a betrayal of that connection.
This assumes the person using LLMs to put out a blog post gives a single shit about their readers, pride, or “being human”. They don’t. They care about the view so you load the ad which makes them a fraction of a cent, or the share so they get popular so they can eventually extract money or reputation from it.
I agree with you that AI slop blog posts are a bad thing, but there are about zero people who use LLMs to spit out blog posts which will change their mind after reading your arguments. You’re not speaking their language, they don’t care about anything you do. They are selfish. The point is themselves, not the reader.
> Everyone wants to help each other.
No, they very much do not. There are a lot of scammers and shitty entitled people out there, and LLMs make it easier than ever to become one of them or increase the reach of those who already are.
As someone who briefly wrote a bunch of AI-generated blog posts, I kind of agree... The voicing is terrible, and the only thing it does particularly well is replace the existing slop.
I'm starting to pivot and realize that quality is actually way more important than I thought, especially in a world where it is very easy to create things of low quality using AI.
Another place I've noticed it is in hiring. There are so many low-quality applications it's insane. One application with a full GitHub and profile and a cover letter and/or video that actually demonstrates you understand where you are applying is worth more than 100 low-quality ones.
It's gone from a charming gimmick to quickly becoming an ick.
These days, my work routine looks something like this - a colleague sends me a long, AI-generated PRD full of changes. When I ask him for clarification, he stumbles through the explanation. Does he care at all? I have no idea.
Frustrated, I just throw that mess straight at claude-code and tell it to fix whatever nonsense it finds and do its best. It probably implements 80–90% of what the doc says — and invents the rest. Not that I’d know, since I never actually read the original AI-generated PRD myself.
In the end, no one’s happy. The whole creative and development process has lost that feeling of achievement, and nobody seems to care about code quality anymore.
If you are going to use AI to make a post, then please instruct it to make that post as short and information-dense as possible. It's one thing to read an AI summary but quite another to have to wade through paragraphs of faux "personality" and "conversational writing" of the sort that slop AIs regularly trowel out.
It's insulting to read text on a computer screen. I don't care if you write like a 5-year-old or if your message needs days or weeks to reach me. Use a pen, a pencil, and some paper.
It's insulting, but I also find it extremely concerning that my younger colleagues can't seem to tell the difference. An article will very clearly be AI slop and I'll express frustration, only to discover that they have no idea what I'm talking about.
When you criticize, it helps to understand the other’s perspective.
I suppose I am writing to you because I can no longer speak to anyone. As people turn to technology for their every word, the space between them widens, and I am no exception. Everyone speaks, yet no one listens. The noise fills the room, and still it feels empty.
Parents grow indifferent, and their children learn it before they can name it. A sickness spreads, quiet and unseen, softening every heart it touches. I once believed I was different. I told myself I still remembered love, that I still felt warmth somewhere inside. But perhaps I only remember the idea of it. Perhaps feeling itself has gone.
I used to judge the new writers for chasing meaning in words. I thought they wrote out of vanity. Now I see they are only trying to feel something, anything at all. I watch them, and sometimes I envy them, though I pretend not to. They are lost, yes, but they still search. I no longer do.
The world is cold, and I have grown used to it. I write to remember, but the words answer nothing. They fall silent, as if ashamed. Maybe you understand. Maybe it is the same with you.
Maybe writing coldly is simply compassion, a way of not letting others feel your pain.
Typical black-and-white article to capitalize on the "I hate AI" hype.
Top articles with millions of readers are done with AI. It's not an AI problem; it's the content. If it's watery and not tuned for style, it's bad. Same as with a human author.
I'm looking forward to the (inevitable) AI detection browser plugin that will mark the slop for me, at least that way I don't need to spend the effort figuring out if it's AI content or not.
>No, don't use it to fix your grammar, or for translations
Okay, I can understand even drawing the line at grammar correction, in that not all "correct" grammar is desirable or personal enough to convey certain ideas.
But not for translation? AI translation, in my experience, has proven to be more reliable than other forms of machine translation, and personally learning a new language every time I need to read something non-native to me isn't reasonable.
If you’re going to AI generate your blog, the least you could do is use a fine tuned LLM that matches your style. Most people just toss a prompt into GPT 5 and call it a day.
As a test, I used AI to rewrite their blog post, keeping the same tone and context but fewer words. It got the point across, and I enjoyed it more because I didn't have to read as much. I did edit it slightly to make it a bit less obviously AI'ish...
---
Honestly, it feels rude to hand me something churned out by a lexical bingo machine when you could’ve written it yourself. I'm a person with thoughts, humor, contradictions, and experience, not a content bin.
Don't you like the pride of making something that's yours? You should.
Don't use AI to patch grammar or dodge effort. Make the mistake. Feel awkward. Learn. That's being human.
People are kinder than you think. By letting a bot speak for you, you cut off the chance for connection.
Here's the secret: most people want to help you. You just don't ask. You think smart people never need help. Wrong. The smartest ones know when to ask and when to give.
So, human to human, save the AI for the boring stuff. Lead with your own thoughts. The best ideas are the ones you've actually felt.
For me it’s insulting not to use an AI to reply back. I’d say 90% of people would answer better with an AI assist in most business environments. Maybe even personal.
It’s really funny how many business deals would go better if people would put the requests into an AI to explain what exactly is being requested. Most people are not able to answer, and if they’d use an AI, they could respond in a proper way without wasting everyone’s time. But at least not using an AI reveals the competency (or rather, incompetence) level.
It’s also sad that I need to tell people to put my message into an AI so they don’t ask me useless questions. An AI can fill most of the gaps people don’t get. You might say my requests are not proper, but then how can an AI figure out what I want to say? I also put my requests into an AI when I can, and create ELI5 explanations of the requests “for dummies”.
It’s a clever post, but people who use AI to write personal blog posts ain’t gonna read this and change their mind. Only people who already hate using LLMs are gonna cheer you on.
But this kind of content is great for engagement farming on HN.
Just write “something something clankers bad”
While I agree with the author it’s a very moot and uninspired point
I've noticed this with a significant number of news articles. Sometimes it will say that it was "enhanced" with AI, but even when it doesn't, I get that distinct robotic feel.
I would have written "lexical fruit machine", for its left to right sequential ejaculation of tokens, and its amusingly antiquated homophobic criminological implication.
Slop excepted, writing is a very difficult activity that has always been outsourced to some extent, either to an individual, a team, or to some software (spell checker, etc). Of course people will use AI if they think it makes them a better writer. Taste is the only issue here.
Tangential, but when I heard the Zoom CEO say that in the future you’ll just send your AI double to a meeting for you I couldn’t comprehend how a real human being could ever think that that would be an ok thing to suggest.
The absolute bare minimum respect you can have for someone who’s making time for you is to make time for them. Offloading that to AI is the equivalent of shitting on someone’s plate and telling them to eat it.
I struggle every day with the thought that the richest, most powerful people in the world will sell their souls to get a bit richer.
What amazes me is that some people think I want to read AI slop in their blog that I could have generated by asking ChatGPT directly.
Anyone can access ChatGPT, why do we need an intermediary?
Someone a while back shared, here on HN, almost an entire blog generated by (barely touched up) AI text. It even had Claude-isms like "excellent question!", em-dashes, the works. Why would anyone want to read that?
If you struggle with communication, using AI is fine. What matters is caring about the result. You cannot just throw it over the fence.
AI content in itself isn't insulting, but as TFA hits upon, pushing sloppy work you didn't bother to read or check at all yourself is incredibly insulting and just communicates to others that you don't think their time is valuable. This holds for non-AI generated work as well, but the bar is higher by default since you at least had to generate that content yourself and thus at least engage with it on a basic level. AI content is also needlessly verbose, employs trite and stupid analogies constantly, and in general has the nauseating, bland, soulless corporate professional communication style that anyone with even a mote of decent literary taste detests.
My thing is: If you have something to say, just say it! Don't worry that it's not long enough or short enough or doesn't fit into some mold you think it needs to fit into. Just say it. As you write, you'll probably start to see your ideas more clearly and you'll start to edit and add color or clarify.
But just say it! Bypass the middleman who's just going to make it blurrier or more long-winded.
> read something spit out by the equivalent of a lexical bingo machine because you were too lazy to write it yourself.
Ha! That's a very clever, spot-on insult. Most LLMs would probably be seriously offended by this, were they rational beings.
> No, don't use it to fix your grammar, or for translations, or for whatever else you think you are incapable of doing. Make the mistake.
OK, you're pushing it, buddy. My Mandarin is not that good; as a matter of fact, I can't handle any Mandarin at all. Or French, for that matter. But I'm certain a decent LLM can handle that without my having to reach out to another person, who might not be available or have enough time to deal with my shenanigans.
I agree that there is way too much AI slop being created and made public, but there are also many cases where the use is fair and genuinely improves whatever the person is doing.
Yes, AI is being abused. No, I don't agree we should go scorched-earth against even the fair use cases.
It's insulting to read AI-generated blog posts
(blog.pabloecortez.com) | 951 points by speckx | 27 October 2025 | 447 comments
Comments
I think that's the best use case, and it's not AI-specific: spell checkers and translation integrations have existed forever; now they're just better.
Especially for non-native speakers that work in a globalized market. Why wouldn't they use the tool in their toolbox?
Now you could argue that you don't know it was AI - it could just be really mediocre writing. It could indeed, but I hit the back button there as well, so it's a wash either way.
AI is a tool to help you _finish_ stuff, like a wood sander. It's not something you should use as a hacksaw, or as a hammer. As long as you are writing with your own voice, it's just better autocorrect.
I recently wrote about the dead internet https://punkx.org/jackdoe/zero.txt out of frustration.
I used to fight against it, I thought we should do "proof of humanity", or create rings of trust for humans, but now I think the ship has sailed.
Today a colleague was sharing their screen on Google Docs and a big "USE GEMINI AI TO WRITE THE DOCUMENT" button was front and center. I am fairly certain that by the end of the year most words you read will be tokens.
I am working towards moving my pi-hole from a blacklist to a whitelist, and after that just using local indexes with some datahoarding (squid, Wikipedia, SO, RFCs, libc, kernel.git, etc).
Maybe in the future we'll just exchange local copies of our local "internet" via SD cards, like Cuba's sneakernet[1], El Paquete Semanal[2].
[1] https://en.wikipedia.org/wiki/Sneakernet
[2] https://en.wikipedia.org/wiki/El_Paquete_Semanal
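A whitelist-only Pi-hole setup like the one described above can be sketched with the standard `pihole` CLI. This assumes Pi-hole v5's command-line flags; the catch-all regex blacklist is a community trick for whitelist-only operation, not an official mode, so verify the exact flags against your version:

```shell
# Deny everything by default: a catch-all regex on the blacklist
pihole --regex '.*'

# Then explicitly whitelist only the domains you actually use
# (exact-match whitelist entries take precedence over the regex blacklist)
pihole -w en.wikipedia.org
pihole -w stackoverflow.com
pihole -w kernel.org
```

Anything not whitelisted then fails to resolve, which is effectively the "local internet only" mode the comment is aiming for.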
It feels great to use. But it also feels incredibly shitty to have it used on you.
My recommendation: just give the prompt. If your readers want to expand it, they can do so. Don't pollute others' experience by passing the expanded form around. Nobody enjoys that.
I do understand the reasoning behind being original, but why make mistakes when we have tools to avoid them? That sounds like a strange recommendation.
For essays, honestly, I do not feel so bad, because I can see that other than some spaces like HN the quality of the average online writer has dropped so much that I prefer to have some machine-assisted text that can deliver the content.
However, my problem is with AI-generated code.
In most cases, for trivial apps, AI-generated code will be OK to good; however, the issue I'm seeing as a code reviewer is that folks whose code design style you know are so heavily reliant on AI-generated code that you can be sure they did not write, and do not understand, the code.
One example: working with some data scientists and researchers, most of them used to write things in Pandas with some trivial for loops and fairly primitive imperative programming. Now, especially after Claude Code, most things are vectorized, with heavily compressed variable naming. Sometimes folks use Cython in data pipeline tasks, or even push functional programming to an extreme.
Good performance is great, and leveling up the quality of the codebase is a net positive; however, I wonder, in a scenario where things go south and/or Claude Code is not available, whether those folks will be able to fix it.
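The stylistic shift described above, from row-by-row loops to vectorized Pandas, looks roughly like this (a toy illustration with made-up data, not the commenter's actual pipeline):

```python
import pandas as pd

df = pd.DataFrame({"price": [10.0, 20.0, 30.0], "qty": [1, 2, 3]})

# Imperative style: an explicit Python-level loop over rows
totals = []
for _, row in df.iterrows():
    totals.append(row["price"] * row["qty"])
df["total_loop"] = totals

# Vectorized style: one column-wise operation, no Python-level loop
df["total_vec"] = df["price"] * df["qty"]

# Both produce the same result; the vectorized form is shorter and faster
assert df["total_loop"].equals(df["total_vec"])
```

The readability concern in the comment kicks in when vectorization is combined with cryptic names; the technique itself is standard Pandas practice.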
If I'm finding that voice boring, I'll stop reading - whether or not AI was used.
The generic AI voice, and by that I mean very little prompting to add any "flavor", is boring.
Of course I've used AI to summarize things and give me information, like when I'm looking for a specific answer.
In the case of blogs though, I'm not always trying to find an "answer", I'm just interested in what you have to say and I'm reading for pleasure.
It appears inconsiderate—perhaps even dismissive—to present me, a human being with unique thoughts, humor, contradictions, and experiences, with content that reads as though it were assembled by a lexical randomizer. When you rely on automation instead of your own creativity, you deny both of us the richness of genuine human expression.
Isn’t there pride in creating something that is authentically yours? In writing, even imperfectly, and knowing the result carries your voice? That pride is irreplaceable.
Please, do not use artificial systems merely to correct your grammar, translate your ideas, or “improve” what you believe you cannot. Make errors. Feel discomfort. Learn from those experiences. That is, in essence, the human condition. Human beings are inherently empathetic. We want to help one another. But when you interpose a sterile, mechanized intermediary between yourself and your readers, you block that natural empathy.
Here’s something to remember: most people genuinely want you to succeed. Fear often stops you from seeking help, convincing you that competence means solitude. It doesn’t. Intelligent people know when to ask, when to listen, and when to contribute. They build meaningful, reciprocal relationships. So, from one human to another—from one consciousness of love, fear, humor, and curiosity to another—I ask: if you must use AI, keep it to the quantitative, to the mundane. Let your thoughts meet the world unfiltered. Let them be challenged, shaped, and strengthened by experience.
After all, the truest ideas are not the ones perfectly written. They’re the ones that have been felt.
Now, I take a cue from school, and write the outline first. With an outline, I can use a prompt for the LLM to play the role of a development editor to help me critique the throughline. This is helpful because I tend to meander, if I'm thinking at the level of words and sentences, rather than at the level of an outline.
Once I've edited the outline for a compelling throughline, I can then type out the full essay in my own voice. I've found it much easier to separate the process into these two stages.
Before outline critiquing: https://interjectedfuture.com/destroyed-at-the-boundary/
After outline critiquing: https://interjectedfuture.com/the-best-way-to-learn-might-be...
I'm still tweaking the development editor. I find that it can be too much of a stickler about the form of the throughline.
I've found a better approach to using AI for writing. First, if I don't bother writing it, why should you bother reading it? LLMs can be great soundboards. Treat them as teachers, not assistants. Your teacher is not gonna write your essay for you, but he will teach you how to write, and spot the parts that need clarification. I will share my process in the coming days, hopefully it will get some traction.
I recently interviewed a person for a role as senior platform architect. The person was already working for a semi-reputable company. In the first interview, the conversation was okay, but my gut just told me something was strange about this person.
We gave the candidate a case to solve with a few diagrams, and asked them to prepare a couple of slides to discuss the architecture.
The person came back with 12 diagrams, all AI generated, littered with obvious AI “spelling”/generation mistakes.
And when we questioned the person about why they think we would gain trust and confidence in them with this obvious AI generated content, they became even aggressive.
Needless to say it didn’t end well.
The core problem is really how much time is now being wasted in recruiting with people who “cheat” or outright cheat.
We have had to design questions to counter AI cheating, and strategies to avoid wasting time.
That said, when I do try to get LLMs to write something, I can't stand it, and feel like the OP here.
But you bet that I'm going to use AI to correct my grammar and spelling for the important proposal I'm about to send. No sense in losing credibility over something that can be corrected algorithmically.
Skimming was pretty common before AI too. People used to read and share notes instead of entire texts. AI has just made it easier.
Reading long texts is not a problem for me if it's engaging. But often I find they just go on and on without getting to the point. Especially news articles. They are the worst.
If a post contains valuable information that I learn from it, I don't really care if AI wrote it or not. AI is just a tool, like any other tool humans invented.
I'm pretty sure people had the same reaction 50 years ago, when the first PCs started appearing: "It's insulting to see your calculations made by personal electronic devices."
Agreed fully. In fact it'd be quite rude to force you to even read something written by another human being!
I'm all for your right to decide what is and isn't worth reading, be it ai or human generated.
I want to believe that. When I was a student, I built a simple HTML page with a feedback form that emailed me submissions. I received exactly one message. It arrived encoded; I eagerly decoded it and found a profanity-filled rant about how terrible my site was. That taught me that kindness online isn’t the default - it’s a choice. I still aim for it, but I don’t assume it.
Not everyone has this same experience of the world. People are harsh, and how much grace they give you has more to do with who you are than what you say.
That aside, the worst problem with LLM-generated text isn’t that it’s less human, it’s that (by default) it’s full of filler, including excessive repetition and contrived analogies.
I think low-effort LLM use is hilariously bad. The content it produces, too. Tuning it, giving it style, safeguards, limits, direction, examples, etc., can improve it significantly.
Fellas, is it antihuman to use tools to perfect your work?
I can't draw a perfect circle by hand, that's why I use a compass. Do I need to make it bad on purpose and feel embarrassed by the 1000th time just to feel more human? Do I want to make mistakes by doing mental calculations instead of using a calculator, like a normal person? Of course not.
Where does this "I'm proud of my sloppy shit, this is what makes me human" thing come from?
We rose above other species because we learned to use tools, and now we define being "human"... by not using tools? The fuck?
Also, ironically, this entire post smells like AI slop.
> No, don't use it to fix your grammar
How is this substantially different from using spellcheck? I don't see any problem with asking an LLM to check for and fix grammatical errors.
Perhaps the author is speaking to the people who are only temporarily led astray by the pervasive BS online and by the recent wildly popular "cheating on your homework" culture?
Therefore, if I or anyone else wanted to see it, I would simply do it myself.
I don't know why so many people can't grasp that.
I think some people turn AI conversations into blog posts that they pass off as their own because of SEO considerations. If Twitter didn't discourage people sharing links, perhaps we would see a lot more tweet threads that start with https://chatgpt.com/share/... and https://claude.ai/share/... instead of people trying to pass off AI generated content as their own.
If folks figure out a way to produce content that is human, contextual and useful... by all means.
Anyone who has done any serious writing knows that a good editor will always find a dozen or more errors in any essay of reasonable length, and very few people are willing to pay for professional proofreading services on blog posts. On the other side of the coin, readers will wince and stumble over such errors; they will not wonder at the artisanal authenticity of your post, but merely be annoyed. Wabi-sabi is an aesthetic best reserved for decor, not prose.
This is like saying a photographer shouldn't find the sunset they photographed pretty or be proud of the work, because they didn't personally labor to paint the image of it.
A lot more goes into a blog post than the actual act of typing the context out.
Lazy work is always lazy work, but it's possible to make work you are proud of with AI, in the same way you can create work you are proud of with a camera.
This is just a continuation. It does tend to mean there is less effort to produce the output and thus there is a value degradation, but this has been true all along this technology trend.
I don't think we should be a purist as to how writing is produced.
Most people don't care.
Fact: Professional writers have used grammar tools, style guides, and even assistants for decades. AI simply automates some of these functions faster. Would we say Hemingway was lazy for using a typewriter? No—we’d say he leveraged tools.
AI doesn’t create thoughts; it drafts ideas. The writer still curates, edits, and imbues meaning—just like a journalist editing a reporter’s notes or a designer refining Photoshop output. Tools don’t diminish creativity—they democratize access to it.
That said: if you’re outsourcing your thinking to AI (e.g., asking an LLM to write your thesis without engaging), then yes, you’ve lost something. But complaining about AI itself misunderstands the problem.
TL;DR: Typewriters spit out prose too—but no one blames writers for using them.
I similarly dislike other trickery as well, like ghostwriters, PR articles in journalism, lip-syncing at concerts, and so on. Fuck off, be genuine.
The reason people are upset about AI is that it can be used to easily generate a lot of text, but its usage is rarely disclosed. So when someone discovers AI usage, there is no telling, for the reader, how much of the article is signal and how much is noise. Without AI, it would hinge on the expertise or experience of the author, but with AI involved, all bets are off.
The other thing is that reading someone's text involves forming a little bit of a connection with them. Discovering that AI (or someone else) has written the text feels like a betrayal of that connection.
I agree with you that AI slop blog posts are a bad thing, but there are about zero people who use LLMs to spit out blog posts which will change their mind after reading your arguments. You’re not speaking their language, they don’t care about anything you do. They are selfish. The point is themselves, not the reader.
> Everyone wants to help each other.
No, they very much do not. There are a lot of scammers and shitty entitled people out there, and LLMs make it easier than ever to become one of them or increase the reach of those who already are.
I'm starting to pivot and realize that quality is actually way more important than I thought, especially in a world where it is very easy to create things of low quality using AI.
Another place I've noticed it is in hiring. There are so many low-quality applications it's insane. One application with a full GitHub profile, a cover letter, and/or a video that actually demonstrates you understand where you are applying is worth more than 100 low-quality ones.
It's gone from a charming gimmick to quickly becoming an ick.
Jokes aside, good article.
Frustrated, I just throw that mess straight at claude-code and tell it to fix whatever nonsense it finds and do its best. It probably implements 80–90% of what the doc says — and invents the rest. Not that I’d know, since I never actually read the original AI-generated PRD myself.
In the end, no one’s happy. The whole creative and development process has lost that feeling of achievement, and nobody seems to care about code quality anymore.
I suppose I am writing to you because I can no longer speak to anyone. As people turn to technology for their every word, the space between them widens, and I am no exception. Everyone speaks, yet no one listens. The noise fills the room, and still it feels empty.
Parents grow indifferent, and their children learn it before they can name it. A sickness spreads, quiet and unseen, softening every heart it touches. I once believed I was different. I told myself I still remembered love, that I still felt warmth somewhere inside. But perhaps I only remember the idea of it. Perhaps feeling itself has gone.
I used to judge the new writers for chasing meaning in words. I thought they wrote out of vanity. Now I see they are only trying to feel something, anything at all. I watch them, and sometimes I envy them, though I pretend not to. They are lost, yes, but they still search. I no longer do.
The world is cold, and I have grown used to it. I write to remember, but the words answer nothing. They fall silent, as if ashamed. Maybe you understand. Maybe it is the same with you.
Maybe writing coldly is simply compassion, a way of not letting others feel your pain.
Plenty of top articles with millions of readers are written with AI. It's not an AI problem, it's a content problem. If it's watery and not tuned for style, it's bad. Same as with a human author.
The reason AI is so hyped up at the moment is that you give it little, it gives you back more.
But then whose blog-post am I reading? What really is the point?
Okay, I can understand even drawing the line at grammar correction, in that not all "correct" grammar is desirable or personal enough to convey certain ideas.
But not for translation? AI translation, in my experience, has proven to be more reliable than other forms of machine translation, and personally learning a new language every time I need to read something non-native to me isn't reasonable.
---
Honestly, it feels rude to hand me something churned out by a lexical bingo machine when you could’ve written it yourself. I'm a person with thoughts, humor, contradictions, and experience, not a content bin.
Don't you like the pride of making something that's yours? You should.
Don't use AI to patch grammar or dodge effort. Make the mistake. Feel awkward. Learn. That's being human.
People are kinder than you think. By letting a bot speak for you, you cut off the chance for connection.
Here's the secret: most people want to help you. You just don't ask. You think smart people never need help. Wrong. The smartest ones know when to ask and when to give.
So, human to human, save the AI for the boring stuff. Lead with your own thoughts. The best ideas are the ones you've actually felt.
If the goal is to get the job done, then use AI.
Do you really want to waste precious time for so little return?