Fascinating visualization. To think, we can visualize the entire* process but cannot understand the inner workings of a model with regard to its decision making. This was true the last time I looked into it, a year or so ago; I'm not aware of any advancements on that front.
It's fascinating, even though my knowledge of LLMs is so limited that I don't really understand what's happening. I'm curious how the examples are plotted and how closely they resemble the real models, though. If one day we could reliably plot an LLM into modules like this using an algorithm, does that mean we would be able to turn LLMs into chips, rather than data centers?
This is awesome! Would be cool if these LLM visualizations were turned into teaching tools, like showing how attention moves during generation or how prompts shift the model’s output. Feels like that kind of interactive view could really help people get what’s going on under the hood.
This is a fantastic visualization, but it and the rest of the literature all boil down to "input text goes in, we do some linear algebra on that and the model weights together, and... magic comes out." Of course, the precise incantations of the linear algebra _are_ important, and the whole thing is worthless without the attention mechanism, but that's just a method, and a fairly simple one relative to what it does.
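To make that point concrete: the "fairly simple method" is essentially scaled dot-product attention, which fits in a few lines of NumPy. This is a toy sketch with random vectors (the shapes and data here are made up for illustration, not taken from any real model):

```python
import numpy as np

def softmax(x, axis=-1):
    # subtract the max for numerical stability before exponentiating
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # scores[i, j]: how strongly query i attends to key j,
    # scaled by sqrt of the key dimension
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    weights = softmax(scores, axis=-1)  # each row sums to 1
    return weights @ V                  # weighted mix of the values

rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 8))  # 4 tokens, 8-dim queries
K = rng.standard_normal((4, 8))
V = rng.standard_normal((4, 8))
out = attention(Q, K, V)
print(out.shape)  # (4, 8): one mixed vector per token
```

That really is most of the incantation; the mystery is why stacking many of these layers, trained at scale, produces the behavior it does.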
How does it get from the ideas to the intelligence? What if we saw intelligence as the ideas themselves?
I have a related question, I guess; it's about how I can visualise the foundations of this beyond just a code implementation.
Where does this come from in abstract/math terms? Did we not have it before, or did we just not consider it an avenue worth pursuing? Or was the idea of scraping the entirety of human knowledge simply not considered until someone said, "well, we could just scrape everything"?
Were there recent breakthroughs in our understanding of ML that have led to this current explosion of research, pattern discovery, and refinement?
I wish n-gate was still around. He would note the high vote to comment ratio. When HN has little to say, it's always a sign of a high quality very technical article.
On a more serious note, this highlights a deeper issue with HN, similar sites and the attention economy. When an article takes a lot of time to read:
- The only people commenting at first have not read it.
- By the time you are done reading it, it's no longer visible on the front page so new people are not coming in anymore and the discussion appears dead. This discourages people who read it from making thoughtful comments because few people will read it.
- There are people who wait for the discussion to die down so they can read it without missing the later thoughtful comments, but they are discouraged from participating earlier, while the discussion is alive, because then they'd have to wade through the constantly changing discussion and separate what they have already seen from what they haven't.
---
Back on topic, I'd love to see this with weights from an actual working model and a customizable input text so we could see how both the seed and the input affect the output. And also a way to explore vectors representing "meanings" the way 3blue1brown did in his LLM videos.
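The "meaning vectors" idea can be sketched without a real model at all. These embeddings are hand-made toy values (not taken from any trained network) just to show the direction-arithmetic trick from those videos:

```python
import numpy as np

# Toy, hand-crafted 3-d "embeddings" -- invented for illustration,
# NOT real model weights.
emb = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.1, 0.8]),
    "man":   np.array([0.1, 0.9, 0.1]),
    "woman": np.array([0.1, 0.1, 0.9]),
}

def cosine(a, b):
    # cosine similarity: 1.0 means same direction
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# king - man + woman should land nearest to queen
target = emb["king"] - emb["man"] + emb["woman"]
best = max(emb, key=lambda w: cosine(emb[w], target))
print(best)  # queen
```

With real embeddings (e.g. from an open-weights model) the same nearest-neighbor search is what makes those "direction as meaning" demos work.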
This is really great, and I'm excited to deep dive into it. I think combined with observability tools, this resource empowers scientists to break open what people presume to be a "black box".
Dangit, dunno what add-on is interfering, but it's not working on my current Firefox profile. (Same user.js in a different profile, but it's working fine there.)
LLM Visualization
(bbycroft.net) | 640 points by gmays | 4 September 2025 | 46 comments
Comments
LLM Visualization - https://news.ycombinator.com/item?id=38505211 - Dec 2023 (131 comments)
The Illustrated Transformer: https://jalammar.github.io/illustrated-transformer/
Sebastian Raschka, PhD has a post on the architectures: https://magazine.sebastianraschka.com/p/from-gpt-2-to-gpt-os...
This HN comment has numerous resources: https://news.ycombinator.com/item?id=35712334