Using Claude Code to modernize a 25-year-old kernel driver

(dmitrybrant.com)

Comments

theptip 8 September 2025
A good case study. I have found these two to be good categories of win:

> Use these tools as a massive force multiplier of your own skills.

Claude definitely makes me more productive in frameworks I know well, where I can scan and pattern-match quickly on the boilerplate parts.

> Use these tools for rapid onboarding onto new frameworks.

I’m also more productive here; this is an enabler to explore new areas, and it’s also a boon at big tech companies, where there are just lots of tech stacks and frameworks in use.

I feel there is an interesting split forming in the ability to gauge AI capabilities - it kinda requires you to be on top of a rapidly-changing firehose of techniques and frameworks. If you haven’t spent 100 hours with Claude Code / Claude 4.0, you likely don’t have an accurate picture of its capabilities.

“Enables non-coders to vibe code their way into trouble” might be the median scenario on X, but it’s not so relevant to what expert coders will experience if they put the time in.

jillesvangurp 8 September 2025
I think this is illustrative of the kind of productive things you can do with an LLM if you know what you are doing. Is it perfect? No. Can they do useful things if you prompt them correctly? Absolutely. It helps to know what you are doing and to have enough skill to make good judgment calls yourself.

There are currently multiple posts per day on HN that escalate into debates about whether LLMs are useful or not. I think this is a clear example that they can be. And results count. Porting and modernizing some ancient driver is not that easy. There's all sorts of stuff that gets dropped from the kernel because it's just too old to bother maintaining, and when nobody does, deleting the code becomes the only option. This is a good example. I imagine there are enough crusty corners in the kernel that could benefit from a similar treatment.

I've had similar mixed results with agentic coding, sometimes impressing me and other times disappointing me. But if you can adapt to some of these limitations, it's alright. And this seems to be a bit of a moving-goalpost thing as well. Things that were hard a few months ago are now more doable.

eisa01 8 September 2025
I've used Claude Code in the past month to do development on CoMaps [1] using the 20 USD/month plan.

I've been able to do things that I would not have the competence for otherwise, as I do not have a formal software engineering background and my main expertise is writing Python data processing scripts.

E.g., yesterday I fixed a bug [2] by having Claude compare the CarPlay and iOS search implementations. It did at first suggest a different code change from the one that ultimately fixed it, but that felt like a normal part of debugging (you may need to try different things).

Most of my contributions [3] have been enabled by Claude, and it's also been critical for identifying where the code for certain things is located - it's a very powerful search over the code base.

And it is just amazing if you need to write a simple Python script to do something, e.g., in [4].

Now this would obviously not be possible if everyone used AI tools and no one knew the existing code base, so the future for real engineers and architects is bright!

[1] https://codeberg.org/comaps/comaps
[2] https://codeberg.org/comaps/comaps/pulls/1792
[3] https://codeberg.org/comaps/comaps/pulls?state=all&type=all&...
[4] https://codeberg.org/comaps/comaps/pulls/1782

lukaslalinsky 8 September 2025
I mainly use Claude Code for things I know, where I just don't want to focus on the coding part. However, I recently found a very niche use. I had a small issue with an open source project. Instead of just accepting it, it occurred to me that I could just clone the repo and ask CC to look into my issue. For example, I was annoyed that in Helix/Zed, replacing a parameter in Zig code only works for function declarations, not function calls. I suspected the problem would be in the tree-sitter grammar, but I let it go through the Zed source code first; then it asked for the grammar, so I cloned that and gave it access, and it happily fixed the grammar for me and tested the results. It needed a few nudges to make the fix properly, but I spent maybe 5 minutes on this, while CC was probably working for half an hour. I even had it fork the repo and open the PR for me. In the end I have a useful change that people will benefit from, one that I'd never have attempted myself.
codedokode 8 September 2025
LLMs are also good for writing quick experiments and benchmarks to satisfy someone's curiosity. For example, once I was wondering how much time it takes to migrate a cache line between cores when several processes access the same variable - and after I wrote out a detailed benchmark algorithm, the LLM generated the code instantly. Note that I described the algorithm completely, so all it did was translate it into code. Obviously I could write the code myself, but I might need to look up a function (how does one measure elapsed time?), I might make mistakes in C, etc. Another time I made a benchmark to compare linear vs tree search for finding a value in a small array.
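
A minimal sketch of that kind of cache-line ping-pong benchmark (illustrative only, not the commenter's actual code): two threads take turns bumping a shared atomic counter so its cache line bounces between cores, and the average round-trip time approximates the cost of migrating the line.

  // Illustrative cache-line ping-pong benchmark. Build with: gcc -O2 -pthread bench.c
  #include <pthread.h>
  #include <stdatomic.h>
  #include <stdio.h>
  #include <time.h>

  #define ROUNDS 1000000L

  static _Atomic long shared = 0;

  // Second thread: wait until the value is odd, then make it even again.
  static void *pong(void *arg) {
      (void)arg;
      for (long i = 0; i < ROUNDS; i++) {
          while (atomic_load_explicit(&shared, memory_order_acquire) % 2 == 0)
              ;
          atomic_fetch_add_explicit(&shared, 1, memory_order_release);
      }
      return NULL;
  }

  int main(void) {
      pthread_t t;
      pthread_create(&t, NULL, pong, NULL);

      struct timespec start, end;
      clock_gettime(CLOCK_MONOTONIC, &start);
      // Main thread: make the value odd, then wait for the other core to even it.
      for (long i = 0; i < ROUNDS; i++) {
          atomic_fetch_add_explicit(&shared, 1, memory_order_release);
          while (atomic_load_explicit(&shared, memory_order_acquire) % 2 != 0)
              ;
      }
      clock_gettime(CLOCK_MONOTONIC, &end);
      pthread_join(t, NULL);

      double ns = (end.tv_sec - start.tv_sec) * 1e9
                + (end.tv_nsec - start.tv_nsec);
      printf("avg round trip: %.1f ns\n", ns / ROUNDS);
      return 0;
  }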

It's very useful when you get the answer in several minutes rather than half an hour.

d4rkp4ttern 8 September 2025
> using these tools as a massive force multiplier…

Even before tools like CC it was the case that LLMs enabled venturing into projects/areas that would be intimidating otherwise. But Claude-Code (and codex-cli as of late) has made this massively more true.

For example I recently used CC to do a significant upgrade of the Langroid LLM-Agent framework from Pydantic V1 to V2, something I would not have dared to attempt before CC:

https://github.com/langroid/langroid/releases/tag/0.59.0

I also created nice collapsible html logs [2] for agent interactions and tool-calls, inspired by @badlogic/Zechner’s Claude-trace [3] (which incidentally is a fantastic tool!).

[2] https://github.com/langroid/langroid/releases/tag/0.57.0

[3] https://github.com/badlogic/lemmy/tree/main/apps/claude-trac...

And added a DSL to specify agentic task termination conditions based on event-sequence patterns:

https://langroid.github.io/langroid/notes/task-termination/

Needless to say, the docs are also made with significant CC assistance.

meander_water 8 September 2025
> Be as specific as possible, making sure to use the domain-specific keywords for the task.

If you don't have the technical understanding of a language or framework, there is going to be a lot of ambiguity in your prompts.

This specificity gap leads the LLM to fill in those gaps for you, which may not be what you intended. And that's usually where bugs hide.

I think this is the flip side of being a "force multiplier".

Brendinooo 8 September 2025
When I read an article like this, it makes me think about how the demand for work to be done was nowhere close to being fully met by the pre-LLM status quo.
jabl 8 September 2025
Blast from the past! When I was a kid we had such a floppy tape device connected to a 386 or 486 computer my parents had. I think it was a Colorado Jumbo 250. The actual capacity was 125MB, I believe, but the drive or the backup software had some built-in compression, which is why it was marketed as a 250MB drive. Never tried to use it with the Linux ftape driver, though.

It wouldn't surprise me if the drive and the tapes are still somewhere in my parents' storage. Could be a fun weekend project to try it out, though I'm not sure I have any computer with a floppy interface anymore. And I don't think there's anything particularly interesting on those tapes either.

In any case, cool project! Kudos to the author!

0xbadcafebee 8 September 2025
I had a suspicion AI would lower the barrier to entry for kernel hacking. Glad to see it's true. We could soon see much wider support for embedded/ARM hardware. Perhaps even completely new stripped-down OSes for smart devices.
rmoriz 8 September 2025
I was banned from an open source project [1] recently because I suggested a bug fix. Their "code of conduct" forbids not only PRs but also comments on issues containing information retrieved by any AI tool or resource.

Thinking about asking Claude to reimplement it from scratch in Rust…

[1] https://codeberg.org/superseriousbusiness/gotosocial/src/bra...

csmantle 8 September 2025
It's a good example of a developer who knows what to do with, and what to expect from, AI. And a healthy sprinkle of skepticism too, which is why he chose to make the driver a separate module.
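
For readers unfamiliar with the terminology: a separate (out-of-tree) module is built against the running kernel's headers and loaded/unloaded with insmod/rmmod, without patching the kernel source tree. A minimal illustrative skeleton (not the ftape driver itself) looks roughly like this:

  // Minimal out-of-tree module skeleton (illustrative only). Typically built
  // with a one-line Kbuild file ("obj-m += hello.o") and
  //   make -C /lib/modules/$(uname -r)/build M=$PWD modules
  #include <linux/init.h>
  #include <linux/module.h>
  #include <linux/printk.h>

  static int __init hello_init(void)
  {
      pr_info("hello: module loaded\n");   /* visible via dmesg */
      return 0;                            /* 0 = load succeeded */
  }

  static void __exit hello_exit(void)
  {
      pr_info("hello: module unloaded\n");
  }

  module_init(hello_init);
  module_exit(hello_exit);

  MODULE_LICENSE("GPL");
  MODULE_DESCRIPTION("Minimal out-of-tree module skeleton");
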
tedk-42 8 September 2025
It really is an exciting future ahead. So many lost arts will no longer need a dedicated human to relearn the deep knowledge required to make an update.

A reminder, though: these LLM calls cost energy, and we need reliable power generation to iterate through this next tech cycle.

Hopefully all those clock cycles burned on useless crypto are being redirected to LLM clock cycles :)

sedatk 8 September 2025
Off-topic, but I wish Linux had a stable ABI for loadable kernel modules. Obviously the kernel would have to provide shims for internal changes, because the internal ABI constantly evolves, so it would be costly and the drivers would probably run slower over time. Yet having the ability to use a driver from 15 years ago can be a huge win at times. That kind of compatibility is one of the things I love about Windows.
mintflow 8 September 2025
When I was porting fd.io VPP to Apple platforms for my app, there was code implementing coroutines in inline ASM in a C file, but not in an Apple-supported syntax. I successfully used the Claude web interface to get the job done (Claude Code was not yet released), though, as in this article, I had strong domain-specific knowledge with which to provide a relevant prompt.

Nowadays I rely heavily on Claude Code to write code. I start a task by creating a design, then I write a bunch of prompts covering the design details, the detailed requirements, and the interactions/interfaces with other components. So far so good; it boosts productivity a lot.

But I am still worried, or still not quite able to believe, that this is the new norm of coding.

aussieguy1234 8 September 2025
AI works better when it has an example. In this case, all the code needed for the driver to work was already there as the example. It just had to update the code to reflect modern kernel development practices.

The same approach can be used to modernise other legacy codebases.

I'm thinking of doing this with a 15-year-old PHP repo, bringing it up to date with Modern PHP (which is actually good).

brainless 8 September 2025
Empowering people is a lovely thing.

Here the author has a passion/side project they have been working on for a while. Upgrading the tooling is a great thing. The community may not support this since the niche is too narrow. An LLM comes in and helps with the upgrade. This is exactly what we want - software to be custom, for people to solve their unique edge cases.

Yes, the author is technical, but we are lowering the barrier, and it will be lowered even more. Semi-technical people will be able to solve some simpler edge cases, and so on. More power to everyone.

wg0 8 September 2025
I have used Gemini and OpenAI models too, but at this point Sonnet is the next-level, undisputed king.

I was able to port a legacy thermal printer user-mode driver from convoluted legacy JS to pure, modern TypeScript in two to three days, at the end of which the printer did work.

The same caveats apply - I have a decent understanding of both languages, specifically the various legacy JavaScript patterns for modularity used to emulate language features that didn't exist in older JavaScript, such as classes.

globular-toast 8 September 2025
I don't think we really need an article a day fawning over LLMs. This is what they do. Yep.

The only thing I got from this is nostalgia from the old PC with its internals sprawled out everywhere. I still use desktop PCs as much as I can. My main rig is almost ten years old and has been upgraded countless times, although it is now essentially “maxed out”. Thank god for PC gamers, otherwise I'm not sure we'd still have PCs at all.

athrowaway3z 8 September 2025
> so I loaded the module myself, and iteratively pasted the output of dmesg into Claude manually,

One of the things that makes Claude my go-to option is its ability to start long-running processes, whose output it can read to debug things.

There are a bunch of hacks you could have used here to skip the manual part, like piping dmesg to a local UDP port and having Claude start a listener.
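
As an illustration of that idea (not something from the article; the address, port, and use of /dev/kmsg are assumptions for the sketch), a tiny forwarder could read kernel log records and push each one to a local UDP port that the agent listens on, e.g. with `nc -lu 5555`:

  // Illustrative sketch of the "pipe dmesg to a local UDP port" hack:
  // read kernel log records from /dev/kmsg (may require privileges,
  // depending on kernel.dmesg_restrict) and forward each one to
  // 127.0.0.1:5555, where the agent can run a listener.
  #include <arpa/inet.h>
  #include <fcntl.h>
  #include <netinet/in.h>
  #include <stdio.h>
  #include <sys/socket.h>
  #include <unistd.h>

  int main(void) {
      int kmsg = open("/dev/kmsg", O_RDONLY);
      if (kmsg < 0) { perror("open /dev/kmsg"); return 1; }

      int sock = socket(AF_INET, SOCK_DGRAM, 0);
      struct sockaddr_in dst = {0};
      dst.sin_family = AF_INET;
      dst.sin_port = htons(5555);                      /* arbitrary port */
      inet_pton(AF_INET, "127.0.0.1", &dst.sin_addr);

      char buf[4096];
      for (;;) {
          ssize_t n = read(kmsg, buf, sizeof buf);     /* one record per read */
          if (n <= 0) break;
          sendto(sock, buf, (size_t)n, 0, (struct sockaddr *)&dst, sizeof dst);
      }
      close(sock);
      close(kmsg);
      return 0;
  }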

AdieuToLogic 8 September 2025
Something not yet mentioned by other commenters is the "giant caveat":

  As a giant caveat, I should note that I have a small bit of 
  prior experience working with kernel modules, and a good 
  amount of experience with C in general, so I don’t want to 
  overstate Claude’s success in this scenario. As in, it 
  wasn’t literally three prompts to get Claude to poop out a 
  working kernel module, but rather several back-and-forth 
  conversations and, yes, several manual fixups of the code. 
  It would absolutely not be possible to perform this 
  modernization without a baseline knowledge of the internals 
  of a kernel module.
Of note is the last sentence:

  It would absolutely not be possible to perform this 
  modernization without a baseline knowledge of the internals 
  of a kernel module.
This is critical context when using a code generation tool, no matter which one is chosen.

Then the author states in the next section:

  Interacting with Claude Code felt like an actual 
  collaboration with a fellow engineer. People like to 
  compare it to working with a “junior” engineer, and I think 
  that’s broadly accurate: it will do whatever you tell it to 
  do, it’s eager to please, it’s overconfident, it’s quick to 
  apologize and praise you for being “absolutely right” when 
  you point out a mistake it made, and so on.
I don't know what "fellow engineers" the author is accustomed to collaborating with, junior or otherwise, but the attributes enumerated above are those of a sycophant and not any engineer I have worked with.

Finally, the author asserts:

  I’m sure that if I really wanted to, I could have done this 
  modernization effort on my own. But that would have 
  required me to learn kernel development as it was done 25 
  years ago.
This could also be described as "understanding the legacy solution and what needs to be done" when the expressed goal identified in the article title is:

  ... modernize a 25-year-old kernel driver
Another key activity that the above quote identifies as beneficial to avoid is:

  ... required me to learn ...
rob_c 8 September 2025
Kudos to the author.

I keep beating the drum on what they correctly point out. It's not perfect. But it saves hours and hours of work in generation compared to the small amount of conceptual debugging needed.

The era of _needing_ teams of people to spit out boilerplate is coming to an end. I'm not saying don't learn to write it; learning demands doing, making mistakes, and personal growth. But after you've mastered this, there's no need to waste time writing boilerplate on the clock unless you truly enjoy it.

This is a perfect example of time taken to debug small mistakes << time to start from scratch as a human.

The time, equivalent money, and energy saved are all a testament to what is possible with huge context windows and generic modern LLMs :) :) :)

miki123211 8 September 2025
IMO, the most under-appreciated trick when working with these coding agents is to give them an automatic way to check their work.
MrContent04 9 September 2025
It’s fascinating to see LLMs breathe new life into legacy code. But I wonder — if AI rewrites outpace human review, are we just creating a new layer of technical debt? Maybe the real challenge is balancing modernization with long-term maintainability.
fourthark 8 September 2025
Upgrades and “collateral evolution” are very strong use cases for Claude.

I think the training data is especially good, and ideally no logic needs to change.

DrNosferatu 8 September 2025
Uses like this will only get more pervasive.
yieldcrv 8 September 2025
I’ve been doing assembly subroutines in Solidity for years with LLMs; I wouldn't even have tried beforehand.
unethical_ban 8 September 2025
Neat stuff. I just got Claude Code and am training myself on Rails. I'm excited to have assistance working through some ideas I have, and seeing it handle this kind of iterative testing is great.

One note: I think the author could have modified the sudoers file to allow loading and unloading the module without a password prompt.
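
For illustration, a sudoers rule along these lines could do it; the username, paths, and module name below are placeholders, and the file should be edited with visudo:

  # Hypothetical /etc/sudoers.d/ftape entry (placeholder user and paths):
  # allow loading/unloading just this module without a password prompt.
  dmitry ALL=(root) NOPASSWD: /usr/sbin/insmod /home/dmitry/ftape/ftape.ko, /usr/sbin/rmmod ftape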

anonymousiam 8 September 2025
I hope Dmitry did a good job. I've got a box of 2120 tapes with old backups from > 20 years ago, and I'm in the process of resurrecting the old (486) computer with both of my tape drives (floppy T-1000 and SCSI DDS-4). It would be nice to run a modern kernel on it.
fho 8 September 2025
I wonder if the author could now go one step further and write some code to interface the tape drive with an ESP32, thereby removing the floppy interface from the equation and going straight to USB.
grim_io 8 September 2025
What a great use case.

It demonstrates how much LLM use can boost productivity on specific tasks, where a complete manual implementation would take much longer than the verification.

IshKebab 8 September 2025
> From this point forward, since loading/unloading kernel modules requires sudo, I could no longer let Claude “iterate” on such sensitive operations by itself.

Hilarious! https://xkcd.com/1200/

criticalfault 8 September 2025
Would be good to do the same to 'modernize' disassembled drivers for various devices in mobile phones.

Would give postmarketOS a boost.

bgwalter 8 September 2025
There is literally a GitHub repository, six years old, that ports an out-of-tree ftape driver to modern Linux:

https://github.com/Godzil/ftape

Could it be that Misanthropic has trained on that one?

vkaku 8 September 2025
Excellent. This is the kind of W that needs more people to jump into.
MagicMoonlight 8 September 2025
Is Claude code better than ChatGPT?
lloydatkinson 8 September 2025
I hope it gets mainlined again!
Keyframe 8 September 2025
Pipe dream - now automate Asahi development for the M3, M4, and onwards.
punnerud 8 September 2025
What was the new speed after the upgrade?
rvz 8 September 2025
No tests whatsoever. This isn't getting close to being merged into mainline and it will stay out-of-tree for a long time.

That's even before taking on the brutal Linux kernel mailing lists for code review, explaining what that C code does, which could be riddled with bugs that Claude generated.

No thanks and no deal.