Self-Host and Tech Independence: The Joy of Building Your Own (ssp.sh)
490 points by articsputnik | 7 June 2025 | 241 comments

Comments

Self-hosting doesn't mean you have to buy hardware. After a few years, low-end machines are borderline unusable with Windows, but they are still plenty strong for a Linux server. It's quite likely you or a friend has an old laptop lying around that can be repurposed. I've done this with an i3 from 2011 [1] for two users, and in 2025 I see no sign that I need an upgrade.
Laptops are also quite power efficient at idle, so in the long run they make more sense than a desktop. If you are just starting, they are a great first server.
(And no, laptops don't have an inbuilt UPS. I recommend everyone remove the battery before running it plugged in 24x7.)
1: https://www.kassner.com.br/en/2023/05/16/reusing-old-hardwar...
I highly recommend that anyone going this route use Proxmox as the base install on the (old) hardware, and then run individual LXCs/VMs for your services. Maybe it's just me, but I find LXCs much easier to manage and reason about than Docker containers, and the excellent collection of scripts maintained by the community (https://community-scripts.github.io/ProxmoxVE/scripts) makes spinning one up just as easy as following a Docker container registry link.
I try to use LXCs whenever the software runs directly on Debian (Proxmox's underlying OS), but it's nice to be able to use a VM for stuff that wants more control like Home Assistant's HAOS. Proxmox makes it fairly straightforward to share things like disks between LXCs, and automated backups are built in.
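Proxmox also exposes everything over a REST API, which makes it easy to script the routine bits. A rough sketch using the third-party proxmoxer Python library (the hostname, node name, container ID, and credentials are placeholders; in practice you'd use an API token rather than a root password):

    # pip install proxmoxer requests
    from proxmoxer import ProxmoxAPI

    # Placeholder host and credentials -- adapt to your own setup.
    pve = ProxmoxAPI("pve.local", user="root@pam", password="secret", verify_ssl=False)

    # List every LXC on the node "pve" with its current status.
    for ct in pve.nodes("pve").lxc.get():
        print(ct["vmid"], ct.get("name"), ct["status"])

    # Kick off a one-off snapshot backup of container 101 (same options as the vzdump CLI).
    pve.nodes("pve").vzdump.post(vmid=101, storage="local", mode="snapshot")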
I get why you want to self-host, although I also get why you don't want to.
Self-hosting is a pain in the ass: you need to keep Docker updated, things break sometimes, and sometimes it's only happening to you and nobody else, so you're left searching for the solution alone. Even when it works, it's often a bit clunky.
I have an extremely limited list of self-hosted tools that just work and save me time (first on that list would be Firefly), but god knows I wasted quite a bit of my time setting up stuff that eventually broke and that I just abandoned.
Today I'm very happy to pay for stuff if the company respects privacy and has decent pricing.
It's heartening in the new millennium to see some younger people show awareness of the crippling dependency on big tech.
Way back in the stone ages, before Instagram and TikTok, when the internet was new, anyone with a presence on the net was rolling their own.
It's actually only gotten easier, but the corporate candy has gotten exponentially more candyfied, and most people think it's the most straightforward solution to getting a little corner on the net.
Like the fluffy fluffy "cloud", it's just another shrink-wrap of vendor lock-in. Hook 'em and gouge 'em, as we used to say.
There are many ways to stake your own little piece of virtual ground. Email is another whole category. It's linked to in the article, but still uses an external service to access port 25. I've found it not too expensive to have a "business" ISP account, that allows connections on port 25 (and others).
Email is much more critical than having a place to blag on, and port 25 access is only the beginning of the "journey". The modern email "reputation" system is a big tech blockade between people and the net, but it can, and should, be overcome by all individuals with the interest in doing so.
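Before committing to that journey, a quick self-check helps: can your connection open outbound port 25 at all, and does your domain publish the SPF/DMARC records the reputation system expects? A rough sketch of such a check (the domain is a placeholder; uses the dnspython package):

    # pip install dnspython
    import socket
    import dns.resolver

    DOMAIN = "example.com"  # placeholder -- use your own domain

    # 1. Outbound port 25: many residential ISPs block it entirely.
    mx = sorted(dns.resolver.resolve(DOMAIN, "MX"), key=lambda r: r.preference)[0]
    try:
        with socket.create_connection((str(mx.exchange).rstrip("."), 25), timeout=10) as s:
            print("port 25 reachable, greeting:", s.recv(128).decode(errors="replace").strip())
    except OSError as e:
        print("port 25 blocked or unreachable:", e)

    # 2. SPF and DMARC: deliverability to the big providers depends on both.
    for name, label in ((DOMAIN, "SPF"), (f"_dmarc.{DOMAIN}", "DMARC")):
        try:
            print(label, [r.to_text() for r in dns.resolver.resolve(name, "TXT")])
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            print(label, "record not found")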
I was able to replicate some of this by building my own hosting platform (https://canine.sh) that can deploy a GitHub repo anywhere -- from a Kubernetes cluster to a home Raspberry Pi server.
I've built tons of stuff in my career, but building the thing that can host all of it for myself has been hugely rewarding (instead of relying on hosting providers that inevitably start charging you)
I now have almost 15 apps hosted across 3 clusters: https://imgur.com/a/RYg0wzh
It's one of the most cherished things I've built, and I find myself constantly coming back to improve and update it out of love.
I run a 4x Raspberry Pi Kubernetes cluster and an Intel N150 mini PC, both managed with Portainer, in my homelab. The following open source ops tools have been a game changer. All tools below run in containers.
- kubetail: Kubernetes log viewer for the entire cluster. Deployments, pods, statefulsets. Installed via Helm chart. Really awesome.
- Dozzle: Docker container log viewing for the N150 mini PC, which just runs Docker, not Kubernetes. Portainer manual install.
- Uptime Kuma: Monitoring and alerting for all servers, http/https endpoints, and even PostgreSQL. Portainer manual install. (A small push-monitor sketch follows this list.)
- Beszel: Monitoring of server CPU, memory, disk, network and Docker containers. Can be installed into Kubernetes via Helm chart. Also installed manually via Portainer on the N150 mini PC.
- Semaphore UI: UI for running Ansible playbooks. Support for scheduling as well. Portainer manual install.
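For jobs that aren't containers (cron scripts, backups), Uptime Kuma's push monitors are the piece I lean on: the job pings a per-monitor URL when it finishes, and Uptime Kuma alerts when the heartbeat stops. A minimal sketch, assuming a push monitor already exists (the hostname, port, and token below are placeholders):

    # Heartbeat for an Uptime Kuma "push" monitor -- call it at the end of a cron job.
    import urllib.parse
    import urllib.request

    KUMA_PUSH_URL = "http://uptime.lan:3001/api/push/yourPushTokenHere"  # placeholder

    def heartbeat(msg: str = "OK", up: bool = True) -> None:
        # Uptime Kuma flips the monitor to "down" and alerts if this stops arriving.
        params = urllib.parse.urlencode({"status": "up" if up else "down", "msg": msg})
        with urllib.request.urlopen(f"{KUMA_PUSH_URL}?{params}", timeout=10) as resp:
            resp.read()  # small JSON ack, not needed beyond confirming delivery

    if __name__ == "__main__":
        heartbeat("nightly backup finished")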
I propose a slightly different boundary: not "to self-host" but "ability to self-host". It simply means that you can if you want to, but you can also let someone else host it. This is a lot more inclusive, both of those who are less technical and of those who are willing to pay for it.
People who don't care ("I'll just pay") are especially affected, and they are the ones who should care the most. Why? Because businesses today are more predatory, preying on the future technical dependence of their victims. Even if you don't care about FOSS, it's incredibly important to be able to migrate providers. If you are locked in, they will exploit that. Some do it so systematically that they are not interested in any other kind of business.
Tooling for self-hosting is quite powerful nowadays. You can start with hosted components and swap self-hosted pieces in one at a time. For instance, my blog is self-hosted on a home server.
It has Cloudflare Tunnel in front of it, but I previously used nginx+letsencrypt+public_ip. It stores data on Cloudflare R2, but I've stored it on S3, and I could store it on a local NAS (since I access R2 through FUSE, it wouldn't matter much).
You have to rent:
* your domain name - and it is right that this is not a permanent purchase
* your internet access
But almost all other things now have tools that you can optionally use. If you turn them off the experience gets worse but everything still works. It's a much easier time than ever before. Back in the '90s and early 2000s, there was nothing like this. It is a glorious time. The one big difference is that email anti-spam is much stricter but I've handled mail myself as recently as 8 years ago without any trouble (though I now use G Suite).
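The R2/S3/NAS swap is painless because R2 speaks the S3 API, so the storage layer is just a config change. A rough sketch with boto3 (the account ID, bucket name, and keys are placeholders):

    # pip install boto3
    import boto3

    # The same client works for R2, AWS S3, or a self-hosted MinIO -- only the endpoint changes.
    s3 = boto3.client(
        "s3",
        endpoint_url="https://<account-id>.r2.cloudflarestorage.com",  # omit for real AWS S3
        aws_access_key_id="<access-key-id>",
        aws_secret_access_key="<secret-access-key>",
        region_name="auto",  # R2 convention; use a normal region name on AWS
    )

    # Upload one blog asset and list what's in the bucket.
    s3.upload_file("posts/hello-world.md", "blog-assets", "posts/hello-world.md")
    for obj in s3.list_objects_v2(Bucket="blog-assets").get("Contents", []):
        print(obj["Key"], obj["Size"])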
> The premise is that by learning some of the fundamentals, in this case Linux, you can host most things yourself. Not because you need to, but because you want to, and the feeling of using your own services just gives you pleasure. And you learn from it.
Not only that, but it helps to eliminate the very real risk that you get kicked off of a platform that you depend on without recourse. Imagine if you lost your Gmail account. I'd bet that most normies would be in deep shit, since that's basically their identity online, and they need it to reset passwords and maybe even to log into things. I bet there are a non-zero number of HN commenters who would be fucked if they so much as lost their Gmail account. You've got to at least own your own E-mail identity! Rinse and repeat for every other online service you depend on. What if your web host suddenly deleted you? Or AWS? Or Spotify or Netflix? Or some other cloud service? What's your backup? If your answer is "a new cloud host" you're just trading identical problems.
Ever since Arch got an installer, I'm not sure I'd consider it hard anymore. It still dumps you into a command line, sure, but it's a long way from the days of trying to figure out arcane partition block math.
I spent quite some years with Linux systems, but I am using LLMs for configuring systems a lot these days. Last week I set up a server for a group of interns. They needed a Docker/Kubernetes setup with some other tooling. I would have spent at least a day or two setting it up normally; now it took maybe an hour. All the configurations, commands, and some issues were solved with the help of ChatGPT. You still need to know your stuff, but it's like having a super tool at hand. Nice.
SBCs are great for public web servers and well suited to saving you quite a bit in energy costs. I've used a Raspberry Pi 4B for about 5 years with around 10k human visitors (~5k bots) per year just fine. I'd like to try a RISC-V SBC as a server, but maybe I have a few more years to wait.
I don't run into resource issues on the Pi4B, but resource paranoia (like range anxiety in EVs) keeps me on my toes about bandwidth use and encoding anyway. I did actually repurpose my former workstation and put it in a rackmount case a couple weeks ago to take over duties and take on some new ones, but it consumes so much electricity that it embarrasses me and I turned it off. Not sure what to do with it now; it is comically over-spec'd for a web server.
The most helpful thing to have is a good router; networking is a pain in the butt, and there's a lot to do when you host your own stuff and start serving Flask apps or whatever. MikroTik has made more things doable for me.
While I like the article and agree with the sentiment, I do feel it would have been nice to at least mention the GNU project and not leave the impression that we have free software only thanks to Linus Torvalds.
I’m almost done with my switch away from a fully Apple ecosystem and I feel great about my Framework laptop, GrapheneOS Pixel and cluster of servers in my closet.
I can’t help but wonder if mainstream adoption of open source and self hosting will cause a regulatory backlash in favour of big corpo again (thinking of Bill Gates’ letter against hobbyists)
For the past 20-odd years, old hardware with tweaked, custom-compiled FreeBSD and NetBSD builds has served me and my few customers quite well. There is a lot of joy in it. Recently, I started modifying open source software to be self-hostable. Some of it does not work well enough when the internet is not accessible, for example FarmOS.
It raised some interesting questions:
- How long can I be productive without the Internet?
- What am I missing?
The answer for me was that I should archive more documentation, and that NixOS is unusable offline if you do not host a cache (so that is pretty bad).
Ultimately I also found that self-hosting most of what I need and being offline really improve my productivity.
You can only rent a domain. The landlord is merciless: if you miss a payment, you are out.
There are risks everywhere, and it depresses me how fragile our online identity is.
https://www.reddit.com/r/selfhosted/comments/1kqrwev/im_addi...
Great read!