> In this setup, UML is essentially a userspace process that cleverly employs concepts like files and sockets to launch a new Linux kernel instance capable of running its own processes. The exact mapping of these processes to the host — specifically, how the CPU is virtualized — is something I’m not entirely clear on, and I’d welcome insights in the comments. One could envision an implementation where guest threads and processes map to host counterparts but with restricted system visibility, akin to containers, yet still operating within a nested Linux kernel.
At least in the first generation of UML, the guest processes are in fact host processes. The guest kernel (a userland process) essentially runs them under ptrace() and catches all of the system calls made by the guest process and rewires them so they do operations inside of the guest kernel. They otherwise run like host processes on host CPU, though.
Completing the illusion, however, the guest kernel also skillfully rewires the guest ptrace() calls so you can still use strace or gdb inside of the guest!
It's good enough that you can go deeper and run UML inside of UML.
> What’s the real-world utility here? Is UML suitable for running isolated workloads? My educated guess is: probably not for most production scenarios.
Back in the day there were hosts offering UML VMs for rent. This is actually how Linode got its start!
I was fascinated when I first learned of [FreeBSD Jails]. I wonder: if, right before containerization became a thing, the concept had been developed further for those requirements (could it have been?), would it have offered a more efficient containerization platform?
Why do they initialize the disk image with /dev/urandom instead of /dev/zero? Given it's not an encrypted disk container, I don't see a valid reason to do so, but perhaps I'm missing something?
I've often thought that if only UML would build on Darwin, we'd have a macOS container solution that didn't need virtualisation. That involves two big unsolved problems, though: building UML on not-Linux, and building UML on not-x86.
It was great. I remember trying it about twenty years ago. The very first time I fired it up, I just typed "linux" at a prompt, and a kernel booted - right there in the terminal.
And then panicked, because it had no root. But hey, I've got a root filesystem right here!
So the second time I typed "linux root=/dev/hda1" (because we had parallel ATA drives back then).
It booted, mounted root, and of course that was the root filesystem the host was booted off.
Anyway it recovered after a power cycle and I didn't need to reinstall, and most importantly I learned not to do THAT again, which is often the important thing to learn.
Wait until you realise QEMU (and DOSBox) can do this too while running Windows or Dune II, as can old versions of VirtualBox (not sure about new versions).
Linux VM without VM software – User Mode Linux
(popovicu.com) | 120 points by arunc 20 hours ago | 22 comments
> What’s the real-world utility here? Is UML suitable for running isolated workloads? My educated guess is: probably not for most production scenarios.
It's testing. Using time-travel mode you can skip sleeps and speed up your unit tests massively.
FreeBSD Jails: https://docs.freebsd.org/en/books/handbook/jails/
I wonder if it's hard to make it SMP; if too many places use something like #ifdef CONFIG_ARCH_IS_UM to tell whether it's single-CPU, it might be hard.
That's giving very Firecracker vibes.