A lot of the reasons for partitioning disks are pretty legacy.
/boot was/is often a separate partition because boot loaders may not support all filesystems. So if you wanted to use LVM, reiserfs, zfs, etc. for your root directory, or have a RAID root, then you may have a separate partition for /boot, especially on older systems where grub/lilo only supported ext filesystems. On modern systems there is likely to be a /boot/efi partition for UEFI (it only supports vfat, I think?).
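For illustration, a typical modern single-disk layout might be sketched like this (the disk name and sizes are just example assumptions):

parted -s /dev/sda -- mklabel gpt
parted -s /dev/sda -- mkpart ESP fat32 1MiB 513MiB   # /boot/efi; UEFI wants FAT
parted -s /dev/sda -- set 1 esp on
parted -s /dev/sda -- mkpart root ext4 513MiB 100%   # everything else
mkfs.vfat -F 32 /dev/sda1
mkfs.ext4 /dev/sda2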
/home is often handy to keep separate since you can more easily re-format everything except your home directory. Makes distro-hopping a bit easier.
The other reasons are more focused on server usage rather than home usage. Things like mounting /tmp on a separate FS so that users couldn’t fill up the disk and block other users from working in their home directories. Or /usr/local being an NFS mount to provide centralized applications.
These days the actual on-disk partitions don’t matter as much due to LVM, ZFS and BTRFS. You can now slice and dice your disks however you like and even change things on the fly. I only ever create 1 disk partition anymore (2 if I need a separate /boot or /boot/efi) and then handle the rest in the filesystems or LVM.
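As a rough sketch of the slice-and-dice approach with LVM (device names and sizes are made-up examples):

pvcreate /dev/sda2              # the single big on-disk partition
vgcreate vg0 /dev/sda2
lvcreate -n root -L 50G vg0
lvcreate -n home -L 200G vg0    # leave some VG space unallocated for later
mkfs.ext4 /dev/vg0/root
mkfs.ext4 /dev/vg0/home
lvextend -r -L +20G vg0/home    # grow /home on the fly later; -r resizes the fs too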
With this higher-level partitioning, the benefits are more around snapshotting and backups. You can snapshot your /home subvolume easily with btrfs before making major changes, or you can copy a zfs dataset to a remote server for backup. Things like the immutable distros and Proxmox use this functionality a lot, since a) partitions in these tools are cheap and b) it’s easier to do these things at the partition level.
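For example (the subvolume, dataset, and host names here are hypothetical):

btrfs subvolume snapshot -r /home /.snapshots/home-pre-upgrade   # cheap read-only snapshot
zfs snapshot tank/home@nightly                                   # snapshot a zfs dataset
zfs send tank/home@nightly | ssh backuphost zfs recv backup/home # replicate it remotely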
Edit: Fun fact: Linux ext* filesystems have the capability to reserve a certain percentage of disk space only for the root user. Useful on a multi-user system where you don’t want users filling up all the disk space and blocking the root user from logging in to clean it all up. It used to reserve something like 5-10% by default but I don’t know if that’s the case anymore. You can see if it’s being done with:
tune2fs -l <device>
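For instance (assuming /dev/sda2 is an ext4 filesystem):

tune2fs -l /dev/sda2 | grep -i 'reserved block count'
tune2fs -m 1 /dev/sda2    # e.g. shrink the reservation to 1% on a single-user box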
Some partitions are useful. Keeping /var and /tmp separate can stop DoS attacks by not allowing logs to fill the entire drive, and a separate /home means you can wipe the / partition and keep user data.
I’ve had a full /var partition cause all sorts of problems with the system. But I still think it’s good to have four partitions: /, /var, /tmp, and /home. At the very least, split out /home so you can format / without losing your stuff.
I think it is better to partition /usr (and /usr/local) too, for stability and security.
I can definitely see doing that on a server many people are using. For my personal server, I used to do that, but in the end I couldn’t find much benefit, only headaches (“ahhhh, / is short on space because I forgot to clean up old kernels…”).
I think it could save you someday: nothing normally writes to /usr, so a bad write in /home would not cause much damage there. On a system with one huge root partition, an incomplete write might damage the whole filesystem.
fsck would be faster. newfs (mkfs) would be faster too. I found NetBSD spent a lot of time running newfs on a 32G root partition (installing NetBSD in Hyper-V).
Also, for the /tmp partition, we can use a memory filesystem (tmpfs) instead of physical disk if we have 4G of RAM or more, to store things that are cleaned on reboot.
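Something like this line in /etc/fstab does it (the size cap is just an example):

tmpfs  /tmp  tmpfs  defaults,nosuid,nodev,size=2G,mode=1777  0  0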
I’m not saying it can’t happen, but I’ve been using Linux since the late 90s and have never had a problem with an incomplete write damaging the file system, or really anything else (except for a recent incident when a new motherboard decided to overwrite the partition tables on my RAID5 array, but that’s a different story). And I have UPSs on the server and desktop, and of course the laptop has a battery in it, so the risk of sudden power loss is extremely low.
The /tmp thing in RAM is interesting. I was reconfiguring my server’s drive the other day, because I didn’t originally allocate enough space to /var - it worked fine for years until I started playing with plex, jellyfin, and Home Assistant (the latter due to the database size). I was shocked to find /tmp only had a few files in it, after running for years. I think I switched the server to Debian in 2018 or 2019, but that’s just a guess based on the file dates I’m seeing. Maybe Debian cleans the /tmp partition regularly.
A separate /home can save you hours or even days on several occasions; however, don’t try crazy things like having KDE on Ubuntu share the same theme/settings with KDE 6. A /var on a fast drive can work wonders too.
I installed Arch on a disk without erasing the /home partition that came from a previous distro. It saved me some config work, and a bit of disk life expectancy, I guess.
At least have a dedicated /home partition. This way, if you want to upgrade the OS, change distribution, heck, even migrate to a totally different OS, your actual data is safe. Also, if you need to do a backup, “just” backing up /home is probably going to be significantly faster and more convenient than the entire OS. It also avoids using e.g.
dd
and getting a rather opaque file. TL;DR: yes, /home keeps your data safe.
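E.g. a plain rsync of /home gives you browsable files instead of an opaque image (the destination path is a made-up example):

rsync -aHAX --delete /home/ /mnt/backup/home/   # archive mode, keeping hard links, ACLs, xattrs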
I’m surprised no one’s mentioned the security implications. Mounting with the nosuid and nodev options can undermine rootkit or privilege-escalation exploits.
Partitioning does have benefits, especially for enterprise scenarios. It allows you to specify different policies per mount point (e.g. no executables on /tmp). It prevents a runaway process from filling your hard disk with logs. It lets you keep your data separated from your OS, or have multiple OSs with the same home partition.
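A sketch of such per-mount-point policies in /etc/fstab (the device names are placeholders):

/dev/vg0/tmp   /tmp   ext4  defaults,nosuid,nodev,noexec  0  2
/dev/vg0/home  /home  ext4  defaults,nosuid,nodev         0  2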
For home use you’ll probably go with something simpler, like separate home, root, and games partitions, for instance.
Nowadays you should opt for LVM volumes or BTRFS subvolumes instead of partitions as these are way more flexible should you change your mind in the future about the sizes you allocated.
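For example, changing your mind later is cheap (the volume and subvolume names here are hypothetical):

lvextend -r -L +10G vg0/home         # grow an LVM volume and its filesystem in one go
btrfs subvolume create /mnt/@games   # btrfs subvolumes share one pool, so there are no sizes to get wrong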
Yeah, I really like the archinstall default btrfs layout: one subvolume for each of these mount points.
└─root 254:0    0   1.8T  0 crypt /var/log
                                  /var/cache/pacman/pkg
                                  /home
                                  /.snapshots
                                  /
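A rough sketch of how such a layout gets mounted (subvolume names follow archinstall’s usual @ convention, but check your own setup):

mount -o subvol=@ /dev/mapper/root /mnt
mkdir -p /mnt/{home,var/log,var/cache/pacman/pkg,.snapshots}
mount -o subvol=@home /dev/mapper/root /mnt/home
mount -o subvol=@log /dev/mapper/root /mnt/var/log
mount -o subvol=@pkg /dev/mapper/root /mnt/var/cache/pacman/pkg
mount -o subvol=@.snapshots /dev/mapper/root /mnt/.snapshots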
Partitioning has benefits, and it is quite easy to set up on a “modern gnu/linux” since they all use a graphical installer. For sizes you can refer to OpenBSD’s disklabel(8) man page.
It increases stability and security, and not only for enterprise.
/ and /boot are (arguably) all you need on a single disk system
Unless you need to dual-boot.
Why not put everything in one big partition?
https://marc.info/?l=openbsd-misc&m=154054091026039&w=3
A comment: the guy who made that video might be a troll; I reviewed his videos’ titles.
And such bullshit is much more accessible in plain text form.
This is mostly a worthless discussion. A computer/device should be considered disposable, as should all the data on it. Just sync everything in real time to a local “server” with something like Syncthing, and if something goes wrong with your machines, resync it back. Done.
Oh yeah, but did you know your server is a computer/device and therefore should be considered disposable, too? Checkmate, atheists! \s
Honestly, though, you’re not wrong about how always having multiple copies of your data on separate devices is essential. (You do however also need backups, not just synchronized copies, because data-destroying fuck-ups can get sync’d too.)
I’m not sure what your comment has to do with partitioning, though.
Ahahaha, nice comment. I never said I didn’t have backups; the thing is that once you get your data across multiple machines with something like Syncthing, your life becomes way better and things are easier to deal with. Even if my “server” dies I still have three more real-time copies of the data (or at least one actually real-time and two others a bit behind, because those machines aren’t always turned on), and the “server” backs up to another local drive plus a long-term offsite backup that gets updated from time to time.
People usually suggest partitioning their disks because they might need to reinstall the system, and that way the home directory “will be safe” from whatever mess forced them into a reinstall. In reality this just introduces unnecessary complexity, and it is about as likely to fail as a single-partition system. To be fair, I would consider a BTRFS sub-volume for home with regular snapshots way more interesting and manageable than just dumb partitions.
When the only tool you have is a hammer…
Will Syncthing help me dual boot, then? Or set up EFI? Or boot into a system that uses LVM for the root mount point, even if the boot loader doesn’t support LVM?
I think when people talk about partitions these days they typically mean things like LVM or sub-volumes. I would also recommend only having 1 or 2 physical disk partitions and then doing all your partitioning in software.
But the examples I provided above all require on-disk partitions to work. UEFI doesn’t know what a btrfs sub-volume is.