The previous approach used `df` to get the usable space and then added a
fixed size to that number to account for filesystem overhead. However,
at some point that stopped working for me. It appears that ext4
filesystem overhead can vary over time or with other factors.
(Certainly, now that I think about it, the old code would only have
worked well for people with exactly the same filesystem size as mine.)
So the new approach is to ignore what `df` tells us entirely and
instead go directly to the source: the filesystem's own record of
exactly how much space it occupies. We use `dumpe2fs` to read the
superblock and calculate the on-disk size dynamically from it (block
count times block size). Then we add the space that the boot data takes
up (unchanged), plus 5 MB of padding, because when I tested this it
didn't quite add up otherwise. https://unix.stackexchange.com/a/13551/29146
suggests that this unaccounted-for data may be, e.g., additional copies
of the superblock.
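
For illustration, a minimal sketch of the calculation (the image path
and variable names here are stand-ins, and the real code computes the
boot-data size elsewhere):

    # Read the ext4 superblock and compute the filesystem's exact
    # on-disk size as block count * block size.  "rootfs.img" is a
    # hypothetical stand-in for the actual image/device.
    fs_bytes=$(dumpe2fs -h rootfs.img 2>/dev/null \
      | awk -F: '/^Block count:/ {count=$2}
                 /^Block size:/  {size=$2}
                 END {print count * size}')

    boot_bytes=0                    # stand-in for the (unchanged) boot-data size
    pad_bytes=$((5 * 1024 * 1024))  # 5 MB for data the superblock omits

    total_bytes=$((fs_bytes + boot_bytes + pad_bytes))
    echo "$total_bytes"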