This adds the file-reflink storage driver. It is never selected
automatically for pool creation, in particular not for the creation of
'varlibqubes' (though it can be used if set up manually).
The code is quite small:

               reflink.py   lvm.py       file.py + block-snapshot
    sloccount  334 lines    447 (134%)   570 (171%)
Background: btrfs and XFS (but not yet ZFS) support instant copies of
individual files through the 'FICLONE' ioctl behind 'cp --reflink'.
file-reflink uses this ioctl to snapshot VM image files without an
extra device-mapper layer. All the snapshots are essentially
freestanding; there's no functional origin vs. snapshot distinction.
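For illustration, a minimal Python sketch of the same kernel interface
that 'cp --reflink' uses; the file names are placeholders:

    import fcntl

    # FICLONE ioctl number on Linux, per <linux/fs.h>: _IOW(0x94, 9, int)
    FICLONE = 0x40049409

    # Make 'clone.img' share all of its data extents with 'image.img'.
    # On btrfs/XFS this is instant and takes no extra space until one
    # of the two files is modified (copy-on-write).
    with open('image.img', 'rb') as src, open('clone.img', 'wb') as dst:
        fcntl.ioctl(dst.fileno(), FICLONE, src.fileno())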
In contrast to 'file'-on-btrfs, file-reflink inherently avoids
CoW-on-CoW. This is a bigger issue on R4.0, where even AppVMs'
private volumes are CoW. (And turning off the lower, filesystem-level
CoW for 'file'-on-btrfs images would turn off data checksums too, i.e.
protection against bit rot.)
Also in contrast to 'file', all storage features are supported (a
usage sketch follows the list), including
- any number of revisions_to_keep
- volume.revert()
- volume.is_outdated
- online fstrim/discard
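A hedged usage sketch of these features from a dom0 Python session; it
assumes a halted VM named 'foo' (an illustrative name), and the exact
volume API may differ:

    import qubes

    app = qubes.Qubes()
    volume = app.domains['foo'].volumes['private']

    # Each revision corresponds to an on-disk private.img@<timestamp> file.
    for revision in sorted(volume.revisions):
        print(revision)

    # Roll the volume back to its newest kept revision.
    volume.revert(max(volume.revisions))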
Example tree of a file-reflink pool - *-dirty.img are connected to Xen:
- /var/lib/testpool/appvms/foo/volatile-dirty.img
- /var/lib/testpool/appvms/foo/root-dirty.img
- /var/lib/testpool/appvms/foo/root.img
- /var/lib/testpool/appvms/foo/private-dirty.img
- /var/lib/testpool/appvms/foo/private.img
- /var/lib/testpool/appvms/foo/private.img@2018-01-02T03:04:05Z
- /var/lib/testpool/appvms/foo/private.img@2018-01-02T04:05:06Z
- /var/lib/testpool/appvms/foo/private.img@2018-01-02T05:06:07Z
- /var/lib/testpool/appvms/bar/...
- /var/lib/testpool/appvms/...
- /var/lib/testpool/template-vms/fedora-26/...
- /var/lib/testpool/template-vms/...
It looks similar to a 'file' pool tree, and in fact file-reflink is
drop-in compatible:
$ qvm-shutdown --all --wait
$ systemctl stop qubesd
$ sed 's/ driver="file"/ driver="file-reflink"/g' -i.bak /var/lib/qubes/qubes.xml
$ systemctl start qubesd
$ sudo rm -f /path/to/pool/*/*/*-cow.img*
If the user tries to create a fresh file-reflink pool on a filesystem
that doesn't support reflinks, qvm-pool will abort and mention the
'setup_check=no' option. Passing that option forces a fallback to
regular sparse copies, with of course lots of time/space overhead. The
same fallback code is also used when initially cloning a VM from a
foreign pool, or from another file-reflink pool on a different
mountpoint.
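A hedged sketch of that copy strategy, reduced to its essence (the
real driver lives in reflink.py; this is not its exact code):

    import fcntl

    FICLONE = 0x40049409  # <linux/fs.h>

    def copy_file(src_path, dst_path):
        '''Reflink a file if possible, else fall back to a sparse copy.'''
        with open(src_path, 'rb') as src, open(dst_path, 'wb') as dst:
            try:
                fcntl.ioctl(dst.fileno(), FICLONE, src.fileno())
            except OSError:
                # Cross-mountpoint or non-reflink filesystem: copy the
                # data block by block, seeking over zero-filled blocks
                # to keep the destination sparse.
                while True:
                    block = src.read(1 << 20)
                    if not block:
                        break
                    if block.count(0) == len(block):
                        dst.seek(len(block), 1)  # leave a hole
                    else:
                        dst.write(block)
                dst.truncate()  # fix up the size if the file ends in a hole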
'journalctl -fu qubesd' will show all file-reflink copy/rename/remove
operations on VM creation/startup/shutdown/etc.
* qubesos/pr/180:
vm/qubesvm: default to PVH unless PCI devices are assigned
vm/qubesvm: expose 'start_time' property over Admin API
vm/qubesvm: revert backup_timestamp to '%s' format
doc: link qvm-device man page for qvm-block, qvm-pci, qvm-usb
Test base functions of the dom0 module (creating a VM, setting a
property) and configuring the system inside a VM (through a DispVM).
The latter is done for each available template (the process uses salt
installed in that template, not copied from dom0).
QubesOS/qubes-issues#3316
This qrexec call is meant for services which require some kind of
"registering" before use. After registering, the backend should invoke
this call with the frontend as the intended destination, with the
actual service in the argument of this call and the argument as the
payload.
By default this qrexec call is disabled by policy.
Signed-off-by: Wojtek Porczyk <woju@invisiblethingslab.com>
Add an auto_cleanup property, which removes the DispVM after its
shutdown - this unifies DispVM handling, leaving fewer places that
need special handling after DispVM shutdown.
A new DispVM inherits all settings from its respective AppVM. Move
this from the classmethod `DispVM.from_appvm()` to the DispVM
constructor. This unifies creating a new DispVM with any other VM
class.
A notable exception is attached devices - because only one running VM
can have a given device attached, inheriting them would prevent a
second DispVM being started from the same AppVM. If one needs a DispVM
with some device attached, one can create a DispVM with
auto_cleanup=False, as sketched below. Such a DispVM will still not
have persistent storage (like any other DispVM).
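A hedged sketch of that, from a dom0 Python session; the VM names are
illustrative and the exact constructor arguments may differ:

    import qubes

    app = qubes.Qubes()
    # A DispVM created explicitly, with cleanup on shutdown disabled.
    dispvm = app.add_new_vm('DispVM', name='disp-with-dev',
                            template=app.domains['my-appvm'],
                            label='red', auto_cleanup=False)
    app.save()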
Tests included.
QubesOS/qubes-issues#2974
* services:
tests: check clockvm-related handlers
doc: include list of extensions
qubesvm: fix docstring
ext/services: move exporting 'service.*' features to extensions
app: update handling features/service of ClockVM
Since qubesd properly handles chained startup of sys-net->sys-firewall
etc., we don't need a separate service to start the netvm explicitly
earlier.
Fixes QubesOS/qubes-issues#2533
The clock synchronization mechanism is rewritten to use
systemd-timesyncd instead of ntpdate; at the moment, this requires:
- modifying /etc/qubes-rpc/policy/qubes.GetDate to redirect GetDate to the designated clockvm
- enabling the clocksync service in the clockvm (qvm-features clockvm-name service/clocksync true)
Works as specified in the issue listed below, except that:
- each VM syncs with the clockvm after boot and every 6h
- the clockvm syncs time with the Internet using systemd-timesyncd
- dom0 syncs itself with the clockvm every 1h (using cron)
Fixes QubesOS/qubes-issues#1230
This eases Admin API administration, and also adds a check that the
qrexec policy + scripts match the actual Admin API method
implementations.
The idea is to classify every Admin API method as either local
read-only, local read-write, global read-only or global read-write,
where local/global means affecting a single VM or the whole system.
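For illustration, a hedged sketch of such a consistency check; the
category sets and the comparison below are assumptions, not the actual
script:

    import os

    # Hypothetical classification; only a few real method names shown.
    LOCAL_RO = {'admin.vm.List', 'admin.vm.property.Get'}
    LOCAL_RW = {'admin.vm.property.Set'}
    GLOBAL_RO = {'admin.property.Get'}
    GLOBAL_RW = {'admin.pool.Add'}

    classified = LOCAL_RO | LOCAL_RW | GLOBAL_RO | GLOBAL_RW
    installed = {name for name in os.listdir('/etc/qubes-rpc')
                 if name.startswith('admin.')}

    # Every classified method needs a qrexec script, and vice versa.
    assert classified <= installed, classified - installed
    assert installed <= classified, installed - classified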
See QubesOS/qubes-issues#2871 for details.
Fixes QubesOS/qubes-issues#2871
Make qubes.NotifyTools reuse the logic of qubes.FeaturesRequest, then
move the actual request processing to a 'features-request' event
handler. At the same time, implement handling of 'qrexec' and 'gui'
feature requests - allowing template features to be set when they
weren't already there.
Behavior change: the template is no longer allowed to change a
feature's value (regardless of it being True or False). This means the
user will always be able to override what the template has set.
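A minimal sketch of such a handler, assuming the qubes.ext extension
API; the class name is illustrative:

    import qubes.ext

    class ToolsFeatures(qubes.ext.Extension):
        @qubes.ext.handler('features-request')
        def on_features_request(self, vm, event, untrusted_features):
            for feature in ('qrexec', 'gui'):
                # Set the feature only if it isn't already there, so a
                # value already set (e.g. by the user) is never changed.
                if (feature in untrusted_features
                        and feature not in vm.features):
                    vm.features[feature] = untrusted_features[feature]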
Install files in /etc/qubes-rpc for all methods defined in the API
documentation, even those not yet implemented (qubesd will handle them
by raising an appropriate exception).
Use a minimal program written in C (qubesd-query-fast) instead of the
Python qubesd-query, for performance reasons:
- a single qubesd-query run: ~300ms
- equivalent in shell (echo | nc -U): ~40ms
- qubesd-query-fast: ~20ms
Many tools make multiple API calls, so performance here does matter.
For example qvm-ls (from a VM) currently takes about 60s on a system
with 24 VMs.
Also make use of the `$include:` directive in the policy files, to
make it easier to define a VM with full Admin API access.
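For example, each per-method policy file can hold a single include
line, so granting a management VM access means editing only the shared
include file; the paths and VM name here are illustrative:

    # /etc/qubes-rpc/policy/admin.vm.List
    $include:/etc/qubes-rpc/policy/include/admin-local-ro

    # /etc/qubes-rpc/policy/include/admin-local-ro
    my-mgmt-vm $anyvm allow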
QubesOS/qubes-issues#853
This service currently does more harm (desyncing libvirt state from
reality) than good. Since we have qubesd, we may come back to
implementing it properly using libvirt events.
It doesn't really make sense to keep man pages in a separate package.
Previously it was done to avoid some build dependencies (pandoc
requires a lot of them), but that isn't a problem anymore.
All of this has either been migrated to core3, or is not needed
anymore. There is still the qvm-tools directory with a few tools that
need to be migrated, or installed as-is.