* tests-20181223:
tests: drop expectedFailure from qubes_desktop_run test
tests: grub in HVM qubes
tests: update dom0_update for new updates available flag
tests: regression test LVM listing code
tests/extra: wrap ProcessWrapper.wait() to be asyncio-aware
tests: adjust backupcompat for new maxmem handling
- Two new methods: .features.check_with_adminvm() and
.check_with_template_and_adminvm(). Common code refactored.
- Two new AdminAPI calls to take advantage of the methods:
- admin.vm.feature.CheckWithAdminVM
- admin.vm.feature.CheckWithTemplateAndAdminVM
- Features manager moved to separate module in anticipation of features
on app object in R5.0. The attribute Features.vm renamed to
Features.subject.
- Documentation, tests.
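For example, the new calls can be invoked over qrexec from a management VM
(a sketch; the VM name 'work' and feature name 'gui' are placeholders, and
qrexec policy must allow the calls):
$ qrexec-client-vm work admin.vm.feature.CheckWithAdminVM+gui
$ qrexec-client-vm work admin.vm.feature.CheckWithTemplateAndAdminVM+gui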
Handle the 'os' feature - if it's Windows, then set the rpc-clipboard feature.
Handle the 'gui-emulated' feature - a request specifically for the stubdomain GUI.
With the 'gui' feature alone it is only possible to enable the gui-agent based
GUI, or to disable GUI completely.
Handle 'default-user' - verify it for weird characters and set the
'default_user' property (if it wasn't already set).
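For illustration, these features can be set from dom0 with qvm-features
(the VM name 'win7' is just an example):
$ qvm-features win7 os Windows
$ qvm-features win7 gui-emulated 1
$ qvm-features win7 default-user user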
QubesOS/qubes-issues#3585
This enables two things:
1. Following the global clockvm setting, without adjusting qrexec policy.
2. Avoiding having the clockvm started by an arbitrary VM.
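The global setting in question is the clockvm property, e.g. (assuming
sys-net is to act as the clock VM):
$ qubes-prefs clockvm sys-net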
Fixes QubesOS/qubes-issues#3588
This adds the file-reflink storage driver. It is never selected
automatically for pool creation, especially not the creation of
'varlibqubes' (though it can be used if set up manually).
The code is quite small:
             reflink.py   lvm.py       file.py + block-snapshot
  sloccount  334 lines    447 (134%)   570 (171%)
Background: btrfs and XFS (but not yet ZFS) support instant copies of
individual files through the 'FICLONE' ioctl behind 'cp --reflink',
which file-reflink uses to snapshot VM image files without an extra
device-mapper layer. All snapshots are essentially freestanding;
there's no functional origin vs. snapshot distinction.
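The same mechanism can be tried by hand on a btrfs or XFS mountpoint
(file names are illustrative):
$ cp --reflink=always private.img private-copy.img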
In contrast to 'file'-on-btrfs, file-reflink inherently avoids
CoW-on-CoW, which is a bigger issue on R4.0, where even AppVMs'
private volumes are CoW. (And turning off the lower, filesystem-level
CoW for 'file'-on-btrfs images would turn off data checksums too, i.e.
protection against bit rot.)
Also in contrast to 'file', all storage features are supported,
including
- any number of revisions_to_keep
- volume.revert()
- volume.is_outdated
- online fstrim/discard
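For example, reverting a volume to its most recent revision (a sketch;
'foo' is a placeholder VM name):
$ qvm-volume revert foo:private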
Example tree of a file-reflink pool - the *-dirty.img files are connected to Xen:
- /var/lib/testpool/appvms/foo/volatile-dirty.img
- /var/lib/testpool/appvms/foo/root-dirty.img
- /var/lib/testpool/appvms/foo/root.img
- /var/lib/testpool/appvms/foo/private-dirty.img
- /var/lib/testpool/appvms/foo/private.img
- /var/lib/testpool/appvms/foo/private.img@2018-01-02T03:04:05Z
- /var/lib/testpool/appvms/foo/private.img@2018-01-02T04:05:06Z
- /var/lib/testpool/appvms/foo/private.img@2018-01-02T05:06:07Z
- /var/lib/testpool/appvms/bar/...
- /var/lib/testpool/appvms/...
- /var/lib/testpool/template-vms/fedora-26/...
- /var/lib/testpool/template-vms/...
It looks similar to a 'file' pool tree, and in fact file-reflink is
drop-in compatible:
$ qvm-shutdown --all --wait
$ systemctl stop qubesd
$ sed 's/ driver="file"/ driver="file-reflink"/g' -i.bak /var/lib/qubes/qubes.xml
$ systemctl start qubesd
$ sudo rm -f /path/to/pool/*/*/*-cow.img*
If the user tries to create a fresh file-reflink pool on a filesystem
that doesn't support reflinks, qvm-pool will abort and mention the
'setup_check=no' option, which can be passed to force a fallback to
regular sparse copies - with, of course, lots of time/space overhead.
The same fallback code is also used when initially cloning a VM from a
foreign pool, or from another file-reflink pool on a different
mountpoint.
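Creating such a pool with the check disabled might look roughly like this
(pool name and path are examples):
$ qvm-pool --add testpool file-reflink -o dir_path=/var/lib/testpool,setup_check=no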
'journalctl -fu qubesd' will show all file-reflink copy/rename/remove
operations on VM creation/startup/shutdown/etc.
When expiring rules are present, the firewall needs to be reloaded once
those rules expire. Previously a systemd timer was used to trigger this
action, but since we have our own daemon now, the timer is no longer
necessary - use the daemon for it instead.
Additionally, automatic removal of expired rules was completely broken
in R4.0.
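An expiring rule can be added e.g. like this (a sketch; the VM name and
address are placeholders, and the relative 'expire=+seconds' form is
assumed to be accepted by qvm-firewall):
$ qvm-firewall work add accept dsthost=198.51.100.1 expire=+300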
Fixes QubesOS/qubes-issues#1173
* qubesos/pr/180:
vm/qubesvm: default to PVH unless PCI devices are assigned
vm/qubesvm: expose 'start_time' property over Admin API
vm/qubesvm: revert backup_timestamp to '%s' format
doc: link qvm-device man page for qvm-block, qvm-pci, qvm-usb
Test basic functions of the dom0 module (creating a VM, setting a
property) and configuring the system inside a VM (through a DispVM). The
latter is done for each available template (the process uses salt
installed in that template, not copied from dom0).
QubesOS/qubes-issues#3316
This qrexec call is meant for services which require some kind of
"registering" before use. After registering, the backend should invoke
this call with the frontend as the intended destination, with the actual
service in the argument of this call and the argument as the payload.
By default this qrexec call is disabled by policy.
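Schematically, the backend would run something like the following, where
example.Register stands in for this call, and 'frontend-vm',
'example.Service' and 'ARGUMENT' are hypothetical:
$ echo -n ARGUMENT | qrexec-client-vm frontend-vm example.Register+example.Service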
Signed-off-by: Wojtek Porczyk <woju@invisiblethingslab.com>
Add the auto_cleanup property, which removes the DispVM after its
shutdown - this unifies DispVM handling, leaving fewer places that need
special handling after DispVM shutdown.
A new DispVM inherits all settings from the respective AppVM. Move this
from the classmethod `DispVM.from_appvm()` to the DispVM constructor.
This unifies creating a new DispVM with any other VM class.
A notable exception are attached devices - because only one running VM
can have a given device attached, inheriting them would prevent a second
DispVM being started from the same AppVM. If one needs a DispVM with
some device attached, one can create a DispVM with auto_cleanup=False.
Such a DispVM will still not have persistent storage (like any other
DispVM).
Tests included.
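For example, such a DispVM could be created like this (a sketch; name,
template and label are placeholders):
$ qvm-create --class DispVM --template fedora-26-dvm --label red --property auto_cleanup=False disp-usb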
QubesOS/qubes-issues#2974
* services:
tests: check clockvm-related handlers
doc: include list of extensions
qubesvm: fix docstring
ext/services: move exporting 'service.*' features to extensions
app: update handling features/service of ClockVM
Since qubesd properly handles chained startup of sys-net -> sys-firewall
etc., we don't need a separate service to start the netvm explicitly
earlier.
Fixes QubesOS/qubes-issues#2533
Clock synchronization mechanism rewritten to use systemd-timesyncd instead of NtpDate; at the moment, this requires:
- modifying /etc/qubes-rpc/policy/qubes.GetDate to redirect GetDate to the designated clockvm
- enabling the clocksync service in the clockvm (qvm-features <clockvm-name> service.clocksync true)
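The policy redirect could look roughly like this in
/etc/qubes-rpc/policy/qubes.GetDate (assuming sys-net is the designated
clockvm):
$anyvm $anyvm allow,target=sys-net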
Works as specified in the issue listed below, except for the following:
- each VM syncs with the clockvm after boot and every 6h
- the clockvm syncs time with the Internet using systemd-timesyncd
- dom0 syncs itself with the clockvm every 1h (using cron)
Fixes QubesOS/qubes-issues#1230
This eases Admin API administration, and also adds a check that the
qrexec policy + scripts match the actual Admin API method
implementations.
The idea is to classify every Admin API method as either local
read-only, local read-write, global read-only or global read-write,
where local/global means affecting a single VM or the whole system.
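Each per-method policy file can then simply include the shared policy of
its class, e.g. /etc/qubes-rpc/policy/admin.vm.property.Get containing
(a sketch, assuming include files named after the four classes):
$include:/etc/qubes-rpc/policy/include/admin-local-ro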
See QubesOS/qubes-issues#2871 for details.
Fixes QubesOS/qubes-issues#2871