* origin/pr/326:
ext/admin: workaround for extension's __init__() called multiple times
tests: teardown fixes
travis: include core-qrexec in tests
api/admin: (ext/admin) limit listing VMs based on qrexec policy
api/internal: extract get_system_info() function
... during tests.
The qubes.ext.Extension class is a weird thing that tries to make each
extension a singleton. Unfortunately, this has a side effect: __init__()
is called separately for each "instance" (created in Qubes()'s
__init__()), even though it is really the same object. During normal
execution this isn't an issue, because there is just one Qubes() object
instance. But during tests, multiple objects are created.
In this particular case, it caused PolicyCache() to be created twice,
with the second one overriding the first without properly cleaning it
up. This leaks a file descriptor (the inotify one). The fact that
cleanup() was called twice too didn't help, because it was really called
on the same object; the one requiring cleanup was already gone.
Work around this by checking whether the policy_cache field is already
initialized and avoiding re-initializing it. Also, on Qubes() object
cleanup, remove that field so it can be properly initialized on the next
test iteration.
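A minimal sketch of that workaround, assuming the PolicyCache class
(with initialize()/cleanup() methods) from core-qrexec and a
'qubes-close' system event for the cleanup hook (treat both as
assumptions about the surrounding code):

    import qubes.ext
    from qrexec.policy.utils import PolicyCache

    class AdminExtension(qubes.ext.Extension):
        def __init__(self):
            super().__init__()
            # __init__() may run again on the same singleton object (once
            # per Qubes() instance in tests); create the cache only if it
            # isn't there yet, to avoid leaking the previous inotify fd
            if not hasattr(self, 'policy_cache'):
                self.policy_cache = PolicyCache()
                self.policy_cache.initialize()

        @qubes.ext.handler('qubes-close', system=True)
        def on_qubes_close(self, app, event):
            # drop the field on Qubes() cleanup, so the next test
            # iteration initializes it from scratch
            if hasattr(self, 'policy_cache'):
                self.policy_cache.cleanup()
                del self.policy_cache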
Various Admin API calls, when directed at dom0, retrieve a global system
view instead of a specific VM's state. This applies to admin.vm.List
(which, called at dom0, retrieves the full VM list) and admin.Events
(which, called at dom0, listens for events of all the VMs). This makes
it tricky to configure a management VM with access to only a limited set
of VMs, because many tools require the ability to list VMs, and that
would return the full list.
Fix this issue by adding a filter to the admin.vm.List and admin.Events
calls (using event handlers in AdminExtension) that restricts the output
according to the qrexec policy. This version evaluates the policy for
each VM or event (but loads it only once). If performance turns out to
be an issue, this can be optimized later.
Fixes QubesOS/qubes-issues#5509
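Conceptually the filter works like this (a toy sketch: a plain dict
stands in for the real qrexec policy evaluation, and all names here are
illustrative):

    def make_vm_filter(policy, src_vm):
        # toy stand-in for evaluating the qrexec policy of admin.vm.List:
        # 'policy' maps (source, destination) pairs to 'allow'/'deny'
        def allowed(dest_vm):
            return policy.get((src_vm, dest_vm)) == 'allow'
        return allowed

    # the management VM sees only the VMs the policy allows
    policy = {('mgmt', 'work'): 'allow', ('mgmt', 'vault'): 'deny'}
    visible = [vm for vm in ['work', 'vault']
               if make_vm_filter(policy, 'mgmt')(vm)]
    assert visible == ['work']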
* origin/pr/295:
tests: fix tag name in audiovm test
tests: ensure notin while setting Audio/Gui VM
gui: add checks for changing/removing guivm
audio: add checks for changing/removing audiovm
audio/gui: use simply vm.tags instead of list()
tests: fix tests for gui/audio vm
Make pylint happy
gui/audio: fixes from Marek's comments
Allow AudioVM to be run after any attached qubes
Allow GuiVM to be run after any attached qubes
xid: ensure vm is not running
tests: fix missing default audiovm and guivm tags
gui, audio: better handling of start/stop guivm/audiovm
gui, audio: ensure guivm and audiovm tag are set
Support for AudioVM
Only the first 4 disks can be emulated as IDE disks by QEMU.
Specifically, a CDROM must be one of those first 4 disks, otherwise it
will be ignored. This is especially important if one wants to boot the
VM from that CDROM.
Since xvdd normally is a kernel-related volume (boot image, modules), it
makes perfect sense to re-use it for the CDROM. It is either used as the
kernel volume (in which case the VM should boot from it and not from the
CDROM), or as a (possibly bootable) CDROM.
This needs to be done in two places:
- BlockExtension, for dynamic attach
- libvirt xen.xml, for before-boot attach
In theory the latter would be enough, but it would be quite confusing
for the device to get different options depending on when it is attached
(in addition to depending on whether the kernel is set, a dependency
introduced here).
All this also means that xvdd is not always a "system disk". Adjust the
listing of connected disks accordingly.
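The naming rule boils down to something like this (a simplified,
self-contained sketch; the real BlockExtension code operates on its own
device classes):

    import string

    def block_device_name(used_names, is_cdrom, has_kernel):
        # QEMU emulates only the first four disks (xvda..xvdd) as IDE, so
        # a bootable CDROM must get one of those names; xvdd is normally
        # the kernel/modules volume, so it is free when no kernel is set
        if is_cdrom and not has_kernel:
            return 'xvdd'
        # otherwise take the first free name after the system disks
        for letter in string.ascii_lowercase[4:]:   # xvde, xvdf, ...
            name = 'xvd' + letter
            if name not in used_names:
                return name
        raise RuntimeError('no free device name')

    assert block_device_name(set(), is_cdrom=True, has_kernel=False) == 'xvdd'
    assert block_device_name({'xvde'}, is_cdrom=False, has_kernel=True) == 'xvdf'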
Migrate the meminfo-writer=False service setting to maxmem=0 as the
method to disable dynamic memory management. Remove the service from the
vm.features dict in the process.
Additionally, translate any attempt to set the service.meminfo-writer
feature into either setting maxmem=0 or resetting it to the default
(which is memory balancing enabled, if supported by the given domain).
This is to at least partially avoid breaking existing tools that use
service.meminfo-writer as a way to control dynamic memory management.
This code does _not_ support reading the service.meminfo-writer feature
state to get the current state of dynamic memory management, as that
would require synchronizing with all the factors affecting its value.
One of the main reasons for migrating to the maxmem=0 approach is to
avoid the need for such synchronization.
QubesOS/qubes-issues#4480
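The translation can be sketched as an extension handler along these
lines (the event name follows the usual qubes.ext convention for
feature-set events, but the exact hook and value handling here are
assumptions, not the actual code):

    import qubes.ext

    class MemBalanceCompat(qubes.ext.Extension):
        @qubes.ext.handler('domain-feature-set:service.meminfo-writer')
        def on_meminfo_writer_set(self, vm, event, feature, value,
                oldvalue=None):
            if value == '1':
                # enabling the service: reset maxmem to its default
                # (balancing enabled, if the domain supports it)
                del vm.maxmem
            else:
                # disabling the service: turn balancing off
                vm.maxmem = 0
            # do not keep the feature itself; maxmem is the single
            # source of truth for dynamic memory management
            del vm.features[feature]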
Use maxmem=0 for disabling dynamic memory balance, instead of the
cryptic service.meminfo-writer feature. Under the hood, the
meminfo-writer service is also set based on the maxmem property
(directly in qubesdb, not via the vm.features dict).
Having this as a property (not a "feature") allows sensible handling of
the default value. Specifically, disable it automatically if it would
otherwise crash a VM. This is the case for:
- a domain with PCI devices (PoD is not supported by Xen then)
- a domain without a balloon driver and/or the meminfo-writer service
The check for the latter is a heuristic (assume that the presence of
'qrexec' also indicates balloon driver support), but it holds true for
currently supported systems.
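That default boils down to something like this (a toy, self-contained
version; the real property default in qubes-core-admin differs in
detail, e.g. the exact ceiling for balancing):

    def default_maxmem(has_pci_devices, has_meminfo_writer, memory):
        # balancing could crash the VM in these cases, so disable it:
        # with PCI devices Xen cannot use Populate-on-Demand, and without
        # a balloon driver / meminfo-writer the VM would deplete the PoD
        # pool (the balloon-driver check is the qrexec heuristic above)
        if has_pci_devices or not has_meminfo_writer:
            return 0
        # otherwise allow balancing up to some multiple of initial memory
        return 4 * memory

    assert default_maxmem(True, True, 400) == 0
    assert default_maxmem(False, False, 400) == 0
    assert default_maxmem(False, True, 400) == 1600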
This also allows more reliable control of the libvirt config: do not set
memory != maxmem unless qmemman is enabled.
memory != maxmem makes sense only if qmemman is enabled for the given
domain. Besides wasting some domain resources on extra page tables etc.,
for HVM domains it is harmful, because the maxmem-memory difference is
backed by a Populate-on-Demand pool which, when depleted, kills the
domain. This means a domain without a balloon driver will die as soon as
it tries to use more than its initial memory; without the balloon driver
it sees maxmem memory and doesn't know about the lower limit.
Fixes QubesOS/qubes-issues#4135
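The resulting rule for the libvirt config can be expressed as (a minimal
sketch; function name and units are illustrative):

    def libvirt_memory_settings(memory, maxmem):
        # keep memory != maxmem only when qmemman manages the domain
        # (maxmem > 0); otherwise the maxmem-memory gap would become a
        # Populate-on-Demand pool that kills the domain once depleted
        if maxmem > 0:
            return memory, maxmem
        return memory, memory   # no balancing: fully populated from start

    assert libvirt_memory_settings(400, 4000) == (400, 4000)
    assert libvirt_memory_settings(400, 0) == (400, 400)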
It makes a lot of sense to perform long-running operations in that event
handler, including calling back into the VM. Allow that by firing the
event with fire_event_async, not just fire_event.
Also, document the event.
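For illustration, an async handler enabled by this change could look
like the following (the 'domain-start' event and the qrexec service name
are placeholders chosen for this example, not the event the commit is
about):

    import qubes.ext

    class MyExtension(qubes.ext.Extension):
        @qubes.ext.handler('domain-start')
        async def on_domain_start(self, vm, event, **kwargs):
            # long-running work, including calling back into the VM, is
            # fine here: the coroutine is awaited by the caller when the
            # event is fired with fire_event_async()
            await vm.run_service_for_stdio('my.Service')  # hypothetical

    # firing side, inside a coroutine:
    #     await vm.fire_event_async('domain-start')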