Allow using the default feature value from the netvm, not the template.
This makes sense for network-related features, like using Tor or
supporting IPv6. Similarly to check_with_template, expose it also on the
Admin API.
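A minimal sketch of the intended lookup order (the helper is shown
standalone here; the real API mirrors check_with_template):

    def check_with_netvm(vm, feature, default=None):
        # Prefer the VM's own feature; otherwise fall back along the
        # netvm chain; return the default when nothing is set anywhere.
        if feature in vm.features:
            return vm.features[feature]
        if vm.netvm is not None:
            return check_with_netvm(vm.netvm, feature, default)
        return default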
Having both default_netvm and default_fw_netvm causes a lot of
confusion, because it isn't clear to the user which one is used when.
Additionally, changing the provides_network property may also change the
netvm property, which may be an unintended effect. As a whole, this
makes it hard to:
- cover all netvm-changing actions with policy for the Admin API
- cover all netvm-changing events (for example to apply the change to
  the running VM, or to check for netvm loops)
As suggested by @qubesuser, kill the default_fw_netvm property and
simplify the logic around it.
Since we're past rc1, also implement migration logic, and add tests for
said migration.
Fixes QubesOS/qubes-issues#3247
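Roughly, the migration boils down to pinning the old default on the VMs
that relied on it; a simplified sketch (attribute handling is
illustrative):

    def migrate_default_fw_netvm(app):
        # Copy the old global default onto every net-providing VM that
        # left netvm at its default, so dropping default_fw_netvm does
        # not silently change their effective netvm.
        fw_netvm = getattr(app, 'default_fw_netvm', None)
        if fw_netvm is None:
            return
        for vm in app.domains:
            if getattr(vm, 'provides_network', False) \
                    and vm.property_is_default('netvm'):
                vm.netvm = fw_netvm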
* qubesos/pr/166:
create "lvm" pool using rootfs thin pool instead of hardcoding qubes_dom0-pool00
change default pool code to be fast
cache PropertyHolder.property_list and use O(1) property name lookups
remove unused netid code
cache isinstance(default, collections.Callable)
don't access netvm if it's None in visible_gateway/netmask
There were many cases where the check was missing:
- changing default_netvm
- resetting netvm to default value
- loading already broken qubes.xml
Since it was possible to create a broken qubes.xml using legal calls, do
not reject loading such a file; instead, break the loop(s) by setting
netvm to None when a loop is detected. This will also be useful if not
all places are covered yet...
Place the check in the default_netvm setter. Skip it during qubes.xml
loading (when events_enabled=False), but still keep it in the setter, to
_validate_ the value before any property-* event gets fired.
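The check itself is a walk along the chain of netvms from the proposed
value, bailing out if it ever reaches the VM being configured; a sketch:

    import qubes.exc

    def check_netvm_loop(vm, new_netvm):
        # Follow netvm links from the proposed value; reaching the VM
        # being configured means the assignment would create a loop.
        node = new_netvm
        while node is not None:
            if node is vm:
                raise qubes.exc.QubesValueError(
                    'setting such netvm would create a loop')
            node = node.netvm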
If an HVM has a PCI device, it can't use PoD, so it needs 'maxmem'
memory to be started. Request that much from qmemman.
Note that this is somewhat independent of whether dynamic memory
management is enabled for the VM (`service.meminfo-writer` feature).
Even if the VM initially had maxmem memory assigned, it can later be
ballooned down.
QubesOS/qubes-issues#3207
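A sketch of the resulting decision (device access shown
DeviceCollection-style):

    def initial_memory(vm):
        # With a PCI device attached, populate-on-demand is unavailable,
        # so the full maxmem must be allocated up front; otherwise the
        # usual 'memory' amount suffices and can be ballooned later.
        if any(vm.devices['pci'].persistent()):
            return vm.maxmem
        return vm.memory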
* 20171107-tests-backup-api-misc:
test: make race condition on xterm close less likely
tests/backupcompatibility: fix handling 'internal' property
backup: fix handling target write error (like no disk space)
tests/backupcompatibility: drop R1 format tests
backup: use offline_mode for backup collection
qubespolicy: fix handling '$adminvm' target with ask action
app: drop reference to libvirt object after undefining it
vm: always log startup fail
api: do not log handled errors sent to a client
tests/backups: convert to new restore handling - using qubesadmin module
app: clarify error message on failed domain remove (used somewhere)
Fix qubes-core.service ordering
The previous version did not ensure that the stopped/shutdown event was
handled before a new VM start. This can easily lead to problems like in
QubesOS/qubes-issues#3164.
This improved version now ensures that the stopped/shutdown events are
handled before a new VM start.
Additionally this version should be more robust against unreliable
events from libvirt. It handles missing, duplicated and delayed stopped
events.
Instead of one 'domain-shutdown' event there are now 'domain-stopped'
and 'domain-shutdown'. The latter is generated after the former. This
way it's easy to run code after the VM is shut down, including stopping
its storage.
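For consumers this means storage teardown (and similar cleanup) can hook
the later event; a sketch of an extension handler:

    import qubes.ext

    class ShutdownLogger(qubes.ext.Extension):
        # 'domain-stopped' fires when libvirt reports the domain gone;
        # 'domain-shutdown' follows, once it is safe to e.g. stop
        # the VM's storage.
        @qubes.ext.handler('domain-shutdown')
        def on_domain_shutdown(self, vm, event, **kwargs):
            vm.log.info('%s fully shut down', vm.name)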
If there was some netvm set, unset it first (same as with an ordinary
set). Otherwise it will try to attach the new netvm without detaching
the old one first.
vm.create_qdb_entries can be called multiple times - for example when
changing the VM's IP. Move starting the qdb watcher to start(). And just
in case, clean up the old watcher (if it still exists) before starting a
new one.
This fixes one FD leak.
Catch the exception there and log it. Otherwise asyncio complains about
a never-retrieved exception. There is no one else to handle it, because
the shutdown event is triggered by libvirt, not any Admin API call.
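A sketch of the pattern (the logger name and the wrapped coroutine are
examples):

    import asyncio
    import logging

    log = logging.getLogger('qubes.vm')

    def log_task_exception(task):
        # Retrieve the task's exception so asyncio does not warn about
        # a never-retrieved one; nobody upstream can handle it anyway.
        try:
            task.result()
        except asyncio.CancelledError:
            pass
        except Exception:  # pylint: disable=broad-except
            log.exception('exception in shutdown handler')

    # usage: asyncio.ensure_future(coro).add_done_callback(log_task_exception)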
Allow getting the domain class as a property, instead of using an
admin.vm.List call. This makes it unnecessary to call admin.vm.List on
the client side to construct the wrapper object.
There is intentionally no default template in terms of qubes.property
definition, to avoid problems when switching the global default_template
property - like breaking some VMs, or forcing the user to shut down all
of them for this. But this also means it shouldn't be allowed to reset
the template to the "default" value, because it would result in a VM
without any template at all.
Fixes QubesOS/qubes-issues#3115
If VM startup failed before starting anything (even in paused state),
there will be no further event, not even domain-shutdown. This makes it
hard for event-listening applications (like a domains tray) to track
domain state. Fix this by emitting a domain-start-failed event in every
case of failed startup after domain-pre-start was emitted.
Related QubesOS/qubes-issues#3100
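A client-side sketch of consuming the new event (using the qubesadmin
events dispatcher; the handler body is an example):

    import asyncio
    import qubesadmin
    import qubesadmin.events

    def on_start_failed(vm, event, **kwargs):
        # domain-pre-start is now always followed by either
        # domain-shutdown or domain-start-failed.
        print('startup of {} failed'.format(vm.name))

    app = qubesadmin.Qubes()
    dispatcher = qubesadmin.events.EventsDispatcher(app)
    dispatcher.add_handler('domain-start-failed', on_start_failed)
    asyncio.get_event_loop().run_until_complete(
        dispatcher.listen_for_events())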
* qubesos/pr/150:
qubes/tests: moar fixes
test-packages: add missing libvirt classes
qubes/tests: do not deadlock on .drain()
qubes/vm: put name= first in __repr__
tests: fix some memory leaks
tests: complain about memory leaks
tests: use one event loop and one libvirtaio impl
The 'dispvm_allowed' name was confusing, because it suggested being able
to spawn new DispVMs, not being a template for DispVMs.
Fixes QubesOS/qubes-issues#3047
Clone properties from the DispVM template after setting base properties
(qid, name, uuid). This means we can use the standard clone_properties()
function. Otherwise various setters may fail - for example, the netvm
setter requires the uuid property to be initialized (for VM lookup in
the VM collection).
Also, make the dispvm_allowed check more robust - include direct
creation of a DispVM, and also check just before VM startup (in case the
property was changed in the meantime).
Fixes QubesOS/qubes-issues#3057
This is useful for selecting the default DispVM template for VMs started
directly by the user. This makes sense as long as AdminVM == GUIVM.
QubesOS/qubes-issues#2974
Add an auto_cleanup property, which removes the DispVM after its
shutdown - this is to unify DispVM handling - fewer places needing
special handling after DispVM shutdown.
A new DispVM inherits all settings from the respective AppVM. Move this
from the classmethod `DispVM.from_appvm()` to the DispVM constructor.
This unifies creating a new DispVM with any other VM class.
A notable exception is attached devices - because only one running VM
can have a given device attached, this would prevent a second DispVM
being started from the same AppVM. If one needs a DispVM with some
device attached, one can create a DispVM with auto_cleanup=False (see
the sketch below). Such a DispVM will still not have persistent storage
(like any other DispVM).
Tests included.
QubesOS/qubes-issues#2974
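A client-side sketch of the device use case (the template and label
names are examples):

    import qubesadmin

    app = qubesadmin.Qubes()
    # A DispVM that is not removed automatically after shutdown, so a
    # device can stay attached to it; it still has no persistent
    # storage.
    dispvm = app.add_new_vm('DispVM', 'disp-with-dev', 'red',
                            template='work-dvm')
    dispvm.auto_cleanup = False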
* services:
tests: check clockvm-related handlers
doc: include list of extensions
qubesvm: fix docstring
ext/services: move exporting 'service.*' features to extensions
app: update handling features/service of ClockVM
* tests-storage:
tests: register libvirt events
tests: even more aggressive cleanup in tearDown
app: do not wrap libvirt_conn.close() in auto-reconnect wrapper
api: keep track of established connections
tests: drop VM cleanup from tearDownClass, fix asyncio usage in tearDown
storage: fix Storage.clone and Storage.clone_volume
tests: more tests fixes
firewall: raise ValueError on invalid hostname in dsthost=
qmemman: don't load qubes.xml
tests: fix AdminVM test
tests: create temporary files in /tmp
tests: remove renaming test - it isn't supported anymore
tests: various fixes for storage tests
tests: fix removing LVM volumes
tests: fix asyncio usage in some tests
tests: minor fixes to api/admin tests
storage/file: create -cow.img only when needed
storage: move volume_config['source'] filling to one place
app: do not create 'default' storage pool
app: add missing setters for default_pool* global properties
Don't set the 'source' volume in various places (each VM class
constructor etc); do it as part of volume initialization. And when it
needs to be recalculated, call storage.init_volume again.
This code was duplicated and, as usual in such cases, the copies
differed - one set 'size', the other did not.
QubesOS/qubes-issues#2256
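With the filling centralized, recalculation becomes re-running the
initializer; a sketch:

    def refresh_volume(vm, name):
        # Re-derive 'source' (and friends) in the single central place,
        # instead of duplicating the logic in every VM class.
        return vm.storage.init_volume(name, vm.volume_config[name])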
Since we have app.default_pool* properties, create an appropriately
named pool and let those properties choose the right pool. This also
means we don't need to specify the pool name in the default volume
config anymore.
QubesOS/qubes-issues#2256
* tests-fixes-1:
api: extract function to make pylint happy
tests/vm: simplify AppVM storage test
storage: do not use deepcopy on volume configs
api: cleanup already started servers when some later failed
tests: fix block devices tests when running on real system
tests: fix some FD leaks
Specify an empty 'source' field, so it gets filled with the appropriate
template's images. Then also fix recursive 'source' handling - a
DispVM's root volume should point at the TemplateVM's root volume as a
source, not the AppVM's one - which is itself only a snapshot.
Fixes QubesOS/qubes-issues#2896
This code is unused now. Theoretically this is_outdated implementation
should be moved to FileVolume, but since we don't have a VM reference
there, it isn't possible to read the appropriate xenstore entry. As
we're phasing out the file pool, simply ignore it for now.
QubesOS/qubes-issues#2256
Those functions are coroutines anyway, so allow event handlers to be
coroutines too.
Some of this (`domain-create-on-disk`, `domain-remove-from-disk`) will
be useful for appmenus handling.
This will allow starting processes and calling RPC services in those
events. This is required for USB devices, which are attached using RPC
services.
Intentionally keep device listing events synchronous only - to
discourage putting long-running actions there.
This change also requires a non-async attach method variant for loading
devices from qubes.xml - `load_persistent` serves this purpose.
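A sketch of what this enables in an extension (the event name follows
the device-event convention; the service name is an example):

    import qubes.ext

    class USBAttachExample(qubes.ext.Extension):
        @qubes.ext.handler('device-pre-attach:usb')
        async def on_device_pre_attach_usb(self, vm, event, device,
                                           options):
            # Asynchronous handlers may now start processes and call
            # RPC services while the device is being attached.
            await device.backend_domain.run_service_for_stdio(
                'qubes.USBAttach', input=str(device).encode())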
Always define those properties, and always include them in the volume
config. Also simplify overriding the pool based on the volume type
defined by those: override the pool unless snap_on_start=True.
QubesOS/qubes-issues#2256
Since libvirt does provide an object for dom0 too, return it here.
It's much easier than special-casing AdminVM everywhere. And in fact it
is sometimes actually useful (for example attaching devices from/to
dom0, or adjusting memory).
We've decided to make the VM name immutable. This is especially
important for the Admin API, where some parts (especially policy) are
tied to the VM name.
Now, to rename a VM, one needs to clone it under the new name (thanks to
LVM, this is a very quick action), then remove the old one.
Fixes QubesOS/qubes-issues#2868
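A client-side sketch of the new rename flow (VM names are examples):

    import qubesadmin

    app = qubesadmin.Qubes()
    # 'Rename' = clone under the new name (cheap with LVM thin
    # snapshots), then remove the old VM.
    app.clone_vm(app.domains['old-name'], 'new-name')
    del app.domains['old-name']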
* qubesos/pr/111:
vm: drop 'internal' property
qmemman: make sure to release lock
qmemman: fix meminfo parsing for python 3
devices: drop 'data' and 'frontend_domain' fields, rename 'devclass' to 'bus'
* qubesos/pr/110:
storage: use direct object references, not only identifiers
vm: fix volume_config
storage/lvm: prefix VM LVM volumes with 'vm-'
storage: fix VM rename
Reference objects, not their IDs - this way, when an object is modified,
the change is visible everywhere it is used. Main changes:
- volume.pool - Pool object
- volume.source - Volume object
Since a volume now has a Pool object reference, move volume-related
functions into the Volume class (from the Pool class). This avoids the
horrible `storage.get_pool(volume).something(volume)` construct.
One issue here: since volume.source references a Volume object from a
different VM (the VM's template), VM load order is now important. Since
we don't have control over it, initialize vm.storage when needed -
possibly while initializing the storage of a different VM. Since we
don't have cycles in AppVM-TemplateVM dependencies, this is safe.
Also, since this commit, volume.source (if defined) always points at the
volume of the same name from the VM's template. Using volumes with
anything else as a source is no longer supported.
QubesOS/qubes-issues#2256
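The resulting invariants, as a sketch:

    import qubes.storage

    def check_volume_refs(appvm):
        root = appvm.volumes['root']
        # 'pool' is a Pool object now, not a string identifier...
        assert isinstance(root.pool, qubes.storage.Pool)
        # ...and 'source', if set, is the same-named volume of the VM's
        # template - nothing else is supported as a source anymore.
        assert root.source is appvm.template.volumes['root']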
- the kernel volume shouldn't have snap_on_start; it's a read-only
  volume anyway
- the root volume of an AppVM should have a placeholder for 'source'
- the private volume of an AppVM should _not_ have a placeholder for
  'source' (it's ignored anyway, because snap_on_start=False)
QubesOS/qubes-issues#2256
With libvirt in place, this isn't enough - libvirt also keeps the VM
configuration in its memory, and adjusting xenstore doesn't change that.
In fact, changing xenstore behind its back makes it even worse in some
situations.
QubesOS/qubes-issues#1426
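Memory changes should instead go through libvirt, which keeps its
in-memory view in sync; a sketch (the amount is in KiB, per the libvirt
API):

    def set_memory(vm, memory_kb):
        # Let libvirt update both the running domain and its own copy
        # of the configuration, instead of poking xenstore directly.
        vm.libvirt_domain.setMemory(memory_kb)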
Re-init the volume config of all 'snap_on_start' volumes at template
change. For this, save the original volume config and re-use the
config_volume_from_source function introduced in the previous commit.
At the same time, forbid changing the template of a running AppVM or of
any DispVM.
QubesOS/qubes-issues#2256
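A sketch of the template setter's storage side (the saved-config
attribute and the exact shape of config_volume_from_source are
illustrative):

    def on_template_change(vm, new_template):
        # Only snap_on_start volumes depend on the template: rebuild
        # their config from the saved original, so 'source' points at
        # the new template's volumes.
        for name, orig in vm._orig_volume_config.items():
            if not orig.get('snap_on_start'):
                continue
            vm.volume_config[name] = dict(
                orig, source=new_template.volumes[name])
            vm.storage.init_volume(name, vm.volume_config[name])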