Move this functionality from our custom runner (qubes.tests.run),
into the base test class. This is very useful for correlating logs, so let's
have it with the nose2 runner too.
Prevent starting a VM while it's being removed. Something could try to
start a VM just after it's killed but before removing it (the Whonix
example from the previous commit is a real-life case). The window is
specifically between the kill() call and removing the VM from the collection
(`del app.domains[vm.qid]`). Grab the startup_lock for the whole operation
to prevent it.
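A minimal sketch of the intended locking, assuming kill() is a coroutine and
startup_lock is an asyncio.Lock (the attribute and helper names below are
illustrative, not the exact qubesd code):

    async def kill(self):
        # Hold startup_lock across the whole kill operation, so a concurrent
        # start() has to wait until the domain is actually gone.
        async with self.startup_lock:
            self.libvirt_domain.destroy()   # assumed attribute name
            await self._kill_cleanup()      # hypothetical cleanup step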
Check early (but after grabbing the startup_lock) whether the VM hasn't
just been removed. This could happen if someone grabs its reference from
another place (netvm of something else?) or just before removing it.
This commit makes the simple removal from the collection (done as the
first step of the admin.vm.Remove implementation) an efficient way to block
further VM startups, without introducing extra properties.
For this to be effective, removing from the collection needs to happen
with the startup_lock held. Modify admin.vm.Remove accordingly.
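Both sides in sketch form (method bodies are approximations; only
startup_lock, app.domains and the `del app.domains[vm.qid]` step come from
the description above):

    async def start(self):
        async with self.startup_lock:
            # Bail out early if the VM was removed from the collection while
            # we were waiting for the lock (or just before the call).
            if self.qid not in self.app.domains:
                raise qubes.exc.QubesException(
                    'VM {} has been removed'.format(self.name))
            ...

    # admin.vm.Remove side (sketched inline): drop the VM from the
    # collection while holding the same lock, so the check above cannot
    # race with the removal.
    async with vm.startup_lock:
        del app.domains[vm.qid]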
There are cases when the destination domain doesn't exist when the call gets
to qubesd. Namely:
1. The call comes from dom0, which bypasses qrexec policy
2. Domain was removed between checking the policy and here
Handle this the same way as if the domain didn't exist at the policy
evaluation stage either - i.e. refuse the call.
On the client side it doesn't change much, but on the server side it
avoids ugly, useless tracebacks in the system journal.
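The server-side check could look roughly like this (where exactly it lives
in the API dispatcher, and the exact exception type, are assumptions):

    try:
        dest_vm = app.domains[dest]
    except KeyError:
        # Same outcome as a policy denial - refuse the call instead of
        # letting the traceback reach the system journal.
        raise qubes.api.PermissionDenied()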
Fixes QubesOS/qubes-issues#5105
Allow manually inspecting the test environment after a test fails. This is
similar to the --do-not-clean option we had in R3.2.
The decorator should be used only while debugging and should never be
applied to code committed into the repository.
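A hypothetical shape of such a decorator (the actual name and mechanics in
the test package may differ):

    import functools

    def wait_on_fail(test_func):
        """Debug-only: pause on failure so VMs, volumes and logs can be
        inspected before the test cleanup runs."""
        @functools.wraps(test_func)
        def wrapper(self, *args, **kwargs):
            try:
                return test_func(self, *args, **kwargs)
            except Exception:
                input('Test failed; press Enter to continue cleanup... ')
                raise
        return wrapper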
Domain shutdown handling may take an extended amount of time, especially on
a slow machine (all the LVM teardown etc.). Take care of it by
synchronizing on vm.startup_lock, instead of increasing a constant
delay. This way, the shutdown event handler only needs to be started within
3s, not to finish in that time.
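In the test, that can be as simple as waiting on the same lock instead of
sleeping for a fixed period (a sketch; the real test may combine it with a
short timeout for the handler to start):

    # The shutdown event handler takes vm.startup_lock early on; once we can
    # acquire it ourselves, the handler has finished, however long the LVM
    # teardown took.
    async with vm.startup_lock:
        pass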
When the libvirt daemon is restarted, qubesd attempts to re-connect to the
new instance transparently (through the virConnect object wrapper). But the
code lacked re-registering event handlers.
Fix this by adding a reconnect callback argument to virConnectWrapper, to
be called after a new connection is established. This callback will
additionally get the old connection as an argument, in case any cleanup is
needed. The old connection is closed just after the callback returns.
Use this to re-register the event handler, but also unregister the old
handler first. While a full unregister won't work since the old libvirt
daemon instance is dead already, it will still clean up client structures.
Since the old libvirt connection is closed now, also adjust the domain
reconnection logic to handle a stale connection object - in that case the
isAlive() call throws an exception.
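The callback side could look roughly like this (wrapper internals and
handler names are simplified guesses; the libvirt calls are the standard
libvirt-python ones):

    def _reconnect_cb(self, conn, old_conn=None):
        # Called by the wrapper after a new connection is established;
        # old_conn (if given) is closed right after this returns.
        if old_conn is not None:
            try:
                # The old daemon is already gone, so this mostly just frees
                # the client-side structures.
                old_conn.domainEventDeregisterAny(self._event_cb_id)
            except libvirt.libvirtError:
                pass
        self._event_cb_id = conn.domainEventRegisterAny(
            None,
            libvirt.VIR_DOMAIN_EVENT_ID_LIFECYCLE,
            self._domain_event_callback,
            None)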
Fixes QubesOS/qubes-issues#5303
Qubesd wrongly required the default_template global property to be not
None. Furthermore, even without a hard failure set, the require_property
method raised an exception in case of a property having an incorrect None
value. It now logs an error message instead, as designed.
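In other words, the soft-failure path now behaves roughly like this (the
signature and surrounding code are approximations, not the actual method):

    def require_property(self, prop, allow_none=False, hard=False):
        value = getattr(self.app, prop)
        if value is None and not allow_none:
            msg = 'property {!r} has invalid value None'.format(prop)
            if hard:
                raise qubes.exc.QubesValueError(msg)
            # soft failure: log an error and carry on instead of raising
            self.app.log.error(msg)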
Fixes QubesOS/qubes-issues#5326
This fixes an invalid response generated by get_timezone when the time
zone name is composed of 3 parts, for example:
America/Argentina/Buenos_Aires
America/Indiana/Indianapolis
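A sketch of a lookup that keeps multi-part names intact, assuming
/etc/localtime is a symlink into the zoneinfo database (the real utils.py
code may differ in details):

    import os

    def get_timezone():
        # e.g. /usr/share/zoneinfo/America/Argentina/Buenos_Aires
        target = os.path.realpath('/etc/localtime')
        prefix = '/usr/share/zoneinfo/'
        if target.startswith(prefix):
            # keep everything after the prefix, so 3-part names like
            # America/Argentina/Buenos_Aires are returned whole
            return target[len(prefix):]
        return None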
Update utils.py
Give the raw cpu_time value, instead of one normalized to the number of
vcpus, as documented.
Move the normalization to the cpu_usage calculation. At the same time, add
cpu_usage_raw without it, in case anyone needs it.
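The split, in sketch form (variable names follow the description above
rather than the exact stats code):

    # raw usage: fraction of one CPU-second used per wall-clock second
    cpu_usage_raw = (cpu_time - prev_cpu_time) / (timestamp - prev_timestamp)
    # normalized usage: 1.0 means all vcpus fully busy
    cpu_usage = cpu_usage_raw / vcpus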
QubesOS/qubes-issues#4531
* origin/pr/273:
tests: check importing empty data into ReflinkVolume
tests: check importing empty data into ThinVolume
tests: check importing empty data into FileVolume
tests: improve cleanup after LVM tests
During regular VM shutdown, the VM should sync() anyway. (And
admin.vm.volume.Import does fdatasync(), which is also fine.) But let's
be extra careful.
This is needed as a consequence of d8b6d3ef ("Make add_pool/remove_pool
coroutines, allow Pool.{setup,destroy} as coroutines"), but there hasn't
been any problem so far because no storage driver implemented pool
setup() as a coroutine.
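The pool setup call site then has to accept both plain and coroutine
implementations, along these lines (the helper name is illustrative):

    import asyncio

    async def _maybe_await(result):
        # Pool.setup() / Pool.destroy() may be a regular method or a
        # coroutine; only await it in the latter case.
        if asyncio.iscoroutine(result):
            return await result
        return result

    # usage inside add_pool():  await _maybe_await(pool.setup())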