The Pool.volumes property is implemented in the base class; individual
drivers should provide a list_volumes() method as the backend for that
property. Fix this in the LinuxKernel pool.
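A minimal sketch of the split, with toy names (the real LinuxKernel
driver returns volume objects, not plain directory names):

    import os

    class Pool:
        # the base class provides the property once...
        @property
        def volumes(self):
            return self.list_volumes()

        def list_volumes(self):
            # ...and every driver backs it with list_volumes()
            raise NotImplementedError('pool driver must implement list_volumes()')

    class LinuxKernelPool(Pool):
        # simplified: one entry per kernel version subdirectory
        def __init__(self, dir_path):
            self.dir_path = dir_path

        def list_volumes(self):
            return sorted(
                name for name in os.listdir(self.dir_path)
                if os.path.isdir(os.path.join(self.dir_path, name)))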
Similar to admin.vm.volume.List. There are still other
admin.pool.volume.* methods missing, but let's start with just this one.
Those taking both a pool name and a volume id as arguments may need some
more thought.
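A rough sketch of what the qubesd-side handler could look like, modeled
on admin.vm.volume.List; the decorator and the self.app / self.arg /
self.dest attributes follow the Admin API conventions, but the handler
body itself is an assumption:

    import qubes.api

    class QubesAdminAPI(qubes.api.AbstractQubesAPI):

        @qubes.api.method('admin.pool.volume.List', no_payload=True)
        async def pool_volume_list(self):
            # the pool name travels in the argument, the call targets dom0
            assert self.dest.name == 'dom0'
            assert self.arg in self.app.pools
            pool = self.app.pools[self.arg]
            # formatting of the volume identifier is simplified here
            return ''.join('{}\n'.format(volume) for volume in pool.volumes)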
Don't rely on the volume configuration alone, it's easy to miss
updating it. Specifically, the import_volume() function didn't update
the size based on the source volume.
The size that the VM actually sees is based on the file size, and so is
the filesystem inside. An outdated size property can lead to data loss
if the user performs an action based on an incorrect assumption - like
extending the size, which will actually shrink the volume.
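The idea, as a minimal standalone illustration (not the actual storage
driver code):

    class Volume:
        def __init__(self, size):
            self.size = size

        def import_volume(self, src_volume):
            # take the size from the source volume, not from this volume's
            # possibly stale configuration - the file the VM sees (and the
            # filesystem inside it) will match the source
            self.size = src_volume.size
            return self

    dst = Volume(size=2 * 2**30)    # stale config says 2 GiB
    src = Volume(size=10 * 2**30)   # the imported data is actually 10 GiB
    assert dst.import_volume(src).size == src.size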
Fixes QubesOS/qubes-issues#4821
Handle this case specifically, as way too many users ignore the message
during installation and complain that it doesn't work later.
Name the problem explicitly, instead of pointing at the libvirt error
log.
Fixes QubesOS/qubes-issues#4689
For system site-packages to work, it needs to be the same as in the base
distribution. Switch to the version in Ubuntu bionic
QubesOS/qubes-issues#4613
admin.pool.UsageDetails reports the usage data, unlike
admin.pool.Info, which should report the config/unchangeable data.
At the moment admin.pool.Info still reports usage, to maintain
compatibility, but once all relevant tools are updated,
it should just return configuration data.
Added a usage_details method to the Pool class
(it returns a dictionary with detailed information
on pool usage) and an LVM implementation that also
returns metadata info.
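Roughly, with illustrative key names (the real dictionary keys may
differ):

    class Pool:
        size = 0
        usage = 0

        def usage_details(self):
            # base implementation: the same numbers admin.pool.Info exposed
            return {'data_size': self.size, 'data_usage': self.usage}

    class ThinPool(Pool):
        metadata_size = 0
        metadata_usage = 0

        def usage_details(self):
            # the LVM thin pool additionally reports metadata usage
            details = super().usage_details()
            details['metadata_size'] = self.metadata_size
            details['metadata_usage'] = self.metadata_usage
            return details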
Needed for QubesOS/qubes-issues#5053
* origin/pr/287:
app: fix docstrings PEP8 refactor
tests: remove iptables_header content in test_622_qdb_keyboard_layout
tests: add test for guivm and keyboard_layout
gui: simplify setting guivm xid and keyboard layout
Make pylint happier
gui: set keyboard layout from feature
Handle GuiVM properties
Make PEP8 happier
Similar to domain-start-failed, add an event fired when
domain-pre-shutdown was fired but the actual operation failed.
Note it might not catch all the cases, as shutdown() may be called with
wait=False, which means it won't wait for the actual shutdown. In that
case, a timeout won't result in a domain-shutdown-failed event.
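A self-contained sketch of the intended behaviour (toy class, not the
real qubesd code; the real event is fired through qubes.events and may
carry different arguments):

    import asyncio

    class Domain:
        def __init__(self):
            self._is_shut_down = asyncio.Event()
            self.fired = []

        def fire_event(self, name, **kwargs):
            self.fired.append((name, kwargs))   # stand-in for qubes.events

        async def shutdown(self, wait=False, timeout=5):
            self.fire_event('domain-pre-shutdown')
            # ... ask the domain to shut down here ...
            if not wait:
                # nothing is awaited, so a later timeout cannot be reported
                # as domain-shutdown-failed from this method
                return
            try:
                await asyncio.wait_for(self._is_shut_down.wait(), timeout)
            except asyncio.TimeoutError:
                # mirror domain-start-failed: the request was made, but the
                # operation did not actually complete
                self.fire_event('domain-shutdown-failed',
                                reason='timeout after {}s'.format(timeout))
                raise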
QubesOS/qubes-issues#5380
Move this functionality from our custom runner (qubes.tests.run) into
the base test class. This is very useful for correlating logs, so let's
have it with the nose2 runner too.
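Assuming the functionality in question is logging each test's start and
end, a sketch of how it could live in the base class:

    import logging
    import unittest

    class QubesTestCase(unittest.TestCase):
        def setUp(self):
            super().setUp()
            # log test boundaries from the base class, so every runner
            # (nose2 included) produces them, not just qubes.tests.run
            self.log = logging.getLogger(self.id())
            self.log.critical('starting')

        def tearDown(self):
            self.log.critical('finished')
            super().tearDown()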
Prevent starting a VM while it's being removed. Something could try to
start a VM just after it's been killed but before it is removed (the
Whonix example from the previous commit is a real-life case). The
window is specifically between the kill() call and removing it from the
collection (`del app.domains[vm.qid]`). Grab the startup_lock for the
whole operation to prevent it.
Check early (but after grabbing the startup_lock) whether the VM hasn't
just been removed. This could happen if someone grabs its reference
from some other place (the netvm of something else?) or just before
removing it.
This commit makes the simple removal from the collection (done as the
first step of the admin.vm.Remove implementation) an efficient way to
block further VM startups, without introducing extra properties.
For this to be effective, removing from the collection needs to happen
with the startup_lock held. Modify admin.vm.Remove accordingly.
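A sketch of the locking scheme with simplified names (vm.startup_lock
is an asyncio.Lock; the surrounding helpers are stand-ins):

    async def start(vm):
        async with vm.startup_lock:
            # re-check after taking the lock: removal holds the same lock,
            # so once the VM is gone from the collection no start can slip
            # through
            if vm.qid not in vm.app.domains:
                raise RuntimeError('VM {} has been removed'.format(vm.name))
            # ... the actual startup sequence continues here ...

    async def remove(app, vm):
        async with vm.startup_lock:
            # kill and remove from the collection under one lock, so start()
            # never observes a half-removed VM
            await vm.kill()
            del app.domains[vm.qid]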
There are cases when the destination domain doesn't exist when the call
gets to qubesd. Namely:
1. The call comes from dom0, which bypasses the qrexec policy
2. The domain was removed between checking the policy and here
Handle this the same way as if the domain didn't exist at the policy
evaluation stage either - i.e. refuse the call.
On the client side it doesn't change much, but on the server side it
avoids ugly, useless tracebacks in the system journal.
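Roughly, in the request handling path (simplified; the exception type
actually used by qubesd is an assumption here):

    class PermissionDenied(Exception):
        '''Refusal indistinguishable from a policy denial.'''

    def resolve_dest(app, dest_name):
        # resolve the destination before dispatching the call; a missing
        # domain gets the same clean refusal a policy denial would, instead
        # of a traceback in the system journal
        try:
            return app.domains[dest_name]
        except KeyError:
            raise PermissionDenied(
                'no such destination domain: {}'.format(dest_name))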
Fixes QubesOS/qubes-issues#5105
Allow manual inspection of the test environment after a test fails.
This is similar to the --do-not-clean option we had in R3.2.
The decorator should be used only while debugging and should never be
applied to code committed into the repository.
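For example, a debugging-only decorator could look roughly like this
(hypothetical name and mechanism):

    import functools

    def inspect_on_fail(test_method):
        '''Debugging aid only - never commit tests decorated with this.'''
        @functools.wraps(test_method)
        def wrapper(self, *args, **kwargs):
            try:
                return test_method(self, *args, **kwargs)
            except Exception:
                # pause before tearDown() cleans everything up, so the test
                # VMs and their state can be inspected manually
                input('Test failed, environment left intact - '
                      'press Enter to continue cleanup...')
                raise
        return wrapper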
Domain shutdown handling may take an extended amount of time,
especially on a slow machine (all the LVM teardown etc.). Take care of
it by synchronizing on vm.startup_lock, instead of increasing a
constant delay. This way, the shutdown event handler only needs to be
started within 3s, not finish within that time.
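Sketch of the idea, with assumed helper names:

    # the shutdown cleanup runs under the same lock start() takes, so code
    # that needs the domain fully cleaned up waits on the lock instead of
    # sleeping for a fixed delay
    async def on_domain_shutdown(vm):
        async with vm.startup_lock:
            await vm.storage.stop()    # LVM teardown etc. (assumed helper)
            vm.cleanup_devices()       # hypothetical helper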
When the libvirt daemon is restarted, qubesd attempts to re-connect to
the new instance transparently (through the virConnect object wrapper).
But the code lacked re-registering of event handlers.
Fix this by adding a reconnect callback argument to virConnectWrapper,
to be called after the new connection is established. This callback
will additionally get the old connection as an argument, in case any
cleanup is needed. The old connection is closed just after the callback
returns.
Use this to re-register the event handler, but also unregister the old
handler first. While a full unregister won't work since the old libvirt
daemon instance is dead already, it will still clean up the client-side
structures.
Since the old libvirt connection is closed now, also adjust the domain
reconnection logic to handle a stale connection object. In that case
the isAlive() call throws an exception.
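A rough sketch of the wrapper logic with simplified names (the real
wrapper also proxies every libvirt call; only the reconnect path is
shown here):

    import libvirt

    class VirConnectWrapper:
        def __init__(self, uri, reconnect_cb=None):
            self._uri = uri
            self._reconnect_cb = reconnect_cb
            self._conn = libvirt.open(uri)

        def reconnect_if_dead(self):
            try:
                alive = self._conn.isAlive()
            except libvirt.libvirtError:
                # a stale connection object throws instead of returning False
                alive = False
            if alive:
                return False
            old_conn = self._conn
            self._conn = libvirt.open(self._uri)
            if self._reconnect_cb is not None:
                # let the caller re-register event handlers; it also gets
                # the old connection in case any cleanup is still possible
                self._reconnect_cb(old_conn)
            old_conn.close()
            return True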
Fixes QubesOS/qubes-issues#5303