When a VM is shutting down, it doesn't disconnect the PCI frontend (?), so when
the VM is destroyed, the PCI backend shutdown ends up in timeouts (the backend
can't communicate with the frontend at that stage). Prevent this by
detaching PCI devices while the VM is still running.
Fixes QubesOS/qubes-issues#1494
Fixes QubesOS/qubes-issues#1425
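A minimal sketch of the idea (not the actual qubes-core code), assuming a VM
object with pcidevs and get_xid() helpers and using the standard xl
pci-detach command:

    import subprocess

    def detach_pci_devices(vm):
        """Detach all PCI devices while the frontend can still respond."""
        for bdf in vm.pcidevs:                      # e.g. '00:1a.0'
            subprocess.check_call(
                ['xl', 'pci-detach', str(vm.get_xid()), bdf])

    def shutdown_vm(vm):
        detach_pci_devices(vm)   # detach first, while the VM is running
        vm.shutdown()            # ...then proceed with shutdown/destroy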
* qubesos/pr/12:
Fix circular deps workaround in Pool.vmdir_path()
Move device names from XenStorage to QubesVmStorage
Provide method format_disk_dev() to all storages
Move the vmdir logic from XenPool to Pool
Otherwise hotplug scripts may deadlock on qvm-template-commit and
consequently not release loop and device-mapper devices, which also means
not releasing the disk space of the underlying images.
Fixes QubesOS/qubes-issues#1458
In some cases it may happen that qmemman, or another application using
xenstore, re-creates the VM directory in xenstore just after the VM was
destroyed. For example, when multiple VMs were destroyed at the same time,
qmemman may kick in right at the first @releaseDomain event - the other VMs
will still be there (at xenstore-list time). This means that qmemman
will consider them when redistributing the memory (of the just-destroyed one),
so it will update the memory/target entry of every "running" VM. At this
point it may recreate the VM directory of another, already destroyed VM.
Properly fixing this race condition would require running all the
operations (from xenstore-ls to setting memory/target) in a single
xenstore transaction. But this can be a lengthy process, and if any other
modification happens in the meantime, the transaction will be rejected and
qmemman would need to redo all the changes. Not worth the effort.
Fixes QubesOS/qubes-issues#1409
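For reference, a sketch of what that single-transaction approach would look
like with the xen.lowlevel.xs bindings (the targets dict and helper name are
illustrative) - note the retry loop that makes it unattractive:

    import xen.lowlevel.xs

    xs = xen.lowlevel.xs.xs()

    def set_memory_targets(targets):  # targets: {domid: kilobytes}
        while True:
            t = xs.transaction_start()
            try:
                domains = xs.ls(t, '/local/domain') or []
                for domid, mem in targets.items():
                    # skip VMs destroyed since the list was taken
                    if str(domid) in domains:
                        xs.write(t, '/local/domain/%d/memory/target'
                                    % domid, str(mem))
            except Exception:
                xs.transaction_end(t, True)   # abort
                raise
            if xs.transaction_end(t):         # False means conflict
                break                         # committed
            # otherwise: another write raced us - redo everything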
The method XenStorage._format_disk_dev() generates the XML config for a device.
It is not specific to the Xen file storage implementation, so it can and must
be reused by other storage implementations.
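A sketch of the resulting structure (the XML details are an assumption, not
copied from qubes-core):

    class QubesVmStorage(object):
        def format_disk_dev(self, path, vdev, rw=True):
            # device XML is storage-agnostic, so it lives in the base class
            return ('<disk type="block" device="disk">\n'
                    '  <source dev="%s"/>\n'
                    '  <target dev="%s"/>\n'
                    '%s'
                    '</disk>' % (path, vdev,
                                 '' if rw else '  <readonly/>\n'))

    class XenStorage(QubesVmStorage):
        # no longer overrides format_disk_dev()
        pass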
If the SendWorker queue is full, check whether that thread is still alive;
otherwise the main thread would deadlock on putting an entry into that queue.
This also requires that SendWorker, when it fails, ensures the main thread
isn't left waiting for queue space. We can do this by
simply removing an entry from the queue - on the next iteration
SendWorker will already be dead and the main thread will notice it.
Getting an entry from the queue in such an (error) situation is harmless,
because other checks will notice the error condition.
Fixes QubesOS/qubes-issues#1359
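A minimal sketch of that interplay, assuming Python's standard queue and
threading modules (SendWorker and process_queue() stand in for the real
qubes-manager code):

    import queue
    import threading

    def safe_put(worker, item):
        # put() with a timeout so we can periodically check the worker
        while True:
            try:
                worker.queue.put(item, block=True, timeout=1)
                return
            except queue.Full:
                if not worker.is_alive():
                    raise RuntimeError('SendWorker died, item not queued')

    class SendWorker(threading.Thread):
        def __init__(self):
            super().__init__()
            self.queue = queue.Queue(maxsize=100)

        def run(self):
            try:
                self.process_queue()   # hypothetical main loop
            except Exception:
                # free one slot so a producer blocked in put() wakes up;
                # on its next check it will see the thread is dead
                try:
                    self.queue.get_nowait()
                except queue.Empty:
                    pass
                raise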
I had an issue with fstrim, and a missing else caused the code to continue and fail later with a non-descriptive error message. This commit makes the error message more descriptive and helpful.
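The shape of the fix, roughly (names are illustrative; the actual condition
and exception in the real code differ):

    if mode == 'attach':
        attach(device)
    elif mode == 'detach':
        detach(device)
    else:
        # previously missing: without this branch the code fell through
        # and failed later with an unrelated, non-descriptive error
        raise ValueError('unsupported mode %r (expected attach/detach)'
                         % mode)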