Libvirt does not show the actual block device (loop*) chosen for the
device - only the original (file) path. But the file path is available
in the device description. Please note that a VM can provide any
description (within allowed limits), effectively breaking this check
again (hiding the attachment status). But even without this bug it
could do that - by hiding the whole device from QubesDB.
Fixes QubesOS/qubes-issues#2453
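A minimal sketch of such a path-based comparison on the libvirt side,
assuming the libvirt Python bindings (the helper name is hypothetical;
the path to compare against would come from the QubesDB device
description):

    import xml.etree.ElementTree as ET

    def looks_attached(libvirt_domain, file_path):
        # The domain XML only carries the original file path, not the
        # loop* device actually used, so match on the path.
        tree = ET.fromstring(libvirt_domain.XMLDesc(0))
        for disk in tree.findall('devices/disk'):
            source = disk.find('source')
            if source is not None and source.get('file') == file_path:
                return True
        return False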
When calling _register_watches() multiple times for dom0 (by passing
None or, since 7e9c816b, by passing the corresponding libvirt domain),
the check whether there already is a QubesDB in _qdb was missing.
Therefore a new QubesDB was created and the old one was destroyed by
the GC. As a consequence the watch_fd was closed, but the libvirt event
handle for this fd was still registered. So when libvirt calls poll(),
it immediately returns POLLNVAL for the closed fd. This is not caught
in libvirt and the callback is called as if an event had happened.
_qdb_handler() then calls read_watch() on the new fd for dom0 and
thereby hangs the thread.
This leads (at least) to qubes-manager missing VM status updates and
block device events.
Fixes QubesOS/qubes-issues#2178
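A minimal sketch of the missing guard, with illustrative (not actual)
names for the QubesDB wrapper and its watch file descriptor:

    import libvirt

    def _register_watches(self, vm_name):
        if vm_name in self._qdb:
            # Already registered: creating a second QubesDB would let the
            # old one be garbage-collected, closing a watch_fd that libvirt
            # still polls and triggering spurious POLLNVAL callbacks.
            return
        qdb = QubesDB(vm_name)            # hypothetical connection object
        self._qdb[vm_name] = qdb
        libvirt.virEventAddHandle(
            qdb.watch_fd(),               # hypothetical accessor
            libvirt.VIR_EVENT_HANDLE_READABLE,
            self._qdb_handler, vm_name)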
- for a netvm it doesn't make sense, but instead of removing it (which
would surely break some code), make it always False
- when setting VM connections, uses_default_netvm is already loaded
- handle it properly during backup restore (really use the default
netvm, instead of assuming it's the same as during backup)
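A minimal sketch of the always-False behaviour (property and attribute
names are illustrative, not the actual core code):

    @property
    def uses_default_netvm(self):
        # Meaningless for a VM that itself provides networking, so report
        # False instead of removing the property and breaking callers.
        if self.provides_network:
            return False
        return self._uses_default_netvm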
Libvirt reports dom0 as "Domain-0", which is incompatible with how
Qubes and the libxl toolstack name it ("dom0"). So handle this as a
special case.
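A minimal sketch of the special case (hypothetical helper):

    def libvirt_name_to_qubes_name(name):
        # libvirt calls the control domain "Domain-0"; Qubes and libxl
        # use "dom0".
        return 'dom0' if name == 'Domain-0' else name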
Otherwise reconnection retries leak an event object every iteration.
Fixes QubesOS/qubes-issues#860
Thanks @alex-mazzariol for help with debugging!
Make sure that even a compromised frontend will be cut off from the
(possibly sensitive - like a webcam) device. On the other hand, if the
backend domain is already compromised, it may have compromised the
frontend domain too, so neither of them is a better choice to call
detach on.
QubesOS/qubes-issues#531
This requires having at least 1GB free on /tmp, but that is a fair
assumption - it's tmpfs in dom0 and while performing the backup most of
the VMs aren't running, so it shouldn't be a problem. Anyway, it is
always possible to set the TMPDIR variable or pass the --tmpdir cmdline
option.
Using a tmpfs-based temporary directory should speed up the backup.
QubesOS/qubes-issues#1652
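A minimal sketch of how the staging directory could be chosen (the
helper itself is hypothetical; --tmpdir and TMPDIR are the knobs
mentioned above):

    import os
    import tempfile

    def make_backup_tmpdir(cmdline_tmpdir=None):
        # Prefer an explicit --tmpdir, then TMPDIR, then /tmp (tmpfs in dom0).
        base = cmdline_tmpdir or os.environ.get('TMPDIR', '/tmp')
        return tempfile.mkdtemp(prefix='qubes-backup-', dir=base)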
qhost.measure_cpu_usage expects the qvm_collection as a parameter.
Also, the number of vcpus of dom0 seems to be 0, leading to a division
by 0. A more complete fix would probably involve e.g. a new num_cores
property which would contain the number of vcpus for VMs and the
number of actual cores for dom0.
For now this is a partial solution.
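A minimal sketch of the zero-vcpu guard (function and argument names are
illustrative, not the actual qhost code):

    def cpu_usage_percent(cpu_time_delta, wall_time_delta, vcpus):
        # dom0 may report 0 vcpus; treat it as a single core to avoid a
        # division by zero until a proper num_cores property exists.
        cores = vcpus if vcpus > 0 else 1
        return 100.0 * cpu_time_delta / (wall_time_delta * cores)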
The following list is bollocks. There were many, many more.
Conflicts:
core-modules/003QubesTemplateVm.py
core-modules/005QubesNetVm.py
core/qubes.py
core/storage/__init__.py
core/storage/xen.py
doc/qvm-tools/qvm-pci.rst
doc/qvm-tools/qvm-prefs.rst
qubes/tools/qmemmand.py
qvm-tools/qvm-create
qvm-tools/qvm-prefs
qvm-tools/qvm-start
tests/__init__.py
vm-config/xen-vm-template-hvm.xml
This commit took 2 days (26-27.01.2016) and put our friendship to the test.
--Wojtek and Marek
It should be created at VM creation time (or when template changes are
committed). But for example for HVM templates created before
implementing QubesOS/qubes-issues#1573, there would be no such image.
So create it when needed, just before VM startup.
Fixes QubesOS/qubes-issues#1602
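A minimal sketch of the lazy creation (storage attribute names are
illustrative):

    import os

    def ensure_root_cow_img(storage):
        # Older HVM templates may lack root-cow.img; create a sparse image
        # matching root.img on demand, just before startup.
        if not os.path.exists(storage.rootcow_img):
            with open(storage.rootcow_img, 'wb') as cow:
                cow.truncate(os.path.getsize(storage.root_img))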
We still can't (easily) support running an HVM template and its VMs
simultaneously, but still have root-cow.img handled for HVM templates,
to allow qvm-revert-template-changes.
Fixes QubesOS/qubes-issues#1573
- QubesVmStorage now provides a default get_config_params() method which
should be enough for all possible Storage implementations.
- When writing a custom Storage implementation, one only has to
reimplement the following methods (see the sketch after this list):
* root_dev_config()
* private_dev_config()
* volatile_dev_config()
- QubesVmStorage provides a default implementation of other_dev_config(),
because it can be shared by all storage implementations.
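A minimal sketch of such a custom implementation under this split (the
format_disk_dev() arguments and image attributes are illustrative):

    class MyStorage(QubesVmStorage):
        # get_config_params() and other_dev_config() are inherited as-is;
        # only the three per-disk config methods are storage-specific.
        def root_dev_config(self):
            return self.format_disk_dev(self.root_img, self.root_dev,
                                        rw=False)

        def private_dev_config(self):
            return self.format_disk_dev(self.private_img, self.private_dev,
                                        rw=True)

        def volatile_dev_config(self):
            return self.format_disk_dev(self.volatile_img,
                                        self.volatile_dev, rw=True)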
* qubesos/pr/12:
Fix circular deps workaround in Pool.vmdir_path()
Move device names from XenStorage to QubesVmStorage
Provide method format_disk_dev() to all storages
Move the vmdir logic from XenPool to Pool
The method XenStorage._format_disk_dev() generates the xml config for a device.
It is not specific to the Xen file storage implementation. It can and must be
reused by other storage implementations.
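A minimal sketch of the kind of XML such a shared method produces (the
exact template and attributes are illustrative, not copied from
QubesVmStorage):

    def format_disk_dev(path, vdev, rw=True):
        return (
            "<disk type='block' device='disk'>\n"
            "  <driver name='phy'/>\n"
            "  <source dev='%s'/>\n"
            "  <target dev='%s'/>\n"
            "%s"
            "</disk>\n"
        ) % (path, vdev, '' if rw else "  <readonly/>\n")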
If the SendWorker queue is full, check whether that thread is still
alive. Otherwise it would deadlock on putting an entry into that queue.
This also requires that SendWorker ensures that the main thread isn't
still waiting for queue space when it fails. We can do this by simply
removing an entry from the queue - so on the next iteration SendWorker
would already be dead and the main thread would notice it.
Getting an entry from the queue in such an (error) situation is
harmless, because other checks will notice the error condition.
Fixes QubesOS/qubes-issues#1359
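A minimal sketch of the non-blocking put with a liveness check (queue
and thread names are illustrative):

    import queue

    def enqueue_update(send_queue, send_worker_thread, item):
        # Never block forever on a full queue: a dead SendWorker will not
        # drain it, and an unbounded put() would deadlock the main thread.
        while True:
            try:
                send_queue.put(item, timeout=1)
                return True
            except queue.Full:
                if not send_worker_thread.is_alive():
                    return False

On its side, a failing SendWorker would additionally pull one entry from
the queue before exiting, so a producer already blocked in put() wakes up
and the liveness check above can fire on the next iteration.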