When a VM gets some memory assigned, the balloon driver may not pick it up
immediately and the memory will still be seen as "free" by Xen, but the VM
can use (request) it at any time. Qmemman needs to take care of such
memory (exclude it from the "free" pool), otherwise it would redistribute it
to other domains, allowing the original domain to drain the Xen memory pool.
Do this by redefining DomainState.memory_actual - it is now the amount of
memory available to the VM (currently used, or possibly used). Then
calculate free memory by subtracting memory allocated but not used
(memory_target - memory_current).
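A condensed sketch of that accounting (attribute names follow the description
above, but the surrounding qmemman structure is simplified and the helper
names are illustrative):

    class DomainState(object):
        def __init__(self, domid):
            self.id = domid
            self.memory_current = 0  # memory the balloon driver has actually picked up
            self.memory_actual = 0   # memory available to the VM: used now, or assigned and claimable
            self.last_target = 0     # last memory/target value set for this domain

    def refresh_meminfo(dom, xen_memory_current):
        # memory_actual covers both what the VM uses and what it may still request
        dom.memory_current = xen_memory_current
        dom.memory_actual = max(dom.last_target, dom.memory_current)

    def available_free_memory(xen_free_memory, domains):
        # Memory already assigned to a VM but not yet ballooned in is still
        # reported as free by Xen; exclude it so it is not handed out twice.
        assigned_but_unused = sum(
            max(dom.last_target - dom.memory_current, 0) for dom in domains)
        return xen_free_memory - assigned_but_unused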
Fixes QubesOS/qubes-issues#1389
In some (most) cases the VM needs to be started to complete the resize
operation. This may be unexpected, so make it clear and do not start the
VM when the user did not explicitly allow that.
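A hypothetical sketch of that behaviour (function, option and device names
here are illustrative, not the actual qvm tool interface):

    def resize_private_img(vm, size_bytes, allow_start=False):
        vm.storage.grow_private_img(size_bytes)  # enlarge the backing image (assumed helper)
        started_here = False
        if not vm.is_running():
            if not allow_start:
                raise RuntimeError(
                    "the VM must be running to finish the resize; "
                    "refusing to start it without explicit permission")
            vm.start()
            started_here = True
        # grow the filesystem from inside the (running) VM
        vm.run("resize2fs /dev/xvdb", user="root", wait=True)
        if started_here:
            vm.shutdown()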
Fixes QubesOS/qubes-issues#1268
When /var/lib/qubes/appvms is a mount point of an ext4 filesystem, there
will already be a 'lost+found' directory there. Avoid this conflict.
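A minimal illustration of skipping such entries when scanning the VM
directory (a simplified stand-alone example, not the actual qubes code):

    import os

    IGNORED_ENTRIES = ('lost+found',)  # present when the directory is an ext4 mount point

    def list_vm_dirs(base='/var/lib/qubes/appvms'):
        # Only real VM directories, skipping filesystem artifacts like lost+found.
        return [name for name in sorted(os.listdir(base))
                if name not in IGNORED_ENTRIES
                and os.path.isdir(os.path.join(base, name))]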
Fixes QubesOS/qubes-issues#1440
The comment "calling VM have the same netvm" doesn't apply to:
- DispVM started from dom0 menu
- DispVM started from a VM with `dispvm_netvm` property modified
Fixes QubesOS/qubes-issues#1334
systemd-user-sessions.service is specifically for that; do not use a hack
(plymouth-quit.service), which doesn't work when that service is
disabled.
Fixes QubesOS/qubes-issues#1250
- QubesVmStorage now provides a default get_config_params() method which should
  be enough for all possible Storage implementations.
- When writing a custom Storage implementation, one only has to reimplement the
  following methods (see the sketch after this list):
  * root_dev_config()
  * private_dev_config()
  * volatile_dev_config()
- QubesVmStorage provides a default implementation of other_dev_config(),
  because it can be shared by all storage implementations.
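A rough outline of that split; the class skeletons, dictionary keys and
return values below are placeholders to show which methods a custom backend
overrides, not the real method bodies:

    class QubesVmStorage(object):
        def get_config_params(self):
            # Default implementation, shared by all backends: collect the
            # per-device config fragments (keys here are illustrative).
            return {
                'rootdev': self.root_dev_config(),
                'privatedev': self.private_dev_config(),
                'volatiledev': self.volatile_dev_config(),
                'otherdevs': self.other_dev_config(),
            }

        def other_dev_config(self):
            # Shared default, no need to override it per backend.
            return ''

    class MyCustomStorage(QubesVmStorage):
        # Only these three methods need to be provided by a custom backend.
        def root_dev_config(self):
            return "<root device entry>"

        def private_dev_config(self):
            return "<private device entry>"

        def volatile_dev_config(self):
            return "<volatile device entry>"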
When a VM is shutting down it doesn't disconnect the PCI frontend (?), so when
the VM is destroyed this ends up in timeouts in the PCI backend shutdown (which
can't communicate with the frontend at that stage). Prevent this by
detaching PCI devices while the VM is still running.
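An ordering sketch only, using 'xl pci-detach' for illustration; the vm
attribute names are assumptions, not the actual vm.py interface:

    import subprocess

    def detach_pci_devices(domid, bdf_list):
        # 'xl pci-detach <domid> <BDF>' asks the still-running frontend to
        # release the device, so the backend does not later time out waiting
        # for a frontend that no longer exists.
        for bdf in bdf_list:
            subprocess.check_call(['xl', 'pci-detach', str(domid), bdf])

    def destroy_vm(vm):
        # Detach first, destroy afterwards.
        if vm.pcidevs:
            detach_pci_devices(vm.xid, vm.pcidevs)
        vm.destroy()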
Fixes QubesOS/qubes-issues#1494
Fixes QubesOS/qubes-issues#1425
* qubesos/pr/12:
Fix circular deps workaround in Pool.vmdir_path()
Move device names from XenStorage to QubesVmStorage
Provide method format_disk_dev() to all storages
Move the vmdir logic from XenPool to Pool
Otherwise hotplug scripts may deadlock on qvm-template-commit and
consequently not release loop and device-mapper devices, which also means
not releasing disk space for the underlying images.
Fixes QubesOS/qubes-issues#1458
In some cases it may happen that qmemman or another application using
xenstore will re-create the VM directory in xenstore just after the VM was
destroyed. For example, when multiple VMs were destroyed at the same time,
qmemman may kick off just at the first @releaseDomain event - the other VMs
will still be there (at xenstore-list time). This means that qmemman
will consider them when redistributing memory (of the just-destroyed one),
so it will update the memory/target entry of every "running" VM. And at this
point it may recreate the VM directory of another, already destroyed VM.
Generally, fixing this race condition would require running all the
operations (from xenstore-ls to setting memory/target) in a single
xenstore transaction. But this can be a lengthy process. And if any other
modification happens in the meantime, the transaction will be rejected and
qmemman would need to redo all the changes. Not worth the effort.
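For illustration, a sketch of what that rejected single-transaction approach
would look like (assuming the xen.lowlevel.xs bindings and a hypothetical
compute_targets() balancing step; both are stand-ins, not the actual qmemman
code):

    import xen.lowlevel.xs

    xs_handle = xen.lowlevel.xs.xs()

    def redistribute_in_one_transaction(compute_targets):
        while True:
            t = xs_handle.transaction_start()
            domains = xs_handle.ls(t, '/local/domain') or []
            # compute_targets() stands for the full (potentially lengthy)
            # balancing computation, returning {domid: target_in_kb}
            for domid, target in compute_targets(domains).items():
                xs_handle.write(t, '/local/domain/%s/memory/target' % domid,
                                str(target))
            # If anything else modified xenstore in the meantime, the commit
            # fails and everything above has to be redone.
            if xs_handle.transaction_end(t):
                break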
Fixes QubesOS/qubes-issues#1409