If getting memory for a new VM fails for any reason, make sure that the
global lock is released. Otherwise qmemman will stop functioning entirely.
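A minimal sketch of the pattern (names are illustrative, not qmemman's
actual API): the release sits in a finally block, so a failed allocation
cannot leave the lock held forever:

    import threading

    global_lock = threading.Lock()  # stand-in for qmemman's global lock

    def allocate_memory_for_vm(vm, amount):
        # illustrative helper; may raise when Xen cannot satisfy the request
        raise MemoryError("not enough memory in the Xen free pool")

    def request_memory(vm, amount):
        global_lock.acquire()
        try:
            allocate_memory_for_vm(vm, amount)
        finally:
            # runs on success and on exception alike
            global_lock.release()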
QubesOS/qubes-issues#1636
Force a refresh of the domain list after starting a new VM, even if the
triggering watch wasn't about a domain list change. Otherwise (with an
outdated domain list) memory allocated to a new VM, but not yet used, may
be redistributed, leaving too little for Xen itself.
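A sketch of the idea, with invented names (force_refresh_domain_list,
refresh_domain_list, do_balance): a flag set at VM start makes the next
watch handler re-read the domain list even when the watch itself was not
a domain-list event:

    class BalanceLoop(object):
        def __init__(self, system_state):
            self.system_state = system_state
            self.force_refresh_domain_list = False  # set when a VM is started

        def on_watch_fired(self, watch_path):
            if watch_path == '@introduceDomain' or self.force_refresh_domain_list:
                # re-read the domain list, so memory already given to the new
                # VM (but not yet ballooned in) is not redistributed again
                self.system_state.refresh_domain_list()
                self.force_refresh_domain_list = False
            self.system_state.do_balance()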
Fixes QubesOS/qubes-issues#1389
Retrieve the domain list only after obtaining the global lock. Otherwise an
outdated list may be used when a domain was introduced in the meantime
(starting a new domain is done with the global lock held), leading to #1389.
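The ordering matters: the same lock that guards VM startup must be taken
before the list is read. A sketch (global_lock and balance are illustrative;
domain_getinfo is the xen.lowlevel.xc call):

    import threading
    import xen.lowlevel.xc

    global_lock = threading.Lock()
    xc = xen.lowlevel.xc.xc()

    def do_balance():
        # take the lock FIRST - a domain started while holding this lock
        # is then guaranteed to be visible in the list read below
        global_lock.acquire()
        try:
            domains = xc.domain_getinfo()
            balance(domains)  # illustrative balancing step
        finally:
            global_lock.release()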
QubesOS/qubes-issues#1389
In this case, qmemman would not manage to get any memory state for such a
VM, so memory_current would be None. But later it would perform arithmetic
on it, resulting in a qmemman crash (TypeError exception).
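A sketch of the guard (DomainState field names as described above): skip
domains whose meminfo never arrived instead of computing with None:

    def balanceable_domains(domains):
        for dom in domains.values():
            if dom.memory_current is None:
                # no meminfo received from this VM yet - arithmetic on
                # None (e.g. dom.memory_current * 1.3) raises TypeError
                continue
            yield dom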
Fixes QubesOS/qubes-issues#1601
When a VM gets some memory assigned, the balloon driver may not pick it up
immediately, so the memory will still be seen as "free" by Xen, even though
the VM can use (request) it at any time. Qmemman needs to take care of such
memory (exclude it from the "free" pool), otherwise it would redistribute it
to other domains, allowing the original domain to drain the Xen memory pool.
Do this by redefining DomainState.memory_actual - it is now the amount of
memory available to the VM (currently used, or possibly used). Then
calculate free memory by subtracting memory allocated but not yet used
(memory_target - memory_current).
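In pseudo-numbers: if Xen reports 500 MB free, but a just-started VM has
memory_target of 400 MB and memory_current of 100 MB, only 200 MB is really
distributable. A sketch of that calculation (field names follow the
description above; xen_free is the free memory Xen reports):

    def get_free_memory(xen_free, domains):
        # memory Xen reports as free, minus memory already promised to
        # VMs whose balloon drivers have not picked it up yet
        free = xen_free
        for dom in domains.values():
            if dom.memory_current is None:
                continue
            allocated_but_unused = dom.memory_target - dom.memory_current
            if allocated_but_unused > 0:
                free -= allocated_but_unused
        return free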
Fixes QubesOS/qubes-issues#1389
In some cases it may happen that qmemman, or another application using
xenstore, re-creates a VM's directory in xenstore just after the VM was
destroyed. For example, when multiple VMs were destroyed at the same time,
qmemman may kick off at just the first @releaseDomain event - the other VMs
will still be there (at xenstore-list time). This means that qmemman will
consider them when redistributing memory (of the just-destroyed one), so it
will update the memory/target entry of every "running" VM. At this point it
may recreate the VM directory of another, already destroyed, VM.
Properly fixing this race condition would require running all the
operations (from xenstore-ls to setting memory/target) in a single
xenstore transaction. But that could be a lengthy process, and if any other
modification happened in the meantime, the transaction would be rejected
and qmemman would need to redo all the changes. Not worth the effort.
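Short of a transaction, the window can at least be narrowed by checking
that the domain's directory still exists right before the write
(xen.lowlevel.xs's read() returns None for a nonexistent path). A sketch -
still a check-then-act race, just a much smaller one:

    import xen.lowlevel.xs

    xs = xen.lowlevel.xs.xs()

    def set_memory_target(domid, target_kb):
        if xs.read('', '/local/domain/%d' % domid) is None:
            # domain already gone from xenstore - writing memory/target
            # now would recreate its directory
            return
        xs.write('', '/local/domain/%d/memory/target' % domid, str(target_kb))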
Fixes QubesOS/qubes-issues#1409
Forking the daemon after initializing the hypervisor connection can cause
problems (and actually does in the case of libvirt).
To notify systemd when the daemon is ready, use the notify socket
(previously readiness was signalled by termination of the parent process).
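The notification itself is just a datagram with READY=1 sent to the socket
systemd passes in $NOTIFY_SOCKET; a minimal sketch:

    import os
    import socket

    def systemd_notify_ready():
        addr = os.getenv('NOTIFY_SOCKET')
        if not addr:
            return  # not started by systemd with Type=notify
        if addr.startswith('@'):
            addr = '\0' + addr[1:]  # abstract socket namespace
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
        try:
            sock.sendto(b'READY=1', addr)
        finally:
            sock.close()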
This code is common to both dom0 and VMs, and is also quite Linux-specific
(other OSes will need a different implementation). So move it to the
linux-specific repo (not the dom0-specific one).