This is very easy when the file/device is in dom0, so do it to avoid a
cryptic startup error (`libvirtError: internal error: libxenlight failed
to create domain`).
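A minimal sketch of the intended check (the helper name is hypothetical,
not the actual method), failing early with a readable message before
libvirt is involved:

    import os

    def verify_files_exist(vm_name, paths):
        # Fail early with a clear message instead of letting libxenlight
        # report "internal error: libxenlight failed to create domain".
        for path in paths:
            if not os.path.exists(path):
                raise IOError(
                    "VM '{}': missing file or device: {}".format(vm_name, path))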
Fixes QubesOS/qubes-issues#1619
We still can't (easily) support running an HVM template and its VMs
simultaneously, but handle root-cow.img for HVM templates anyway, to
allow qvm-revert-template-changes.
Fixes QubesOS/qubes-issues#1573
This way, even if the qrexec agent times out on connection, guid will
already be running.
Also use the new -K guid option to terminate the stubdom guid once the real
guid has connected (unless in debug mode - then both guids keep running).
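A rough sketch of the startup order (command-line details are assumptions,
apart from -K, which this change introduces):

    import subprocess

    def start_guis(xid, stubdom_xid, guiagent_installed, debug):
        # Start the stubdom gui-daemon first, so something is shown even
        # before (or without) the in-VM agent.
        subprocess.check_call(['qubes-guid', '-d', str(stubdom_xid)])
        if guiagent_installed:
            cmd = ['qubes-guid', '-d', str(xid)]
            if not debug:
                # Ask the real guid to kill the stubdom one once the
                # in-VM agent connects; in debug mode keep both running.
                cmd += ['-K', str(stubdom_xid)]
            subprocess.check_call(cmd)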
We use only one device-mapper layer for HVMs, and it isn't the same one as
for PV - it is the layer that PV sets up in the initramfs.
Device-mapper layers summary for template-based VMs:
PV: root.img+root-cow.img (dom0) -> xvda, xvda+volatile.img (VM)
HVM: root.img+volatile.img (dom0)
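A rough sketch of that single dom0 layer for a template-based HVM (helper
name and the exact losetup/dmsetup usage are assumptions, not the actual
block script):

    import subprocess

    def setup_hvm_root(name, root_img, volatile_img):
        # Attach both images as loop devices.
        origin = subprocess.check_output(
            ['losetup', '-f', '--show', root_img]).decode().strip()
        cow = subprocess.check_output(
            ['losetup', '-f', '--show', volatile_img]).decode().strip()
        # One snapshot layer in dom0: root.img (read-only origin)
        # + volatile.img (copy-on-write).
        size = subprocess.check_output(['blockdev', '--getsz', origin])
        table = '0 {} snapshot {} {} P 256'.format(
            int(size.decode()), origin, cow)
        subprocess.check_call(
            ['dmsetup', 'create', 'snapshot-' + name, '--table', table])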
Remove the artificial attribute '_start_guid_first' and use
guiagent_installed directly. This makes it much clearer that the stubdom
guid is started in debug mode even when guiagent_installed is set.
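A sketch of the resulting condition (attribute names as referenced above,
wrapped in a standalone function only for illustration):

    def want_stubdom_guid(guiagent_installed, debug):
        # No artificial '_start_guid_first': the stubdom guid is wanted
        # whenever there is no in-VM gui agent, and additionally in debug
        # mode even if the agent is installed.
        return (not guiagent_installed) or debug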
Define it only when really needed:
- during VM creation - to generate a UUID
- just before VM startup
As a consequence we must handle a possible exception when accessing
vm.libvirt_domain. It would be a good idea to make this field private in
the future; that isn't possible for now, because the block_* functions are
external to the QubesVm class.
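A minimal sketch of the lazy definition and the exception handling it
forces on callers (class layout, attribute and helper names are
assumptions):

    import libvirt

    class QubesVm(object):
        def __init__(self, uuid, vmm_connection):
            self.uuid = uuid
            self._vmm = vmm_connection
            self._libvirt_domain = None

        @property
        def libvirt_domain(self):
            # Look the domain up only on first access; it is defined
            # during VM creation / just before startup, not here.
            if self._libvirt_domain is None:
                self._libvirt_domain = \
                    self._vmm.libvirt_conn.lookupByUUID(self.uuid.bytes)
            return self._libvirt_domain

Callers then have to cope with the lookup failing, e.g.:

    try:
        xid = vm.libvirt_domain.ID()
    except libvirt.libvirtError:
        xid = -1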
This hopefully fixes a race condition where Qubes Manager tries to access
libvirt_domain (via some QubesVm.*) at the same time as another tool is
removing the domain. Additionally, if Qubes Manager lost that race, it
could define the domain again, leaving an unused libvirt domain behind
(blocking that domain name for future use).
Currently the stubdom XID is (the last thing?) read directly from
Xenstore, as there is no libvirt function for it.
This means that even when an HVM is running, it may have no connection to
Xenstore. For now return -1 in such a situation.
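A sketch of that read (the Xenstore path and the helper name are
assumptions):

    import xen.lowlevel.xs

    def get_stubdom_xid(xid):
        # No libvirt API for this, so ask Xenstore directly; return -1
        # when the entry is missing (e.g. no Xenstore connection).
        xs = xen.lowlevel.xs.xs()
        value = xs.read('',
            '/local/domain/{}/image/device-model-domid'.format(xid))
        if value is None:
            return -1
        return int(value)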
1. The fake dom0 object doesn't need proper maxmem or vcpus - set them
statically to 0 instead of reading them from the physical host.
2. QubesHVM doesn't preserve the maxmem setting, so set it to self.memory
earlier (to suppress the default total_memory/2 calculation).
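A tiny sketch of both adjustments (hypothetical helper; attribute names as
above):

    def fixup_test_config(dom0_vm, hvm):
        # Fake dom0 object: static values instead of querying the host.
        dom0_vm.maxmem = 0
        dom0_vm.vcpus = 0
        # QubesHVM does not preserve maxmem, so pin it to the VM's memory
        # before the default total_memory/2 calculation kicks in.
        hvm.maxmem = hvm.memory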
This makes it easier to import the right objects in submodules (only one
object). It also implements a lazy connection - established at first
access, not at module import - which speeds up tools that don't need
runtime information (like qvm-prefs or qvm-service). In the future this
will ease the migration from xenstore to QubesDB.
Also implement an "offline mode" - operate on qubes.xml without connecting
to the VMM, and raise an exception on any attempt to do so.
This is needed to run tools during installation, where only a minimal
set of services is started - especially no libvirt.
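A minimal sketch of such a wrapper (class and attribute names, and the
'xen:///' URI, are assumptions):

    import libvirt

    class QubesVMMConnection(object):
        def __init__(self):
            self._libvirt_conn = None
            self.offline_mode = False

        @property
        def libvirt_conn(self):
            if self.offline_mode:
                # Offline mode: qubes.xml only, no VMM access allowed.
                raise RuntimeError(
                    'VMM connection not available in offline mode')
            if self._libvirt_conn is None:
                # Lazy connection: established at first access, not at
                # module import, so tools like qvm-prefs stay fast.
                self._libvirt_conn = libvirt.open('xen:///')
            return self._libvirt_conn

    # Single shared object, imported by submodules.
    vmm = QubesVMMConnection()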
Do not pollute the environment of the calling process, otherwise all VMs
started from Qubes Manager afterwards will get QREXEC_STARTUP_NOWAIT,
which would break wait_for_session.
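A sketch of the fix, assuming the variable is set around a subprocess call
(the exact command is not shown here):

    import os
    import subprocess

    def start_qrexec_daemon(cmd, nowait):
        # Give QREXEC_STARTUP_NOWAIT only to the child, not to os.environ
        # of the calling process (e.g. Qubes Manager), so later VM starts
        # are unaffected and wait_for_session keeps working.
        env = os.environ.copy()
        if nowait:
            env['QREXEC_STARTUP_NOWAIT'] = '1'
        subprocess.check_call(cmd, env=env)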
Always start the stubdom guid; then, if guiagent_installed is set, start
the target one and kill the stubdom one once it connects. This allows the
user to see startup messages and prevents the impression of a hung VM.
Note 1: this doesn't work when the VM disables SVGA output (just after
the Windows boot splash screen).
Note 2: gui-daemon sometimes hangs after receiving SIGTERM (libvchan_wait
during libvchan_close). This looks like a stubdom gui agent problem.