For HVMs we use only one device-mapper layer, and it is not the same
one as for PV - it is the layer which, for PV, is set up in the
initramfs.
Device-mapper layers summary for template-based VMs:
PV: root.img+root-cow.img (dom0) -> xvda, xvda+volatile.img (VM)
HVM: root.img+volatile.img (dom0)
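For reference, a minimal sketch of how such a snapshot layer can be
assembled in dom0 (Python; the paths, VM names and chunk size below are
illustrative, not taken from the actual block scripts):

```python
# Sketch of the single device-mapper layer used for HVMs: root.img as
# the read-only origin plus volatile.img as the copy-on-write store,
# combined in dom0 with a dm "snapshot" target. Paths and names are
# hypothetical; the real setup is done by the dom0 block scripts.
import os
import subprocess

def sectors(path):
    # device-mapper tables are expressed in 512-byte sectors
    return os.path.getsize(path) // 512

def attach_loop(path):
    # attach a file to a free loop device, return e.g. '/dev/loop3'
    return subprocess.check_output(
        ['losetup', '--find', '--show', path]).decode().strip()

root_img = '/var/lib/qubes/vm-templates/fedora-20-x64/root.img'
volatile_img = '/var/lib/qubes/appvms/work-hvm/volatile.img'

origin = attach_loop(root_img)
cow = attach_loop(volatile_img)

# '0 <length> snapshot <origin> <cow> N 16':
# N = non-persistent snapshot, 16 = chunk size in sectors
table = '0 {} snapshot {} {} N 16'.format(sectors(root_img), origin, cow)
subprocess.check_call(
    ['dmsetup', 'create', 'work-hvm-root', '--table', table])
```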
Remove the artificial attribute '_start_guid_first' and use
guiagent_installed directly. This makes it much clearer that guid for
the stubdom is started in debug mode even if guiagent_installed is set.
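A simplified sketch of the resulting condition (the VM stub and its
debug attribute are illustrative; only guiagent_installed comes from
the actual code):

```python
# Simplified sketch - the VM stub and 'debug' are illustrative.
def start_guid_for_stubdom(vm):
    # start guid for the stubdom when no agent runs in the guest,
    # or - for debugging - even when the agent is installed
    return (not vm.guiagent_installed) or vm.debug

class VM(object):
    def __init__(self, guiagent_installed, debug):
        self.guiagent_installed = guiagent_installed
        self.debug = debug

assert start_guid_for_stubdom(VM(guiagent_installed=True, debug=True))
assert not start_guid_for_stubdom(VM(guiagent_installed=True, debug=False))
assert start_guid_for_stubdom(VM(guiagent_installed=False, debug=False))
```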
This allows assigning a PCI device to the VM even if the device does
not support proper reset. The default behaviour (when the value is
True) is to disallow such attachment (the VM will not start if such a
device is assigned). This option requires a libvirt patch.
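Roughly, the check at start time could look like this (sketch only; the
property name pci_strictreset and the message text are assumptions):

```python
class QubesException(Exception):
    pass

def check_pci_device(vm, device, device_supports_reset):
    # default behaviour (pci_strictreset=True): refuse to start the VM
    # with a device that cannot be properly reset
    if vm.pci_strictreset and not device_supports_reset:
        raise QubesException(
            '%s does not support proper reset; '
            'disable pci_strictreset to attach it anyway' % device)
```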
When this option is used, the user has probably already seen that
message. Also, some internal scripts use this option (for example the
template pre-uninstall script).
Conflicts:
qvm-tools/qvm-remove
That parameter will be used later to request memory from qmemman just
before loading the savefile into memory, so it should match the real
need. Do not allow values smaller than 400, to prevent storing
erroneous values.
Fixes qubesos/qubes-issues#973
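For illustration, the guard can be as simple as the following (the
function name and the MB unit are assumptions; only the 400 floor comes
from this change):

```python
def validate_dispvm_memory(value):
    # reject obviously erroneous values before they get stored;
    # 400 is the floor from this change (presumably MB)
    value = int(value)
    if value < 400:
        raise ValueError('DispVM memory must be at least 400')
    return value

print(validate_dispvm_memory('1024'))  # -> 1024
```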
Using the QID as the DispVM ID was a bad idea in terms of anonymity:
1. It gives some clue about the number of VMs in the system. For large
numbers, this can be quite unique.
2. If a new DispVM is started just after closing the previous one, it
gets the same ID, and consequently the same IP. When using a TorVM,
this leads to reusing the same circuit as the just-closed DispVM.
Fixes qubesos/qubes-issues#983
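A minimal sketch of the replacement scheme - a randomly drawn,
currently unused ID (the range and the used_ids bookkeeping are
assumptions):

```python
import random

_rng = random.SystemRandom()  # unpredictable source, better for anonymity

def choose_dispid(used_ids, limit=10000):
    # draw until we find an ID not used by any running DispVM, so two
    # consecutive DispVMs do not share an ID (and hence an IP)
    while True:
        dispid = _rng.randint(1, limit)
        if dispid not in used_ids:
            return dispid

print(choose_dispid({3, 7}))
```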
If the desired netvm presence differs from the one at savefile creation
time(*), defer setting the netvm until the new DispVM is running -
otherwise the kernel there will not notice the change and will either
have a (non-working) 'eth0' when it shouldn't, or will lack it when it
should have one.
Additionally set dispvm.uses_default_netvm = False, so GUI tools will
display the actual netvm value.
(*) Actually compare against the netvm set for the DispVM template
(`TEMPLATE-dvm` VM), which can differ if the user has just changed it
but not yet regenerated the DispVM savefile.
Fixes qubesos/qubes-issues#985
Related to qubesos/qubes-issues#862
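The intended control flow, as a sketch (restore_from_savefile and the
argument names are illustrative; uses_default_netvm is the real
attribute):

```python
def restore_from_savefile(dispvm):
    pass  # placeholder for the actual savefile restore

def start_dispvm(dispvm, dispvm_template, requested_netvm):
    # compare against the DispVM template, not the running config,
    # since the savefile may predate a recent netvm change
    saved_has_netvm = dispvm_template.netvm is not None
    wanted_has_netvm = requested_netvm is not None

    defer = wanted_has_netvm != saved_has_netvm
    if not defer:
        dispvm.netvm = requested_netvm
    dispvm.uses_default_netvm = False  # GUI tools show the actual value

    restore_from_savefile(dispvm)
    if defer:
        # only now will the guest kernel notice eth0 (dis)appearing
        dispvm.netvm = requested_netvm
```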
At least to have information about its backup there.
This was already done in commit
dc6fd3c8f3, but was later erroneously
reverted during the migration to libvirt.
Fixes qubesos/qubes-issues#958
In some cases qvm-sync-clock can take a long time (for example because
of network problems, or when some VM does not respond). This can lead
to multiple qvm-sync-clock instances hanging for the same reason
(blocking vchan resources). To prevent that, create a lock file and
simply abort when another instance is already running.
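One way to implement the guard, using flock (the lock file path is an
assumption):

```python
import fcntl
import sys

LOCKFILE = '/var/run/qubes/qvm-sync-clock.lock'  # path is an assumption

lock = open(LOCKFILE, 'w')
try:
    # non-blocking exclusive lock - fails if another instance holds it
    fcntl.flock(lock, fcntl.LOCK_EX | fcntl.LOCK_NB)
except IOError:
    sys.stderr.write('qvm-sync-clock already running, aborting\n')
    sys.exit(1)

# ... proceed with clock synchronization; the lock is released when the
# process exits and the file descriptor is closed
```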
Luckily it is used as an argument to commands which do not allow any
harmful arguments (virsh set(max)mem). Also, its use in an arithmetic
expression does not allow anything harmful in this place.
This can happen when initially there was no default netvm, some domain
was started, and then the default netvm was set and started - in that
case netvm.connected_vms will contain domains which aren't really
connected to it. This was happening especially in firstboot.
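One way to guard against such stale entries (sketch only; the actual
fix may differ):

```python
# Recompute the list from each VM's own netvm setting instead of
# trusting the possibly stale connected_vms bookkeeping.
def really_connected_vms(netvm):
    return [vm for vm in netvm.connected_vms if vm.netvm is netvm]
```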