Using QID for DispVM ID was a bad idea in terms of anonymity:
1. It gives some clue about the number of VMs in the system. In the case of
large numbers, this can be quite unique.
2. If a new DispVM is started just after closing the previous one, it will get
the same ID, and in consequence the same IP. When using TorVM, this leads
to reusing the same circuit as the just-closed DispVM.
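A minimal sketch of one way to avoid both problems, assuming the fix picks a
random, currently unused DispVM ID instead of deriving it from the QID (the ID
range below is illustrative, not the actual one):

    import random

    DISPID_MIN, DISPID_MAX = 1, 254  # illustrative range (fits in one IP octet)

    def choose_dispid(used_ids):
        # Keep drawing until we hit an ID that is not in use; random choice
        # also makes it unlikely that a new DispVM immediately reuses the ID
        # (and IP) of a just-closed one.
        while True:
            dispid = random.randint(DISPID_MIN, DISPID_MAX)
            if dispid not in used_ids:
                return dispid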
Fixes qubesos/qubes-issues#983
If the desired netvm presence differs from the one at savefile creation
time (*), defer setting the netvm until the new DispVM is running -
otherwise the kernel there will not notice the change and will either have
a (non-working) 'eth0' when it shouldn't, or will not have it when it
should.
Additionally set dispvm.uses_default_netvm = False, so GUI tools will
display the actual netvm value.
(*) Actually, compare to the netvm set for the DispVM template
(`TEMPLATE-dvm` VM), which can be different if the user has just changed it
but has not regenerated the DispVM savefile yet.
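A minimal sketch of the deferral described above; the function and attribute
names (start_dispvm, requested_netvm) are illustrative, not the actual
qubes-core API:

    def start_dispvm(dispvm, dispvm_template, requested_netvm):
        # Compare desired netvm presence with what the savefile (i.e. the
        # DispVM template) was created with.
        savefile_had_netvm = dispvm_template.netvm is not None
        wants_netvm = requested_netvm is not None

        if savefile_had_netvm == wants_netvm:
            # Presence matches the savefile - safe to set before starting.
            dispvm.netvm = requested_netvm
            dispvm.start()
        else:
            # Presence differs: set it only after the DispVM is running, so
            # the guest kernel notices eth0 (dis)appearing.
            dispvm.start()
            dispvm.netvm = requested_netvm

        # Make GUI tools show the actual netvm instead of the default.
        dispvm.uses_default_netvm = False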
Fixes qubesos/qubes-issues#985
Related to qubesos/qubes-issues#862
At least to have information about its backup there.
This was already done in commit
dc6fd3c8f3, but was later erroneously
reverted during the migration to libvirt.
Fixes qubesos/qubes-issues#958
In some cases qvm-sync-clock can take a long time (for example in case
of network problems, or when some VM does not respond). This can lead to
multiple qvm-sync-clock instances hanging for the same reason (and blocking
vchan resources). To prevent that, create a lock file and simply abort when
another instance is already running.
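A minimal sketch of such a single-instance guard, assuming a flock-based lock
file; the path below is illustrative, not necessarily the one qvm-sync-clock
actually uses:

    import fcntl
    import sys

    LOCKFILE = "/var/run/qubes/qvm-sync-clock.lock"  # illustrative path

    def acquire_single_instance_lock():
        lock = open(LOCKFILE, "w")
        try:
            # Non-blocking: if another instance holds the lock, fail at once.
            fcntl.flock(lock, fcntl.LOCK_EX | fcntl.LOCK_NB)
        except IOError:
            print("qvm-sync-clock already running, aborting", file=sys.stderr)
            sys.exit(1)
        return lock  # keep the file object alive for the process lifetime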
Luckily it is used as an argument to commands which do not allow any
harmful arguments (virsh set(max)mem). Also, its usage in an arithmetic
expression does not allow any harmful use in this place.
This can happen when initially there was no default netvm, some domain
was started, and then the default netvm was set and started - in that case
netvm.connected_vms will contain domains which aren't really connected
there.
This was happening especially in firstboot.
Since libvirt does not support such events (at least for the libxl driver),
we need some way to notify qubes-manager when a device is attached/detached.
Use the same protocol as for connect/disconnect, but on the target
domain.
When the user logs in, the login script will try to connect guid to all the
running VMs. If a VM is still booting at this stage, it will never
automatically get its guid (until the user tries to start some program
there). This can for example lead to a missing nm-applet icon.
This script is connected directly to the calling process, so any output
here will disrupt the qrexec service data. For example, in the case of
qubes.OpenInVM it would be prepended to the modified file while sending it
back to the source VM - and if there was no modification, it would
overwrite that file in the source VM...
Otherwise it would point at the same object, and for example changing
vm.services[] in one VM would change it also for another. That link
will be severed after reloading the VMs from qubes.xml, but at least in
the case of DispVM startup it's too late - vm.services['qubes-dvm'] is set
for the DispVM template even during normal startup, not only during
savefile preparation.
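An illustration of the shared-reference problem in plain Python; the service
names below are just examples:

    template_services = {'qubes-dvm': True}

    # Wrong: both VMs end up sharing one dict, so a change made for one
    # shows up in the other as well.
    dispvm_services = template_services
    dispvm_services['meminfo-writer'] = False
    assert template_services['meminfo-writer'] is False

    # Right: give the new VM its own copy of the dict.
    dispvm_services = dict(template_services)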
This allows specifying tight network isolation for a VM, and finally
closes one remaining way of leaking traffic around TorVM. Now when a VM is
connected to, for example, TorVM, its DispVMs will also be connected
there.
The new property can be set to (resolution sketched below):
- default (uses_default_dispvm_netvm=True) - use the same NetVM/ProxyVM as the
calling VM itself - including none, if that's the case
- None - DispVMs will be network-isolated
- some NetVM/ProxyVM - it will be used even if the calling VM is network-isolated
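A minimal sketch of how the DispVM's netvm could be resolved from this
property; attribute names here are illustrative rather than the exact
qubes-core API:

    def dispvm_netvm_for(calling_vm):
        if calling_vm.uses_default_dispvm_netvm:
            # Default: DispVM follows the calling VM's own netvm,
            # including "no netvm" if the caller is network-isolated.
            return calling_vm.netvm
        # Explicit setting: either None (isolated) or a specific
        # NetVM/ProxyVM, regardless of the calling VM's own connection.
        return calling_vm.dispvm_netvm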
Closes qubesos/qubes-issues#862