This makes it easier to handle some corner cases. One of them is having an
entry without `dir_path` defined. This may happen when migrating from R2
(using backup+restore or in-place) while some DisposableVM was running
(even if not included in the backup itself).
Fixes qubesos/qubes-issues#1124
Reported by @doncohen, thanks @wyory for providing more details.
This is required to create VMs in process of building Live system, where
libvirt isn't running.
Additionally, there is no udev in the build environment, so we need to
manually create the /dev/loop*p* nodes based on sysfs info.
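For illustration, a minimal sketch of creating those partition nodes from the
information sysfs exposes (the exact paths and permissions used by the build
scripts may differ):

```python
import os
import stat

def create_loop_partition_nodes(loop_name):
    # e.g. loop_name = "loop0"; sysfs has one subdirectory per partition
    sys_dir = "/sys/block/" + loop_name
    for entry in os.listdir(sys_dir):
        if not entry.startswith(loop_name + "p"):
            continue
        # /sys/block/loop0/loop0p1/dev contains "MAJOR:MINOR"
        with open(os.path.join(sys_dir, entry, "dev")) as f:
            major, minor = map(int, f.read().strip().split(":"))
        node = "/dev/" + entry
        if not os.path.exists(node):
            os.mknod(node, 0o600 | stat.S_IFBLK, os.makedev(major, minor))
```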
There were two bugs:
1. Firewall configuration wasn't copied during qvm-clone (it is kept in a
separate file, which is now included in vm.clone_disk_files; see the sketch
below).
2. Non-default firewall configuration wasn't stored in qubes.xml. This
means that initially the DispVM got the proper configuration (inherited from
the calling VM), but if anything caused a firewall reload (for example
starting another VM), the firewall rules were cleared to the default state
(allow all).
Fixes qubesos/qubes-issues#1032
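A minimal sketch of the first fix, assuming the per-VM firewall rules live in
a firewall.xml file inside the VM directory (names are illustrative):

```python
import os
import shutil

def clone_disk_files(src_vm, dst_vm):
    # ... copy root.img, private.img, etc. ...
    # Also copy the firewall configuration, which lives in its own file
    # and was previously skipped during qvm-clone.
    src_fw = os.path.join(src_vm.dir_path, "firewall.xml")
    if os.path.exists(src_fw):
        shutil.copy(src_fw, os.path.join(dst_vm.dir_path, "firewall.xml"))
```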
Most of them do not need a GUI (especially those started from dom0), so this
speeds things up a little (no need to wait for guid). But if some
service does need GUI access, there is a "gui" parameter.
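Purely as a hypothetical illustration of the calling convention (the real
service-invocation helper and its signature may differ):

```python
def start_service_for_vm(vm, service, gui=False):
    # Hypothetical wrapper: only wait for the GUI daemon when the service
    # actually needs to display windows; dom0-initiated services usually don't.
    if gui and not vm.is_guid_running():
        vm.start_guid()
    return vm.run_service(service, gui=gui)
```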
We use only one device-mapper layer for HVMs, and it isn't the same as
for PV - it is the one that PV sets up in the initramfs.
Device-mapper layers summary for template-based VMs:
PV: root.img+root-cow.img (dom0) -> xvda, xvda+volatile.img (VM)
HVM: root.img+volatile.img (dom0)
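To make the HVM layering concrete, here is a rough sketch of setting up a
single dm-snapshot in dom0 with root.img as the read-only origin and
volatile.img as the COW store; the exact table Qubes generates may differ:

```python
import subprocess

def setup_hvm_root(root_img, volatile_img, name="vm-root-snapshot"):
    # Attach both images as loop devices.
    root_dev = subprocess.check_output(
        ["losetup", "-f", "--show", root_img]).decode().strip()
    cow_dev = subprocess.check_output(
        ["losetup", "-f", "--show", volatile_img]).decode().strip()
    # Size of the origin in 512-byte sectors.
    sectors = subprocess.check_output(
        ["blockdev", "--getsz", root_dev]).decode().strip()
    # One snapshot layer: root.img (origin) + volatile.img (COW),
    # non-persistent, 16-sector chunks.
    table = "0 {0} snapshot {1} {2} N 16".format(sectors, root_dev, cow_dev)
    subprocess.check_call(["dmsetup", "create", name, "--table", table])
```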
Remove the artificial attribute '_start_guid_first' and use
guiagent_installed directly. This makes it much clearer that guid is started
for the stubdom in debug mode, even if guiagent_installed is set.
This allows assigning a PCI device to a VM even if the device doesn't support
proper reset. The default behaviour (when the value is True) is to not
allow such an attachment (the VM will not start if such a device is assigned).
Requires a libvirt patch for this option.
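A hedged sketch of how such a check might look before starting the VM; the
helper and property handling below are illustrative, not the exact
implementation (sysfs does expose a 'reset' attribute for devices that
support a proper reset):

```python
import os

class QubesException(Exception):
    pass

def check_pci_reset(pcidevs, strictreset=True):
    # Illustrative check only: refuse to start the VM if strict reset is
    # enabled (the default) and an assigned device lacks reset support.
    for bdf in pcidevs:                      # e.g. "0000:03:00.0"
        has_reset = os.path.exists("/sys/bus/pci/devices/%s/reset" % bdf)
        if not has_reset and strictreset:
            raise QubesException(
                "PCI device %s does not support reset; refusing to start "
                "the VM (set the new property to False to override)" % bdf)
```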
Using the QID as the DispVM ID was a bad idea in terms of anonymity:
1. It gives some clue about the number of VMs in the system. For large
numbers, this can be quite unique.
2. If a new DispVM is started just after closing the previous one, it will get
the same ID and, in consequence, the same IP. When using TorVM,
this leads to using the same circuit as the just-closed DispVM.
Fixes qubesos/qubes-issues#983
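A minimal sketch of picking a DispVM ID independent of the QID, assuming a
random value drawn from a fixed range and checked against IDs already in use
(names and range are illustrative):

```python
import random

def gen_dispvm_id(used_ids, max_id=254):
    # Draw a random ID instead of reusing the QID, so the number reveals
    # nothing about the VM count and a fresh DispVM doesn't inherit the
    # IP (and thus e.g. the Tor circuit) of the one just closed.
    while True:
        dispid = random.randint(1, max_id)
        if dispid not in used_ids:
            return dispid
```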
At least to have info about its backup there.
This was already done in commit
dc6fd3c8f3, but was later erroneously
reverted during the migration to libvirt.
Fixes qubesos/qubes-issues#958
This can happen when initially there was no default netvm, some domain
was started, and then the default netvm was set and started - then
netvm.connected_vms will contain domains which aren't really connected
there.
This was happening especially in firstboot.
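A hedged sketch of the cleanup, assuming connected_vms behaves like a list of
VM objects that can be rebuilt from each VM's netvm property (the actual data
structure may differ):

```python
def prune_connected_vms(netvm, all_vms):
    # Keep only the domains whose netvm really points at this NetVM;
    # entries left over from before the default netvm existed are dropped.
    stale = [vm for vm in netvm.connected_vms if vm.netvm is not netvm]
    for vm in stale:
        netvm.connected_vms.remove(vm)
```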
Otherwise it would point at the same object, and for example changing
vm.services[] in one VM would also change it in another. That link
is severed after reloading the VMs from qubes.xml, but at least in the
case of DispVM startup it's too late - vm.services['qubes-dvm'] gets set for
the DispVM template even during normal startup, not only during savefile
preparation.
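The fix boils down to deep-copying the inherited dictionary instead of sharing
a reference; a minimal illustration:

```python
import copy

template_services = {'qubes-dvm': False, 'meminfo-writer': True}

# Shared reference: mutating one VM's dict silently changes the other's.
vm_services_shared = template_services
vm_services_shared['qubes-dvm'] = True
assert template_services['qubes-dvm'] is True   # template polluted

# Deep copy: each VM gets its own dict, so the DispVM template keeps
# its original settings during normal startup.
template_services = {'qubes-dvm': False, 'meminfo-writer': True}
vm_services_copied = copy.deepcopy(template_services)
vm_services_copied['qubes-dvm'] = True
assert template_services['qubes-dvm'] is False
```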
This allows specifying tight network isolation for a VM, and finally
closes one remaining way of leaking traffic around TorVM. Now when a VM is
connected to, for example, TorVM, its DispVMs will also be connected
there.
The new property can be set to:
- default (uses_default_dispvm_netvm=True) - use the same NetVM/ProxyVM as the
calling VM itself - including none if that's the case
- None - DispVMs will be network-isolated
- some NetVM/ProxyVM - it will be used, even if the calling VM is network-isolated
Closes qubesos/qubes-issues#862
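A sketch of how the property resolution could look, assuming the backing
attribute is called dispvm_netvm (names are illustrative):

```python
class QubesVmSketch(object):
    def __init__(self, netvm=None):
        self.netvm = netvm
        self.uses_default_dispvm_netvm = True
        self._dispvm_netvm = None

    @property
    def dispvm_netvm(self):
        # default: DispVMs follow the calling VM's own NetVM/ProxyVM
        # (which may be None, i.e. network-isolated)
        if self.uses_default_dispvm_netvm:
            return self.netvm
        # otherwise: explicit None (isolated) or a specific NetVM/ProxyVM
        return self._dispvm_netvm
```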
Define it only when really needed:
- during VM creation - to generate UUID
- just before VM startup
As a consequence we must handle a possible exception when accessing
vm.libvirt_domain. It would be a good idea to make this field private in
the future; that isn't possible for now because the block_* functions are
external to the QubesVm class.
This hopefully fixes a race condition where Qubes Manager tries to access
libvirt_domain (via some QubesVm.*) at the same time as another tool is
removing the domain. Additionally, if Qubes Manager lost that race, it could
define the domain again, leaving an unused libvirt domain (blocking
that domain name for future use).
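A rough sketch of the lazy definition and accessor, assuming a cached
_libvirt_domain attribute and domain XML generated elsewhere (the libvirt
calls are real, everything else is illustrative):

```python
import libvirt

class QubesVmSketch(object):
    def __init__(self, name, xml):
        self._name = name
        self._xml = xml
        self._conn = libvirt.open('xen:///')
        self._libvirt_domain = None

    def define_libvirt_domain(self):
        # Called only where the definition is really needed:
        # during VM creation (to generate a UUID) and just before startup.
        self._libvirt_domain = self._conn.defineXML(self._xml)

    @property
    def libvirt_domain(self):
        if self._libvirt_domain is None:
            # May raise libvirt.libvirtError if the domain isn't defined yet;
            # callers (e.g. the block_* helpers) must handle that.
            self._libvirt_domain = self._conn.lookupByName(self._name)
        return self._libvirt_domain
```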
Provide vm.refresh(), which forces a reconnect to the QubesDB daemon
and also gets a new libvirt object (including a new ID, if any). Use this
method whenever a QubesDB call raises a DisconnectedError exception. Also
raise that exception when someone tries to talk to a QubesDB that isn't
running - instead of returning None.
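A hedged sketch of the usage pattern, assuming the python QubesDB binding
raises a DisconnectedError and that the per-VM connection is reachable as
vm.qdb (exact module layout is illustrative):

```python
import qubesdb

def qdb_read_with_refresh(vm, path):
    # If the QubesDB daemon went away (e.g. the VM was restarted), reconnect
    # via vm.refresh() and retry once instead of silently returning None.
    try:
        return vm.qdb.read(path)
    except qubesdb.DisconnectedError:
        vm.refresh()
        return vm.qdb.read(path)
```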
Libvirt will replace the domain XML when defining a new one with
the same name and UUID - this is exactly what we need. This fixes a race
condition with other processes (especially Qubes Manager), which can try
to access that libvirt domain object at the same time.
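For reference, a minimal illustration of the behaviour relied upon: calling
defineXML with a name and UUID matching an existing domain replaces its
definition in place (libvirt-python API):

```python
import libvirt

def redefine_domain(xml):
    # Defining a domain whose name and UUID match an existing one replaces
    # its XML in place, so there is no remove-then-define window in which
    # Qubes Manager could observe a missing (or stale) domain object.
    conn = libvirt.open('xen:///')
    try:
        return conn.defineXML(xml)
    finally:
        conn.close()
```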
It has nothing to do with xenstore, so change the name so it doesn't mislead.
Also get rid of the unused "xid" parameter - we should use the XID as little as
possible, because keeping it current is not a simple task.
It is used by the just-started DispVM to notice when the restore process
has completed. Alternatively it could watch its own domid, but let's do it in
a Xen-independent way.
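A heavily hedged sketch of how the DispVM side could wait for that
notification, assuming the python QubesDB binding exposes watch()/read_watch()
and that dom0 writes a key once the restore has finished (the path below is
illustrative):

```python
import qubesdb

def wait_for_restore_complete(path="/qubes-restore-complete"):
    # Instead of watching its own domid in xenstore (Xen-specific), the
    # freshly restored DispVM waits for a QubesDB key written by dom0
    # when the restore process has finished.
    db = qubesdb.QubesDB()
    db.watch(path)
    while db.read(path) is None:
        db.read_watch()   # blocks until something under 'path' changes
```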