This gives the VM more control over time synchronization. For example,
Whonix VMs can decide not to use this mechanism. The VM can also choose
how the time will be set (a chronyc call?). Finally, it will be possible
to implement the same mechanism for other OSes (Windows).
Additionally, because "date" is called as "localcmd" each time, instead
of once at the beginning, time synchronization is now more accurate. If
some VM stalls the time set call, other VMs' time will no longer be
affected (although synchronization will still be delayed).
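A rough sketch of the per-VM flow described above; the vm.run()/qvm-run calls
and attribute names here are assumptions, not the exact Qubes implementation:

    import subprocess

    def sync_vm_clock(vm, date_string):
        # The VM decides how to apply the received time; here we assume a
        # plain "date -u -s" call executed as root inside the VM.
        vm.run("date -u -s '%s'" % date_string, user="root", wait=True)

    def sync_all(qvm_collection, clockvm):
        for vm in qvm_collection.values():
            if vm.qid == clockvm.qid or not vm.is_running():
                continue
            # Fetch a fresh timestamp for every VM, so one stalled VM does
            # not skew the time written to the others.
            date_string = subprocess.check_output(
                ["qvm-run", "--pass-io", clockvm.name, "date -u"]
            ).decode().strip()
            sync_vm_clock(vm, date_string)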
This allows assigning a PCI device to a VM even if the device doesn't
support proper reset. The default behaviour (when the value is True) is
to not allow such attachment (the VM will not start if such a device is
assigned).
Requires a libvirt patch for this option.
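An illustrative use of the new per-VM flag; the property name
pci_strictreset used below is an assumption, not necessarily the final name:

    # Allow attaching a PCI device that does not support a proper reset
    # (hypothetical property name, per the description above).
    vm = qvm_collection.get_vm_by_name("work")
    vm.pci_strictreset = False
    vm.pcidevs.append("03:00.0")
    qvm_collection.save()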
When this option is used, the user has probably already seen that
message. Also, some internal scripts use it (for example the template
pre-uninstall script).
Conflicts:
qvm-tools/qvm-remove
In some cases qvm-sync-clock can take a long time (for example in case
of network problems, or when some VM does not respond). This can lead to
multiple qvm-sync-clock instances hanging for the same reason (blocking
vchan resources). To prevent that, create a lock file and simply abort
when one instance is already running.
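A minimal sketch of that locking scheme, assuming a lock file under
/var/run/qubes (the exact path and message are assumptions):

    import fcntl
    import sys

    lockfile = open("/var/run/qubes/qvm-sync-clock.lock", "w")
    try:
        fcntl.lockf(lockfile, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except IOError:
        # Another instance already holds the lock - abort instead of
        # piling up more hanging qvm-sync-clock processes.
        sys.stderr.write("qvm-sync-clock already running, aborting\n")
        sys.exit(1)
    # ... proceed with clock synchronization while holding the lock ...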
This allows specifying tight network isolation for a VM, and finally
closes one remaining way of leaking traffic around TorVM. Now when a VM
is connected to, for example, TorVM, its DispVMs will also be connected
there.
The new property can be set to:
- default (uses_default_dispvm_netvm=True) - use the same NetVM/ProxyVM as the
calling VM itself - including none, if that's the case
- None - DispVMs will be network-isolated
- some NetVM/ProxyVM - it will be used, even if the calling VM is network-isolated
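A sketch of how a getter could resolve those three cases; attribute names
follow the description above and the actual code may differ:

    @property
    def dispvm_netvm(self):
        if self.uses_default_dispvm_netvm:
            # Follow whatever the calling VM itself uses - including no NetVM.
            return self.netvm
        # Explicit setting: either None (network-isolated DispVMs) or a
        # specific NetVM/ProxyVM, regardless of the calling VM's own NetVM.
        return self._dispvm_netvm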
Closes qubesos/qubes-issues#862
It has nothing to do with xenstore, so change the name to avoid
misleading. Also get rid of the unused "xid" parameter - we should use
XID as little as possible, because keeping it current is not a simple
task.
A long time ago passio=True was used to replace the current process with
qrexec-client directly (qvm-run --pass-io was the caller), but this
behaviour is not used anymore (qvm-run was the only user). The option
was left untouched, with a misleading name - one would assume that
using passio=False should disallow any I/O, but this isn't the case.
In particular qvm-sync-clock calls clockvm.run('...', wait=True) with
the default passio=False. This causes output from an untrusted VM to be
printed without sanitising terminal sequences, which can be fatal.
This patch changes the passio semantics to actually do what the name
means: when set to True, the VM process will be able to interact with
stdin/stdout/stderr; when set to False, all those FDs will be
connected to /dev/null.
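A minimal sketch of the new semantics, assuming a subprocess-based run()
wrapper; the qrexec-client invocation is simplified and illustrative:

    import subprocess

    def run(vm_name, command, passio=False, wait=True):
        if passio:
            # Let the VM process interact with the caller's stdin/stdout/stderr.
            stdin = stdout = stderr = None
        else:
            # Never pass untrusted VM output to the caller's terminal.
            null = open("/dev/null", "r+")
            stdin = stdout = stderr = null
        p = subprocess.Popen(["qrexec-client", "-d", vm_name, command],
                             stdin=stdin, stdout=stdout, stderr=stderr)
        return p.wait() if wait else p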
Conflicts:
core-modules/000QubesVm.py
- script redesign,
- fixed VT-d, VT-x detection,
- Support File generation is optional,
- the results are kept in dom0 by default,
- version and usage info added.
(cherry picked from commit f5845b2df1db19da37f02ace24f29a82660c39ff)
This makes it easier to import the right objects in submodules (only one
object). It also implements lazy connection - at first access, not at
module import - which speeds up tools that don't need runtime
information (like qvm-prefs or qvm-service). In the future this will
ease migration from xenstore to QubesDB.
Also implement an "offline mode" - operate on qubes.xml without
connecting to the VMM - and raise an exception on any such attempt.
This is needed to run tools during installation, where only a minimal
set of services is started, especially no libvirt.
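A rough sketch of the single-object, lazy-connection idea; class and
attribute names are illustrative, not the exact implementation:

    class QubesException(Exception):
        pass

    class QubesVMMConnection(object):
        def __init__(self):
            self._libvirt_conn = None
            self.offline_mode = False

        @property
        def libvirt_conn(self):
            if self.offline_mode:
                # Offline mode: operate on qubes.xml only, never touch the VMM.
                raise QubesException(
                    "VMM connection not available in offline mode")
            if self._libvirt_conn is None:
                # Lazy connection: established on first access, not at import.
                import libvirt
                self._libvirt_conn = libvirt.open(None)
            return self._libvirt_conn

    # Single shared object imported by submodules and tools:
    vmm = QubesVMMConnection()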
When the system is going down, systemd kills all the user's processes,
including the 'xl' daemons waiting for domain shutdown. This results in
zombie domains not being cleaned up. The proper fix would be to somehow
extract those processes from the user session scope (most likely by
starting them as a service).
But because this applies only to system shutdown (qvm-shutdown is
called there), it is simpler to add appropriate handling code to
qvm-shutdown.
In R3 the problem will vanish, because the libvirtd daemon is used, so
no user processes are required to track domain state.
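One possible shape of that extra handling in qvm-shutdown, a sketch under
the assumption that the tool simply polls the domain state itself:

    import time

    def wait_for_shutdown(vm, timeout=60):
        # Poll the domain state from qvm-shutdown itself, instead of relying
        # on a per-domain 'xl' waiting process that systemd may kill during
        # system shutdown.
        start = time.time()
        while time.time() - start < timeout:
            if not vm.is_running():
                return True
            time.sleep(1)
        return False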