Display CPU and memory usage similarly to qvm-ls, but ordered by CPU
time. Also add a one-line summary switch which includes the top n
CPU-consuming VMs and the total memory consumption. The intended usage
is e.g. embedding it in a window manager widget.
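A rough sketch of the ordering and the one-line summary; the per-VM
accessors get_cpu_usage()/get_mem() and the vms collection are
illustrative assumptions, not the real qhost interface:

    # Hypothetical sketch: order running VMs by CPU usage and build a
    # one-line summary; memory units follow whatever get_mem() returns.
    def summary_line(vms, top_n=3):
        running = [vm for vm in vms if vm.is_running()]
        running.sort(key=lambda vm: vm.get_cpu_usage(), reverse=True)
        top = ", ".join("{}:{:.0f}%".format(vm.name, vm.get_cpu_usage())
                        for vm in running[:top_n])
        total_mem = sum(vm.get_mem() for vm in running)
        return "{} | mem: {}".format(top, total_mem)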
qhost.measure_cpu_usage expects the qvm_collection as a parameter.
Also, the number of vcpus of dom0 seems to be 0, leading to a division
by zero. A more complete fix would probably involve e.g. a new
num_cores property, which would contain the number of vcpus for regular
VMs and the number of actual cores for dom0.
For now this is a partial solution.
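A minimal sketch of the guard against the zero-vcpu case, assuming
usage is normalized per core; the real measure_cpu_usage arithmetic
differs in detail:

    # Hypothetical sketch: dom0 may report 0 vcpus, so fall back to 1
    # before dividing. A fuller fix would be the num_cores property
    # described above.
    def cpu_usage_percent(cpu_time_delta, wall_time_delta, vcpus):
        cores = vcpus if vcpus > 0 else 1
        return 100.0 * cpu_time_delta / (wall_time_delta * cores)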
The following list is bollocks. There were many, many more.
Conflicts:
core-modules/003QubesTemplateVm.py
core-modules/005QubesNetVm.py
core/qubes.py
core/storage/__init__.py
core/storage/xen.py
doc/qvm-tools/qvm-pci.rst
doc/qvm-tools/qvm-prefs.rst
qubes/tools/qmemmand.py
qvm-tools/qvm-create
qvm-tools/qvm-prefs
qvm-tools/qvm-start
tests/__init__.py
vm-config/xen-vm-template-hvm.xml
This commit took 2 days (26-27.01.2016) and put our friendship to the test.
--Wojtek and Marek
In some (most) cases the VM needs to be started to complete the resize
operation. This may be unexpected, so make it clear and do not start
the VM unless the user has explicitly allowed that.
Fixes QubesOS/qubes-issues#1268
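A minimal sketch of the intended flow; allow_start and
resize_private_img are hypothetical names used only for illustration:

    # Hypothetical sketch: do not silently start the VM just to finish
    # the resize; require an explicit opt-in from the user.
    def resize_private_storage(vm, size_bytes, allow_start=False):
        if not vm.is_running() and not allow_start:
            raise RuntimeError(
                "completing the resize requires starting the VM; "
                "pass allow_start=True to permit that")
        started_here = False
        if not vm.is_running():
            vm.start()
            started_here = True
        vm.resize_private_img(size_bytes)  # assumed resize entry point
        if started_here:
            vm.shutdown()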
Forgetting this leads to mysterious errors (the VM starts with a
different kernel than the one that was just set), so simplify the API.
Fixes QubesOS/qubes-issues#1400
This should be done in the QubesVm class (a property setter). But
since we're going to rewrite both qvm-prefs and QubesVm, settle on a
smaller change for now.
QubesOS/qubes-issues#1184
This should be done in the QubesVm class (a property setter). But
since we're going to rewrite both qvm-prefs and QubesVm, settle on a
smaller change for now.
Fixes QubesOS/qubes-issues#1273
This is required to create VMs in the process of building a Live
system, where libvirt isn't running.
Additionally there is no udev in the build environment, so the
/dev/loop*p* devices need to be created manually based on sysfs info.
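A minimal sketch of that manual node creation, assuming the standard
sysfs layout (/sys/block/loopX/loopXpN/dev holding major:minor); the
surrounding build-environment plumbing is omitted:

    import os
    import stat

    # Hypothetical sketch: with no udev around, create /dev/loopXpN
    # block nodes by hand from the major:minor numbers in sysfs.
    def create_loop_partition_nodes(loopdev):        # e.g. "loop0"
        sys_dir = os.path.join('/sys/block', loopdev)
        for entry in os.listdir(sys_dir):
            if not entry.startswith(loopdev + 'p'):
                continue
            with open(os.path.join(sys_dir, entry, 'dev')) as f:
                major, minor = (int(x) for x in f.read().strip().split(':'))
            node = os.path.join('/dev', entry)
            if not os.path.exists(node):
                os.mknod(node, 0o600 | stat.S_IFBLK,
                         os.makedev(major, minor))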
This way it gives the VM more control over time synchronization. For
example, Whonix VMs can decide not to use this mechanism. Also, the VM
can choose how that time will be set (a chronyc call?). And finally, it
will be possible to implement the same for other OSes (Windows).
Additionally, because date is called as "localcmd" for each VM, instead
of once at the beginning, time synchronization is more accurate now. If
some VM stalls the time-setting call, other VMs' time will no longer be
affected (though synchronization will still be delayed).
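A minimal sketch of the per-VM step; vm.run() and its arguments are
assumptions about the calling code, not the real API:

    import subprocess

    # Hypothetical sketch: read the clock right before each VM is
    # handled (the "localcmd" date call) instead of once up front, so a
    # VM that stalls the call cannot hand later VMs a stale timestamp.
    def sync_one_vm(vm):
        now = subprocess.check_output(
            ['date', '-u', '+%Y-%m-%d %H:%M:%S UTC']).decode().strip()
        vm.run('date -u -s "{}"'.format(now), user='root', wait=True)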
This allows assigning a PCI device to the VM even if it doesn't
support proper reset. The default behaviour (when the value is True) is
to not allow such attachment (the VM will not start if such a device is
assigned).
Requires a libvirt patch for this option.
When this option is used, the user has probably already seen that
message. Also, some internal scripts are using this (for example the
template pre-uninstall script).
Conflicts:
qvm-tools/qvm-remove
In some cases qvm-sync-clock can take a long time (for example in case
of network problems, or when some VM does not respond). This can lead
to multiple qvm-sync-clock instances hanging for the same reason
(blocking vchan resources). To prevent that, create a lock file and
simply abort when another instance is already running.
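A minimal sketch of the single-instance guard, assuming a fixed
lock-file path; the real location and error handling may differ:

    import fcntl
    import sys

    LOCKFILE = '/var/run/qubes/qvm-sync-clock.lock'   # assumed path

    def acquire_instance_lock():
        # Keep the returned handle open for the whole run; the lock is
        # released automatically when the process exits.
        lock = open(LOCKFILE, 'w')
        try:
            fcntl.lockf(lock, fcntl.LOCK_EX | fcntl.LOCK_NB)
        except IOError:
            print('qvm-sync-clock already running, aborting',
                  file=sys.stderr)
            sys.exit(1)
        return lock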
This allows specifying tight network isolation for a VM, and finally
closes one remaining way of leaking traffic around TorVM. Now when a VM
is connected to, for example, TorVM, its DispVMs will also be connected
there.
The new property can be set to (see the sketch below):
- default (uses_default_dispvm_netvm=True) - use the same NetVM/ProxyVM
  as the calling VM itself - including none if that's the case
- None - DispVMs will be network-isolated
- some NetVM/ProxyVM - it will be used even if the calling VM is
  network-isolated
Closes qubesos/qubes-issues#862
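A minimal sketch of how the effective NetVM for a VM's DispVMs could
be resolved; uses_default_dispvm_netvm mirrors the name above, the
surrounding class and attributes are illustrative only:

    class VmSketch(object):
        # Hypothetical stand-in for QubesVm, showing only the
        # resolution logic.
        def __init__(self, netvm=None, dispvm_netvm=None,
                     use_default=True):
            self.netvm = netvm             # the VM's own NetVM/ProxyVM
            self._dispvm_netvm = dispvm_netvm
            self.uses_default_dispvm_netvm = use_default

        @property
        def dispvm_netvm(self):
            if self.uses_default_dispvm_netvm:
                # follow the calling VM's own NetVM/ProxyVM, including
                # None
                return self.netvm
            # explicit value: None means network-isolated DispVMs,
            # otherwise the given NetVM/ProxyVM is used even if the VM
            # itself is isolated
            return self._dispvm_netvm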
It has nothing to do with xenstore, so change the name to avoid
misleading. Also get rid of the unused "xid" parameter - we should use
XIDs as little as possible, because keeping them current is not a
simple task.
A long time ago passio=True was used to replace the current process
with qrexec-client directly (qvm-run --pass-io was the caller), but
this behaviour is not used anymore (qvm-run was the only user). The
option was left untouched, with a misleading name - one would assume
that passio=False should disallow any I/O, but this isn't the case.
In particular, qvm-sync-clock calls clockvm.run('...', wait=True) with
the default value of passio=False. This causes data from an untrusted
VM to be output without sanitising terminal sequences, which can be
fatal.
This patch changes the passio semantics to actually do what the name
means - when set to True, the VM process will be able to interact with
stdin/stdout/stderr; when set to False, all those FDs will be connected
to /dev/null.
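A minimal sketch of the new semantics, modelled with a plain
subprocess (the real code drives qrexec-client):

    import subprocess

    # Hypothetical sketch: passio=True lets the VM process interact
    # with the caller's stdin/stdout/stderr; passio=False ties every FD
    # to /dev/null so untrusted output never reaches the terminal.
    def run_in_vm(argv, passio=False, wait=True):
        if passio:
            stdin = stdout = stderr = None  # inherit the caller's FDs
        else:
            stdin = stdout = stderr = subprocess.DEVNULL
        proc = subprocess.Popen(argv, stdin=stdin, stdout=stdout,
                                stderr=stderr)
        if wait:
            proc.wait()
        return proc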
Conflicts:
core-modules/000QubesVm.py
- script redesign,
- fixed VT-d, VT-x detection,
- Support File generation is optional,
- the results are kept in dom0 by default,
- version and usage info added.
(cherry picked from commit f5845b2df1db19da37f02ace24f29a82660c39ff)
This makes it easier to import the right objects in submodules (only
one object). It also implements a lazy connection - established at
first access, not at module import - which speeds up tools that don't
need runtime information (like qvm-prefs or qvm-service). In the future
this will ease the migration from xenstore to QubesDB.
Also implement an "offline mode" - operate on qubes.xml without
connecting to the VMM - and raise an exception on any such attempt.
This is needed to run tools during installation, where only a minimal
set of services is started, and especially no libvirt.
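A minimal sketch of the lazy connection and the offline mode, with
illustrative names; the real wrapper differs in detail:

    import libvirt

    class VMMConnectionSketch(object):
        # Hypothetical sketch: one shared object is imported by
        # submodules, the libvirt connection is opened only on first
        # access, and offline mode turns any such access into an error.
        def __init__(self, offline_mode=False):
            self._offline_mode = offline_mode
            self._libvirt_conn = None

        @property
        def libvirt_conn(self):
            if self._offline_mode:
                raise RuntimeError(
                    'VMM connection requested in offline mode')
            if self._libvirt_conn is None:
                self._libvirt_conn = libvirt.open('xen:///')
            return self._libvirt_conn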
When the system is going down, systemd kills all the user's processes,
including the 'xl' daemons waiting for domain shutdown. This results in
zombie domains that are not cleaned up. The proper fix would be to
somehow extract those processes from the user session scope (most
likely by starting them as a service).
But because this applies only to system shutdown (which is where
qvm-shutdown is called), it is simpler to add appropriate handling code
to qvm-shutdown.
In R3 the problem will vanish, because the libvirtd daemon is used, so
no user processes are required to track domain state.