Currently:
* Create AppVM
* Remove AppVM
* Clone VM
* Start/Resume VM
* [...] VM
The first two are inconsistent. @bnvk and I agreed that those should be
changed from AppVM to VM for consistency.
And I would add that, if anything, it would have to be "Create
TemplateBased-VM", because currently, if you click "Create AppVM", the
next wizard asks whether you want to create an AppVM, NetVM or ProxyVM.
So the term AppVM is overloaded.
This commit fixes this.
xterm closes itself immediately when the specified command ends, so
wait for user input to give them a chance to read the message
(potentially some error info). Also set a more meaningful window title.
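A minimal sketch of the idea (not the exact qubesmanager code; the
helper name is made up), assuming the command is run through a shell
inside xterm:

    import subprocess

    def run_in_xterm(command, title="Qubes VM command"):
        # Wrap the command so xterm waits for a keypress before closing,
        # giving the user a chance to read any error output.
        wrapped = command + "; echo; read -p 'Press Enter to close...'"
        subprocess.Popen(
            ["xterm", "-title", title, "-e", "bash", "-c", wrapped])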
QubesOS/qubes-issues#982
The "default NetVM" is usually the first created ProxyVM which is
set by qubes-core during its creation. [1] If there is no ProxyVM,
there is no "default NetVM". Therefore, creating an AppVM and
launching its settings dialog raised AttributeError, because
get_default_netvm method returned None.
This can be reproduced by installing QubesOS without having the
installer create any VMs.
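A hedged sketch of the guard, with approximate names (the real settings
dialog code differs):

    def default_netvm_name(qvm_collection):
        # get_default_netvm() may return None when no ProxyVM exists yet,
        # so never dereference the result unconditionally.
        default_netvm = qvm_collection.get_default_netvm()
        return default_netvm.name if default_netvm is not None else "none"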
[1] https://github.com/QubesOS/qubes-core-admin/blob/master/core/qubes.py#L355
Fixes qubesos/qubes-issues#1008
Qubes manager used different logic than qubes core for what it
considers a "running VM".
Here it was "running or starting/stopping", while qubes core uses the
same definition as libvirt (isActive()), which effectively means "not
halted" - and thus also includes "paused" and "suspended". This created
a lot of confusion about which action should be available when.
The actual bug detected was about resuming a paused VM: there was an
assert "not vm.is_running()", while a paused VM _is_ running in terms
of qubes core.
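An illustrative sketch of the distinction (simplified, not verbatim
qubes core code):

    def is_running(vm):
        # qubes core follows libvirt: isActive() is true for any domain
        # that is not halted, including "paused" and "suspended".
        return bool(vm.libvirt_domain.isActive())

    # So a resume action must not assert "not vm.is_running()":
    # a paused VM *is* running in these terms.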
Fixes qubesos/qubes-issues#981
QubesVmCollection is not thread-safe. If, for example, update_table()
is called during some long-running task (like creating or removing a
VM), it will try to reload qubes.xml (so first acquire a read lock),
but the thread already holds a lock on this file. This would result in
a "Lock already taken" exception.
Fixes qubesos/qubes-issues#986
The QubesVm object caches some domain state (the domain ID in the
libvirt object, the Qubes DB connection socket), which can become out
of date after start/stop events. Currently it needs a manual trigger to
refresh itself.
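A hedged sketch of what such a refresh has to invalidate (attribute and
method names are hypothetical, not the actual QubesVm API):

    class QubesVmCacheSketch:
        def refresh_cached_state(self):
            # Drop handles that go stale across start/stop events so they
            # are re-established on next access.
            self._libvirt_domain = None   # cached libvirt domain handle/ID
            self._qdb_connection = None   # cached Qubes DB connection socket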
This reverts commit 227597cf93.
QubesWatch no longer supports xenstore, so there is no simple way to
get this column updated. This is a conscious decision in the process of
making R3 Xen-independent.
Conflicts:
qubesmanager/main.py
Those changes take effect only after a VM restart (at least for VM
window borders), so to avoid confusing the user with partially updated
colors, simply block the change while the VM is running. The same
applies to the VM name.
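A minimal sketch of the behaviour (widget names are hypothetical, not
the actual settings dialog code):

    def apply_running_state(self, vm):
        # Label and name changes only fully apply after a restart, so grey
        # the controls out instead of applying a half-effective change.
        editable = not vm.is_running()
        self.label_combo.setEnabled(editable)
        self.name_edit.setEnabled(editable)
        if not editable:
            self.label_combo.setToolTip(
                "Can be changed only when the VM is halted")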
Check init_mem and max_mem_size in a single function (merging the
previous two), taking into account the minimum initial memory that
allows the requested maximum memory.
Explanation:
The Linux kernel needs space for memory-related structures created at
boot. If init_mem is just 400MB, then max_mem can't balloon above 4.3GB
(at which point it yields "add_memory() failed: -17" messages and apps
crash), regardless of the max_mem_size value.
Based on Marek's findings and my tests on a 16GB PC, running several
processes like:
    stress -m 1 --vm-bytes 1g --vm-hang 100
the results are the following data points:
init_mem (MB) ==> actual max memory (MB)
 400               4300
 700               7554
 800               8635
1024              11051
1200              12954
1300              14038
1500              14045  <== probably capped on my 16GB system
The actual ratio of max_mem_size/init_mem is surprisingly constant at
about 10.79.
If less initial memory is set than that ratio allows, the configured
max_mem_size is unreachable and the VM becomes unstable (apps crash).
Based on qubes-devel discussion titled "Qubes Dom0 init memory against
Xen best practices?" at:
https://groups.google.com/d/msg/qubes-devel/VRqkFj1IOtA/UgMgnwfxVSIJ
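A hedged sketch of the combined check implied above (function and
constant names are made up; 10.79 is the empirical ratio from the table
above, not a documented Xen limit):

    MAX_MEM_TO_INIT_MEM_RATIO = 10.79   # observed max_mem_size / init_mem limit

    def check_memory_settings(init_mem_mb, max_mem_mb):
        # Return an error message, or None if the combination looks safe.
        if init_mem_mb > max_mem_mb:
            return "Initial memory cannot exceed maximum memory."
        if max_mem_mb > MAX_MEM_TO_INIT_MEM_RATIO * init_mem_mb:
            min_init = int(max_mem_mb / MAX_MEM_TO_INIT_MEM_RATIO) + 1
            return ("Initial memory too low for the requested maximum; "
                    "use at least %d MB." % min_init)
        return None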