The QubesException class is used with meaningful messages, so it is
safe to use its message directly as the error message. For other
exceptions, still print the full traceback (they most likely indicate
a bug somewhere, not a user error).
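A minimal sketch of that pattern (the tool body here is a hypothetical
placeholder, not the actual code):

    import sys
    import traceback

    from qubesadmin.exc import QubesException

    def run_tool():
        # hypothetical stand-in for whatever the tool actually does
        raise QubesException('no such domain: work')

    def main():
        try:
            run_tool()
        except QubesException as e:
            # message is meaningful on its own - no traceback needed
            print('error: {!s}'.format(e), file=sys.stderr)
            return 1
        except Exception:  # pylint: disable=broad-except
            # most likely a bug somewhere, not a user error
            traceback.print_exc()
            return 1
        return 0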
Fixes QubesOS/qubes-issues#3610
Don't print a None value as the string 'None', but as an empty one
(the same as at the API level). Otherwise it is indistinguishable from
a VM named 'None', or from a string property with that value.
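For illustration, the formatting rule is just (helper name
hypothetical):

    def format_value(value):
        # None serializes to the empty string, same as at the API
        # level; str(None) would collide with a VM named 'None'
        return '' if value is None else str(value)

    assert format_value(None) == ''
    assert format_value('None') == 'None'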
This is especially important on an LVM thin pool, where space freed by
removing the file needs to be given back to the pool, to be reused for
other volumes (for example this template).
The qvm-start-gui lifecycle should be bound to the X server lifecycle.
It should be restarted when the user logs off and logs in again, at
least to start the gui-daemons again.
Do that by opening a connection to the X server and reacting to that
socket breaking.
Fixes QubesOS/qubes-issues#3147
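A sketch of the mechanism, assuming the standard /tmp/.X11-unix socket
path (not necessarily the exact code used):

    import asyncio
    import os

    async def wait_for_session_end():
        # connect to the X server's unix socket and wait for EOF -
        # read() returns only when the X server goes away
        display = os.environ.get('DISPLAY', ':0')
        socket_path = ('/tmp/.X11-unix/X'
                       + display.split(':')[1].split('.')[0])
        reader, writer = await asyncio.open_unix_connection(socket_path)
        try:
            await reader.read()
        finally:
            writer.close()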
I don't know if any template currently hits this code path; even the
fedora-26-minimal root.img is large enough to be split into multiple
parts. Maybe Arch Linux?
Related to https://github.com/QubesOS/qubes-core-admin/pull/188
'qvm-run --dispvm' cannot easily make a separate qubes.WaitForSession
call. Instead, if --gui is active, pass the new WaitForSession argument
to qubes.VMShell, which will do the equivalent.
The unit tests have been copied (in slightly adapted form) from commit
a620f02e2a.
Fixes QubesOS/qubes-issues#3012
Closes QubesOS/qubes-core-admin-client#49
In core-admin matching collections are real dicts, so clone this API
behaviour here too. Specific changes:
- iteration yields keys, not values
- implement values and items methods
Additionally fix the keys method; it was broken on python2 (lists have
no copy method).
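A sketch of the resulting behaviour (class and storage names
hypothetical):

    class VMCollection(object):
        def __init__(self):
            self._dict = {}

        def __iter__(self):
            # iteration yields keys, as the real dicts in core-admin do
            return iter(self._dict.keys())

        def keys(self):
            # list(...) works on python2 as well; list.copy() does not
            return list(self._dict.keys())

        def values(self):
            return [self._dict[key] for key in self]

        def items(self):
            return [(key, self._dict[key]) for key in self]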
It wasn't possible to use QubesArgumentParser(vmname_nargs=...) for an
optional domain list - the option forced usage of either --all or an
explicit domain list.
When starting a VM with --cdrom=some-vm:/some/path/to.iso, it can be
started only when a loop device matching the path is available. For
now, add naive waiting (while ... sleep(1)) for it. Later it might be
worth converting it to event handling.
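The waiting could look roughly like this (the invocation and helper
name are illustrative, not the actual code):

    import subprocess
    import time

    def wait_for_loop_device(vm_name, path, timeout=60):
        # naive polling: 'losetup -j PATH' prints nothing until a
        # loop device backed by PATH exists
        for _ in range(timeout):
            out = subprocess.check_output(
                ['qvm-run', '--pass-io', vm_name,
                 'losetup -j ' + path])
            if out.strip():
                return out.decode().split(':', 1)[0]  # e.g. /dev/loop0
            time.sleep(1)
        raise RuntimeError('no loop device for ' + path)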
* qubesos/pr/44:
Fix style else-return
tests: update qvm-template-process and qvm-remove tests
Add --force to manpage.
Avoid cloning installed_by_rpm
Print vm list before prompt
Use --force instead of --yes
Toggle installed_by_rpm in template tool
Fix error message grammar
Add --yes option and confirm prompt.
The main process sometimes sets fd 1 to O_NONBLOCK, and since in the
terminal case fd 0 and 1 are the same fd, this also results in fd 0
being non-blocking, causing qvm-run to crash with EAGAIN.
So just make the code work for both blocking and non-blocking stdin.
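In practice that means reading through the raw fd and tolerating the
non-blocking case, roughly:

    import os
    import select

    def read_stdin(size=4096):
        # works whether fd 0 is blocking or was flipped to O_NONBLOCK
        # behind our back (fd 0 and 1 can be the very same terminal fd)
        while True:
            try:
                return os.read(0, size)  # b'' means EOF
            except BlockingIOError:
                # non-blocking and no data yet - wait for some
                select.select([0], [], [])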
When some VMs time out on shutdown, the tool will try to kill all of
them, but at this point some of them may already be powered off (not
all of them hung during shutdown, only some). Handle this situation
instead of crashing, and add an appropriate test.
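The handling amounts to treating 'already powered off' as success when
killing, e.g. (a sketch; the exception name is assumed to mirror the
server-side qubes.exc):

    from qubesadmin import exc

    def kill_remaining(timed_out_vms):
        for vm in timed_out_vms:
            try:
                vm.kill()
            except exc.QubesVMNotStartedError:
                # it finished shutting down on its own in the meantime
                pass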
1. The output of the `losetup` command contains `\n` - strip it.
2. Provide a read-only option - if the device info hasn't propagated
to qubesd yet, it will not be set automatically.
Fixes QubesOS/qubes-issues#3146
Very few calls on the client side really need the VM class name. So,
even in non-blind mode use just the QubesVM class, to avoid strange
cases depending on blind mode being enabled or not. Then, expose the
VM class name in the 'klass' property. If known at object creation
time, cache it; otherwise query qubesd on first access.
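A sketch of the lazy 'klass' property (the qubesd call shape and reply
parsing are approximations here):

    class QubesVM(object):
        def __init__(self, app, name, klass=None):
            self.app = app
            self.name = name
            self._klass = klass  # cached if known at creation time

        @property
        def klass(self):
            if self._klass is None:
                # query qubesd only on first access, then cache;
                # reply format simplified for illustration
                reply = self.app.qubesd_call(
                    self.name, 'admin.vm.property.Get', 'klass')
                self._klass = reply.decode().split(' ')[-1]
            return self._klass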
Booting a VM from cdrom requires attaching the device before VM
startup, which is possible only in persistent mode. But since
qvm-start --cdrom adds a cdrom only temporarily, use the new
update_persistence() function to convert the assignment to a temporary
one.
Fixes QubesOS/qubes-issues#3055
This may be confusing; for example one may think that
`qvm-prefs --unset vmname netvm` will make vmname network-disconnected.
This type of mistake may have severe security consequences, so better
to drop those option names.
QubesOS/qubes-issues#3002
cc @rootkovska
Add an option to uniformly start a new DispVM from either a VM or
Dom0. This uses DispVMWrapper, which translates it to either a qrexec
call to $dispvm, or (in dom0) to the appropriate Admin API call to
create a fresh DispVM first.
This requires abandoning registering --all and --exclude by
QubesArgumentParser, because we need to add --dispvm as mutually
exclusive with those two. But actually handling those two options is
still done by QubesArgumentParser.
This also updates the man page and tests.
Fixes QubesOS/qubes-issues#2974
When a data block is smaller than 4096 bytes (and no EOF is reached),
python's io.read() will call read(2) again to get more data. This may
deadlock if the other end of the connection writes anything only after
receiving data (which is the case for qubes.Filecopy).
Disable this buffering by using the syscall wrappers directly. To not
affect performance that much, increase the buffer size to 64k.
Fixes QubesOS/qubes-issues#2948
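The copy loop then boils down to the raw syscall wrappers, roughly:

    import os

    BUF_SIZE = 65536  # 64k, to offset the cost of a syscall per chunk

    def copy_fd(src_fd, dst_fd):
        # os.read returns as soon as *any* data is available, unlike
        # io.read() which may block refilling its buffer and deadlock
        # a peer that answers only after receiving data
        while True:
            data = os.read(src_fd, BUF_SIZE)
            if not data:  # EOF
                break
            while data:
                written = os.write(dst_fd, data)
                data = data[written:]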
* devel-6:
qvm-ls: fix total VM size reporting
doc: update manpage of qvm-service
tools: qvm-service tool
tests: too much copy&paste
features: serialize True as '1'
tools/qvm-start-gui: add --force-stubdomain options
tools/qvm-shutdown: fix help message
tools/qvm-shutdown: drop --force option, it isn't supported anymore
The new qvm-backup tool can either use a pre-existing backup profile
(--profile), or - when running in dom0 - create a new one based on the
options used (--save-profile).
This commit adds the tool itself, updates its man page, and adds tests
for it.
Fixes QubesOS/qubes-issues#2931
* devel-2-qvm-run-1:
Make pylint happy
tools/qvm-run: fix handling EOF
tests: mark qvm-run tests with "expected failure"
tools/qvm-run: fix handling copying stdin to the process
Launch the stdin copy loop in a separate process
(multiprocessing.Process) and terminate it when the target process is
terminated. Another idea here was threads, but there is no API to kill
a thread waiting on read().
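Schematically (stream wiring simplified; `proc` is a hypothetical
popen-style target process):

    import multiprocessing
    import shutil
    import sys

    def copy_stdin(dst):
        # runs in a child process: a process blocked in read() can be
        # terminate()d, a thread blocked in read() cannot be killed
        shutil.copyfileobj(sys.stdin.buffer, dst)
        dst.close()

    # copier = multiprocessing.Process(target=copy_stdin,
    #                                  args=(proc.stdin,))
    # copier.start()
    # proc.wait()
    # copier.terminate()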
VMs can have runtime dependencies - for example it isn't possible to
shut down a netvm used by some other running VM(s). Since client-side
tools may not have full knowledge of the rules enforcing those
dependencies (for example they may not have access to the 'netvm'
property), implement a best-effort approach:
1. Try to shut down all requested VMs
2. For those where the shutdown request succeeded, wait for the actual
shutdown
3. For the others - go back to step 1
And loop until all VMs are shut down, or all shutdown requests fail.
Do not close the event loop in the utility function - handle it only
in main(). For this reason, change the appropriate functions to
coroutines.
Fixes QubesOS/qubes-issues#2865
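A sketch of the retry loop described above (the waiting is simplified
to polling here; the real code uses the event loop):

    import asyncio

    async def shutdown_all(vms):
        remaining = set(vms)
        while remaining:
            requested = set()
            for vm in remaining:
                try:
                    vm.shutdown()    # step 1: request everywhere
                    requested.add(vm)
                except Exception:    # refused, e.g. still someone's netvm
                    pass
            if not requested:
                raise RuntimeError('all shutdown requests failed')
            for vm in requested:     # step 2: wait for actual shutdown
                while vm.is_running():
                    await asyncio.sleep(1)
            remaining -= requested   # step 3: retry the rest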
Don't fail the whole process when "just" the appmenus import fails.
But if the data import fails, remove the VM.
Also update for vm.run_service_for_stdio raising CalledProcessError.
Don't start the GUI daemon for a given VM twice when qvm-start-gui was
started during VM startup (while waiting for qrexec startup). This is
especially common while running tests.
Report a failed qubes.SetMonitorLayout as a warning (instead of an
unhandled exception).
Clear VM cache on qubesd reconnect.
Fix logging.
This way we don't need a separate admin.vm.Clone call, which is tricky
to handle properly with policy.
A VM may not have access to all the properties and other metadata, so
add an ignore_errors argument for a best-effort approach (copy what is
possible). In any case, a failure to clone the VM data fails the whole
operation. When the operation fails, the VM is removed.
While at it, allow specifying an alternative VM class - this allows
morphing one VM into another (for example AppVM -> StandaloneVM).
Adjust the qvm-clone tool and tests accordingly.
QubesOS/qubes-issues#2622
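The best-effort property copy is, schematically (the listing call is
an approximation):

    def clone_properties(src, dst, ignore_errors=False):
        # copy what is possible - the caller may not have access to
        # every property of the source VM
        for prop in src.property_list():
            try:
                setattr(dst, prop, getattr(src, prop))
            except Exception:
                if not ignore_errors:
                    raise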
Enable the 'qrexec' VM feature to wait for qrexec initialization - it
is required to call the qubes.PostInstall service. If VM startup
fails, assume there is no qrexec and drop that feature.
Since the Admin API, qvm-ls takes a long time to complete. Therefore,
Corporate Headquarters commanded that an Enterprise Spinner is to be
implemented and mandated its use unto us.
We take amusement from its endless gyrations.
Since the migration to the Admin API, qvm-ls no longer has the list of
all VM properties in advance, so it can't really validate the fields
list. Simply assume that unknown columns are properties.
Don't terminate qvm-run on any SIGCHLD; check whether the process
we're waiting for has finished.
Currently the only situation where this breaks is a test (which starts
an additional process, whose SIGCHLD may be caught here), but let's
not assume that much (that only one process is started) about the
environment.
It is supported only from dom0, but it's still useful to have, to save
on simultaneous vchan connections (only waiting for
MSG_DATA_EXIT_CODE). This is especially important for Windows VMs, as
qrexec-agent there has a pretty low limit on simultaneous connections
(about 20).
Make qvm-run use it.
Do not color qvm-run diagnostic messages, but also avoid ANSI control
sequences in logs. While at it, do not print 'Running ...' message when
--pass-io is used.