Add an option to uniformly start a new DispVM from either a VM or dom0. This
uses DispVMWrapper, which translates it to either a qrexec call to $dispvm,
or (in dom0) to the appropriate Admin API call to create a fresh DispVM
first.
This requires abandoning registering --all and --exclude by
QubesArgumentParser, because we need to make --dispvm mutually exclusive
with those two. But actually handling those two options is still done by
QubesArgumentParser.
This also updates the man page and tests.
Fixes QubesOS/qubes-issues#2974
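
A minimal sketch of the idea (the helper name, the in_dom0 flag and the exact
Admin API method used here are assumptions for illustration, not the shipped
DispVMWrapper): in dom0 the fresh DispVM has to be requested explicitly from
qubesd, while inside a VM the qrexec policy creates it when the call targets
'$dispvm'.

    import subprocess

    import qubesadmin

    def run_service_in_new_dispvm(service, in_dom0):
        if in_dom0:
            app = qubesadmin.Qubes()
            # dom0 side: ask qubesd for a fresh DispVM first (method name
            # assumed here), run the service in it, then kill it
            name = app.qubesd_call('dom0',
                                   'admin.vm.CreateDisposable').decode()
            try:
                proc = app.domains[name].run_service(service)
                proc.communicate()
                return proc.returncode
            finally:
                app.qubesd_call(name, 'admin.vm.Kill')
        # VM side: plain qrexec call to the special '$dispvm' target; the
        # qrexec policy creates and later destroys the DispVM
        return subprocess.call(
            ['/usr/lib/qubes/qrexec-client-vm', '$dispvm', service])
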
When a data block is smaller than 4096 bytes (and EOF is not reached),
Python's io.read() will call read(2) again to get more data. This may
deadlock if the other end of the connection writes anything only after
receiving data (which is the case for qubes.Filecopy).
Disable this buffering by using the syscall wrappers directly. To limit the
performance impact, increase the buffer size to 64k.
Fixes QubesOS/qubes-issues#2948
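
A simplified sketch of such a copy loop, assuming plain file descriptors on
both ends: os.read() returns after a single read(2) with whatever data is
available, instead of looping until the requested amount is collected the way
the io layer does, so the peer waiting for data is never starved.

    import os

    BUF_SIZE = 65536  # larger buffer to offset the lack of io-layer buffering

    def copy_until_eof(src_fd, dst_fd):
        while True:
            data = os.read(src_fd, BUF_SIZE)   # one read(2), no retry loop
            if not data:                       # EOF
                break
            while data:                        # handle partial writes
                written = os.write(dst_fd, data)
                data = data[written:]
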
Launch the stdin copy loop in a separate process (multiprocessing.Process)
and terminate it when the target process terminates.
Another idea considered here was threads, but there is no API to kill a
thread waiting on read().
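
A rough illustration of the approach, with a stand-in 'cat' as the target
process (qvm-run itself feeds a qrexec connection instead); it relies on the
default 'fork' start method on Linux so the pipe descriptor is inherited:

    import multiprocessing
    import os
    import subprocess
    import sys

    def copy_stdin(dst_fd):
        # unbuffered copy loop, as in the previous commit
        while True:
            data = os.read(sys.stdin.fileno(), 65536)
            if not data:
                break
            os.write(dst_fd, data)
        os.close(dst_fd)

    def main():
        proc = subprocess.Popen(['cat'], stdin=subprocess.PIPE)
        copier = multiprocessing.Process(target=copy_stdin,
                                         args=(proc.stdin.fileno(),))
        copier.start()
        proc.stdin.close()   # the child keeps its own copy of the fd
        proc.wait()
        copier.terminate()   # target is gone, stop the copy loop
        copier.join()
        return proc.returncode

    if __name__ == '__main__':
        sys.exit(main())
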
Don't terminate qvm-run on just any SIGCHLD; check whether the process we're
waiting for has actually finished.
Currently the only situation where this breaks is a test (which starts an
additional process whose SIGCHLD may be caught here), but let's not assume
that much about the environment (namely, that only one process is started).
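
A sketch of that check, assuming a subprocess.Popen handle for the target
process (the handler in qvm-run differs in detail): on SIGCHLD, react only if
the process being waited for has really exited; signals caused by other
children are ignored.

    import signal
    import subprocess

    proc = subprocess.Popen(['sleep', '5'])  # stand-in for the target process

    def sigchld_handler(signum, frame):
        if proc.poll() is not None:
            # our child really finished; safe to clean up and exit
            print('target exited with code', proc.returncode)
        # otherwise some other child died -- not our business

    signal.signal(signal.SIGCHLD, sigchld_handler)
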
It is supported only from dom0, but it's still useful to have, as it saves
on simultaneous vchan connections (only MSG_DATA_EXIT_CODE is waited for).
This is especially important for Windows VMs, where qrexec-agent has a
pretty low limit on simultaneous connections (about 20).
Make qvm-run use it.
Do not color qvm-run diagnostic messages, and also avoid ANSI control
sequences in logs. While at it, do not print the 'Running ...' message when
--pass-io is used.
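
For reference, a common way to keep ANSI escapes out of log files and
redirected output while still allowing color on an interactive terminal is to
gate the coloring on isatty(); this is a general sketch of that technique,
not a copy of the qvm-run change (which simply stops coloring these
messages):

    import logging
    import sys

    RED = '\033[0;31m'
    RESET = '\033[0m'

    class MaybeColorFormatter(logging.Formatter):
        def __init__(self, stream):
            super().__init__('%(message)s')
            self.use_color = stream.isatty()  # never color pipes or log files

        def format(self, record):
            msg = super().format(record)
            if self.use_color and record.levelno >= logging.WARNING:
                return RED + msg + RESET
            return msg

    handler = logging.StreamHandler(sys.stderr)
    handler.setFormatter(MaybeColorFormatter(sys.stderr))
    logging.getLogger('example').addHandler(handler)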