The qubes.VMShell service, used by qvm-run, expects the command on the
first input line. Previously, when --localcmd was used, the command
wasn't written anywhere and the local command was connected directly to
the qubes.VMShell service, so the first line of its output was
interpreted as the command.
Fix this by starting the local command separately, after sending the
command to the qubes.VMShell service.
While at it, unify handling of shell commands and service calls in the
process. vm.run_service(..., localcmd=) isn't that useful in the
general case, because for qubes.VMShell the caller first needs to send
the command before starting the local process. Since the qvm-run tool
needs to start localcmd manually anyway, don't use run_service's
localcmd= argument at all, so both kinds of calls go through the same
path.
There is a slight behavior change: previously localcmd was started
only after the service connection was established (for example, only if
qrexec policy allowed the call); now it is started in all cases.
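A minimal sketch of the resulting flow, assuming run_service() returns
a Popen-like object (as in qubesadmin); vm, command and local_command
are illustrative names, not the actual implementation:

    import subprocess

    # Start the service without letting run_service() wire up localcmd.
    proc = vm.run_service('qubes.VMShell',
                          stdin=subprocess.PIPE, stdout=subprocess.PIPE)
    # qubes.VMShell reads the command from the first input line...
    proc.stdin.write(command.encode() + b'\n')
    proc.stdin.flush()
    # ...and only then is the local command attached to the service I/O.
    local = subprocess.Popen(local_command, shell=True,
                             stdin=proc.stdout, stdout=proc.stdin)
    local.wait()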
Fixes QubesOS/qubes-issues#4040
'qvm-run --dispvm' cannot easily make a separate qubes.WaitForSession
call. Instead, if --gui is active, pass the new WaitForSession argument
to qubes.VMShell, which will do the equivalent.
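The argument travels in the qrexec service name using the standard
'+ARGUMENT' syntax; a sketch (args.gui is an illustrative name):

    service = 'qubes.VMShell'
    if args.gui:
        # qubes.VMShell+WaitForSession waits for the user session
        # before executing the command, replacing the separate
        # qubes.WaitForSession call.
        service += '+WaitForSession'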
The unit tests have been copied (in slightly adapted form) from commit
a620f02e2a.
Fixes QubesOS/qubes-issues#3012
Closes QubesOS/qubes-core-admin-client#49
The main process sometimes sets fd 1 to O_NONBLOCK, and since in the
terminal case fd 0 and fd 1 share the same open file description (and
O_NONBLOCK is a flag on the description, not the descriptor), this also
makes fd 0 non-blocking, causing qvm-run to crash with EAGAIN.
So just make the code work for both blocking and non-blocking stdin.
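A sketch of one way to do that, using a wait-and-retry loop around the
raw read (the helper name is illustrative):

    import os
    import select

    def read_stdin(size):
        # If fd 0 inherited O_NONBLOCK (shared with fd 1 in the
        # terminal case), os.read() raises BlockingIOError (EAGAIN);
        # wait until the fd is readable and retry instead of crashing.
        while True:
            try:
                return os.read(0, size)
            except BlockingIOError:
                select.select([0], [], [])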
Add an option to uniformly start a new DispVM from either a VM or
Dom0. This uses DispVMWrapper, which translates the call to either a
qrexec call to $dispvm, or (in dom0) to the appropriate Admin API calls
to create a fresh DispVM first.
This requires no longer registering --all and --exclude through
QubesArgumentParser, because we need to make --dispvm mutually
exclusive with those two. The actual handling of those two options is
still done by QubesArgumentParser.
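A sketch of the parser wiring; --exclude cannot share an argparse
mutually-exclusive group with --all (the two are meant to be combined),
so it is validated manually. Exact kwargs are illustrative:

    group = parser.add_mutually_exclusive_group()
    group.add_argument('--all', action='store_true', dest='all_domains',
                       help='run the command on all qubes')
    group.add_argument('--dispvm', nargs='?', const=True,
                       metavar='BASE_APPVM',
                       help='run the command in a fresh DispVM')
    parser.add_argument('--exclude', action='append', default=[],
                        help='exclude a qube (only with --all)')
    ...
    if args.exclude and not args.all_domains:
        parser.error('--exclude makes sense only with --all')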
This also updates the man page and tests.
Fixes QubesOS/qubes-issues#2974
When a data block is smaller than 4096 bytes (and no EOF is reached),
python's io.read() will call read(2) again to get more data. This may
deadlock if the other end of the connection writes anything only after
receiving data (which is the case for qubes.Filecopy).
Disable this buffering by using the syscall wrappers directly. To limit
the performance impact, increase the buffer size to 64k.
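A sketch of the resulting copy loop (function name illustrative):

    import os

    BUF_SIZE = 65536  # 64k, to offset the loss of io-layer buffering

    def copy_fd(src, dst):
        while True:
            # os.read() returns after a single read(2), even if fewer
            # than BUF_SIZE bytes arrived, so we never sit in a second
            # read(2) while the peer waits for our data.
            data = os.read(src, BUF_SIZE)
            if not data:
                break
            # write(2) may be partial; loop until the block is flushed.
            while data:
                written = os.write(dst, data)
                data = data[written:]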
Fixes QubesOS/qubes-issues#2948
Launch the stdin copy loop in a separate process
(multiprocessing.Process) and terminate it when the target process
terminates.
Another idea here was threads, but there is no API to kill a thread
waiting on read().
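A sketch of the approach; proc stands for the qrexec connection process
and the other names are illustrative:

    import multiprocessing
    import os

    def stdin_copy_loop(dst_fd):
        while True:
            data = os.read(0, 65536)
            if not data:
                break
            os.write(dst_fd, data)

    copier = multiprocessing.Process(
        target=stdin_copy_loop, args=(proc.stdin.fileno(),))
    copier.start()
    proc.wait()
    # Unlike a thread blocked in read(), a whole process can be killed.
    copier.terminate()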
Don't terminate qvm-run on any SIGCHLD; check whether the process
we're waiting for has finished.
Currently the only situation where this was broken is a test (which
starts an additional process, whose SIGCHLD may be caught here), but
let's not assume that much (that only one child is running) about the
environment.
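A sketch of the check; proc and cleanup_and_exit() are hypothetical
names standing for the watched process and the existing teardown path:

    import signal

    def sigchld_handler(signum, frame):
        # Other children (e.g. ones spawned by a test harness) also
        # deliver SIGCHLD; react only when our target process is done.
        if proc.poll() is not None:
            cleanup_and_exit()  # hypothetical teardown helper

    signal.signal(signal.SIGCHLD, sigchld_handler)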
It is supported only from dom0, but it's still useful to have, to save
on simultaneous vchan connections (waiting only for
MSG_DATA_EXIT_CODE). This is especially important for Windows VMs, as
qrexec-agent there has a pretty low limit on simultaneous connections
(about 20).
Make qvm-run use it.
Do not color qvm-run diagnostic messages, but also avoid ANSI control
sequences in logs. While at it, do not print the 'Running ...' message
when --pass-io is used.