* qubesos/pr/44:
Fix style else-return
tests: update qvm-template-process and qvm-remove tests
Add --force to manpage.
Avoid cloning installed_by_rpm
Print vm list before prompt
Use --force instead of --yes
Toggle installed_by_rpm in template tool
Fix error message grammar
Add --yes option and confirm prompt.
Make use of the better security of Qubes 4.x by using HVM by default. If
some VMs are incompatible with it (like MirageOS-based ones), the user can
always switch them to PV manually later.
This restores Qubes R3.2 behavior.
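For illustration only, switching such a VM back to PV with the qubesadmin
API could look roughly like this (the VM name is just a placeholder):

    import qubesadmin

    app = qubesadmin.Qubes()
    vm = app.domains['mirage-firewall']   # placeholder name
    vm.virt_mode = 'pv'                   # new VMs now default to HVM instead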
Before this patch, the following:
qvm-run -p sys-firewall 'echo -e "\e[0;46mcyan!" >&2' | wc -l
leaks the escape sequences through to the dom0 terminal via stderr,
in this case demonstrated by the ability to change the text color,
while it should remain fixed at red.
This can also be abused with xterm reporting sequences to cause input
to be sent to the dom0 terminal. This is potentially a security issue.
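One way to strip such sequences before they reach the dom0 terminal is a
filter along these lines - a sketch of the idea, not necessarily the exact
filter this patch applies:

    def sanitize(data):
        # keep printable bytes plus newline and tab;
        # drop other control bytes (ESC included)
        return bytes(b for b in data if b >= 0x20 or b in (0x09, 0x0a))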
The main process sometimes sets fd 1 to O_NONBLOCK, and since in the
terminal case fd 0 and 1 refer to the same terminal, this also results in
fd 0 being non-blocking, causing qvm-run to crash with EAGAIN.
So just make the code work for both blocking and non-blocking stdin.
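Handling both cases comes down to retrying after EAGAIN, roughly like this
sketch:

    import os
    import select

    def read_stdin_chunk(bufsize=4096):
        # works whether or not fd 0 has O_NONBLOCK set
        while True:
            try:
                return os.read(0, bufsize)
            except BlockingIOError:
                # non-blocking stdin with no data yet: wait until it is readable
                select.select([0], [], [])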
When some VMs time out on shutdown, the tool will try to kill all of them,
but at this point some of them may already be powered off (not all of them
hung during shutdown, only some). Handle this situation instead of
crashing. And add an appropriate test.
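A minimal sketch of the handling, assuming kill() raises
qubesadmin.exc.QubesVMNotStartedError for an already halted VM:

    import qubesadmin.exc

    def kill_remaining(vms):
        for vm in vms:
            try:
                vm.kill()
            except qubesadmin.exc.QubesVMNotStartedError:
                # this VM managed to power off on its own in the meantime
                pass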
Do not crash if the other object is of a completely different type. Return
False ("unequal") instead.
This crashed while preparing the list of devices in qubes-vm-boot-from-device.
Fixes QubesOS/qubes-issues#3182
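The fix boils down to an __eq__ along these lines; the class and attribute
names are illustrative, not the exact ones from the device code:

    def __eq__(self, other):
        if not isinstance(other, DeviceAssignment):
            return False             # "unequal", instead of raising on a foreign type
        return (self.backend_domain, self.ident) == \
            (other.backend_domain, other.ident)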
If EOF is reached on tar's stderr, stop reading it, even if the expected
data was not found. Log this event.
This may happen when tar outputs some fatal error instead of the file list.
os.path.splitext fails on a path without a proper file base name, like
'/something/..000'. Use plain string methods (rsplit) instead.
Fixes QubesOS/qubes-issues#3167
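To illustrate the difference on the path from above:

    >>> import os.path
    >>> os.path.splitext('/something/..000')
    ('/something/..000', '')
    >>> '/something/..000'.rsplit('.', 1)
    ['/something/.', '000']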
Packages had a missing dependency on python-dbus. Since DBusHandler isn't
used anywhere, drop it instead of introducing more dependencies.
Reported by @pietrushnic
QubesOS/qubes-issues#3179
vm.get_power_state() has a specifically documented 'NA' state for cases
when it's unable to get the VM's power state. Use this when the qrexec
policy forbids checking it.
Reported by @pietrushnic
Fixes QubesOS/qubes-issues#3179
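A minimal sketch of the fallback, assuming the denied query surfaces as
qubesadmin.exc.QubesDaemonNoResponseError (the exact exception type is an
assumption here):

    import qubesadmin.exc

    try:
        state = vm.get_power_state()
    except qubesadmin.exc.QubesDaemonNoResponseError:
        # qrexec policy denied the query - report the documented 'NA' state
        state = 'NA'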
1. The output of the `losetup` command contains a trailing `\n` - strip it.
2. Provide the read-only option explicitly - if the device info hasn't
propagated to qubesd yet, it will not be set automatically.
Fixes QubesOS/qubes-issues#3146
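For the first point, a minimal sketch (the image path is a placeholder):

    import subprocess

    out = subprocess.check_output(['losetup', '-f', '--show', '/path/to/volume.img'])
    loopdev = out.decode().strip()   # e.g. '/dev/loop0', without the trailing '\n'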
First, TemplateVM is not used anymore (see previous commit). Second, don't
hardcode on the client side that "only a TemplateVM can be a template for
any VM" (which actually isn't true: an AppVM can be a template for a DispVM).
Very few calls on the client side really need the VM class name. So, even in
non-blind mode, use just the QubesVM class, to avoid strange cases depending
on whether blind mode is enabled or not. Then, expose the VM class name in
the 'klass' property. If it is known at object creation time, cache it;
otherwise query qubesd on first access.
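A minimal sketch of this caching pattern (not the actual qubesadmin code;
get_vm_property is a hypothetical stand-in for the real property query):

    class QubesVM:
        def __init__(self, app, name, klass=None):
            self.app = app
            self.name = name
            self._klass = klass              # may be None if not known at creation time

        @property
        def klass(self):
            if self._klass is None:
                # hypothetical helper standing in for the real query to qubesd
                self._klass = self.app.get_vm_property(self.name, 'klass')
            return self._klass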
It may happen that when the client handles the event, the domain no longer
exists. This is common for DispVMs, for example, which get removed just
after shutdown.
This will cause some events to be dropped, but one can enable blind mode
to get them anyway (because it will not cause a KeyError, even if the
domain is already removed).
QubesOS/qubes-issues#3100
This allows performing actions on objects (VMs, storage etc.) without
listing them. This is useful when the calling VM has minimal permissions
and only selected actions are allowed.
This means that app.domains['some-name'] will not raise KeyError, even
when the domain does not exist. But performing an actual action (like
vm.start()) will fail in that case.
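A hedged usage sketch, assuming blind mode is toggled through a blind_mode
attribute on the app object (the attribute name is an assumption):

    import qubesadmin

    app = qubesadmin.Qubes()
    app.blind_mode = True            # assumed toggle for the behaviour described above
    vm = app.domains['some-name']    # no KeyError, even if the domain does not exist
    vm.start()                       # an actual action still fails in that case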
Booting a VM from a cdrom requires attaching the device before VM startup,
which is possible only in persistent mode. But since qvm-start --cdrom
adds a cdrom only temporarily, use the new update_persistence() function
to convert the assignment to a temporary one.
Fixes QubesOS/qubes-issues#3055
Abort the tar process after extracting the requested files - do not parse
the archive until the end (possibly tens of GB later).
Fixes QubesOS/qubes-issues#2986
This may be confusing; for example, one may think that
`qvm-prefs --unset vmname netvm` will make vmname network-disconnected.
This type of mistake may have severe security consequences, so better to
drop those option names.
QubesOS/qubes-issues#3002
cc @rootkovska
Add an option to uniformly start a new DispVM from either a VM or Dom0. This
uses DispVMWrapper, which translates it to either a qrexec call to $dispvm,
or (in dom0) to the appropriate Admin API call to create a fresh DispVM
first.
This requires abandoning the registration of --all and --exclude by
QubesArgumentParser, because we need to add --dispvm as mutually exclusive
with those two. The actual handling of those two options is still done by
QubesArgumentParser, though.
This also updates the man page and tests.
Fixes QubesOS/qubes-issues#2974
This is a wrapper to use the `$dispvm` target of a qrexec call, just like any
other service call in the qubesadmin module - using vm.run_service().
When running in dom0, qrexec-client-vm is not available, so the DispVM needs
to be created "manually", using the appropriate Admin API call
(admin.vm.CreateDisposable).
QubesOS/qubes-issues#2974
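Roughly, the two paths could look like this; the function name, the in_dom0
check and the cleanup-free flow are all simplifications, not the actual
DispVMWrapper code:

    import subprocess

    def run_service_in_dispvm(app, service, in_dom0):
        if in_dom0:
            # qrexec-client-vm is not available here: create the DispVM explicitly
            name = app.qubesd_call('dom0', 'admin.vm.CreateDisposable').decode()
            return app.domains[name].run_service(service)
        # inside a VM, qrexec itself resolves the '$dispvm' target
        return subprocess.Popen(['qrexec-client-vm', '$dispvm', service])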
When a data block is smaller than 4096 bytes (and no EOF is reached),
python's io.read() will call read(2) again to get more data. This may
deadlock if the other end of the connection writes anything only after
receiving data (which is the case for qubes.Filecopy).
Disable this buffering by using the syscall wrappers directly. To not affect
performance too much, increase the buffer size to 64k.
Fixes QubesOS/qubes-issues#2948
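The copy loop then boils down to something like this minimal sketch
(partial-write handling included for completeness):

    import os

    def copy_stream(src_fd, dst_fd, bufsize=65536):
        while True:
            buf = os.read(src_fd, bufsize)   # returns whatever is available, no refill
            if not buf:
                break                        # EOF
            while buf:
                written = os.write(dst_fd, buf)
                buf = buf[written:]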
* devel-6:
qvm-ls: fix total VM size reporting
doc: update manpage of qvm-service
tools: qvm-service tool
tests: too much copy&paste
features: serialize True as '1'
tools/qvm-start-gui: add --force-stubdomain option
tools/qvm-shutdown: fix help message
tools/qvm-shutdown: drop --force option, it isn't supported anymore
The new qvm-backup tool can either use a pre-existing backup profile
(--profile), or - when running in dom0 - create a new one based on the
options used (--save-profile).
This commit adds the tool itself, updates its man page, and adds tests for
it.
Fixes QubesOS/qubes-issues#2931
Do not check for the qubesd socket at module import time, because if qubesd
is not running at this precise time, it will lead to a wrong choice, and a
weird error message in consequence (looking for qrexec-client-vm in dom0).
Fixes QubesOS/qubes-issues#2917
* devel-2-qvm-run-1:
Make pylint happy
tools/qvm-run: fix handling EOF
tests: mark qvm-run tests with "expected failure"
tools/qvm-run: fix handling copying stdin to the process
Since qvm-run uses multiprocessing.Process now, stdin sent to it is
processed in a separate process and doesn't come back to
TestApp.actual_calls (self.app). Annotate the tests for now, to be fixed
later.
The most important part is fixing resize handling - call size_func
before data_func, but after tar gets the initial data (and the output file
size).
Other than that, it makes the process a little faster.
QubesOS/qubes-issues#1214
Opt for simple one-line error messages, instead of a meaningless stack
trace (most of the time it's qubesd responding with an error, so the
stack trace of the actual problem is elsewhere).
This will be less realistic (a private.img of 2MB?!), but it makes the tests
much quicker. And since tar is used to make files sparse, we don't really
test multi-part archives anyway.
Based on the "backup compatibility" tests, which manually assemble the
backup. This is because we don't have access to the actual backup creation
code here.
QubesOS/qubes-issues#1214
Launch the stdin copy loop in a separate process (multiprocessing.Process)
and terminate it when the target process is terminated.
Another idea here was threads, but there is no API to kill a thread
waiting on read().
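A rough sketch of the pattern; 'proc' stands in for the already-started
target process and is a placeholder here:

    import multiprocessing
    import os

    def _copy_stdin(out_fd):
        # runs in the child; it may block on read() without tying up the parent
        while True:
            data = os.read(0, 65536)
            if not data:
                break
            os.write(out_fd, data)

    copier = multiprocessing.Process(target=_copy_stdin, args=(proc.stdin.fileno(),))
    copier.start()
    proc.wait()
    copier.terminate()   # unlike a thread, a process stuck in read() can be killed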
VMs can have runtime dependencies - for example, it isn't possible to shut
down a netvm that is used by some other running VM(s). Since client-side
tools may not have full knowledge of the rules enforcing those dependencies
(for example, they may not have access to the 'netvm' property), implement a
best-effort approach (sketched below):
1. Try to shut down all requested VMs
2. For those where the shutdown request succeeded, wait for the actual shutdown
3. For the others - go back to step 1
Loop until all VMs are shut down, or all shutdown requests fail.
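A minimal sketch of that loop, with wait_for_shutdown as a hypothetical
helper that blocks until a single VM is halted, and assuming failed requests
raise qubesadmin.exc.QubesException:

    import qubesadmin.exc

    def shutdown_all(vms, wait_for_shutdown):
        remaining = set(vms)
        while remaining:
            accepted = set()
            for vm in remaining:
                try:
                    vm.shutdown()
                    accepted.add(vm)
                except qubesadmin.exc.QubesException:
                    pass                 # likely still used by another running VM
            if not accepted:
                raise RuntimeError('all shutdown requests failed')
            for vm in accepted:
                wait_for_shutdown(vm)    # hypothetical helper
            remaining -= accepted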