The Pool.volumes property is implemented in the base class; individual drivers
should provide a list_volumes() method as the backend for that property.
Fix this in the LinuxKernel pool.
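A minimal sketch of the intended split, with simplified class and attribute names (the real base class has more machinery):
    class Pool:
        def list_volumes(self):
            # each storage driver overrides this with its own listing
            raise NotImplementedError('list_volumes() not implemented')

        @property
        def volumes(self):
            # generic property in the base class, backed by the driver method
            return self.list_volumes()

    class LinuxKernelPool(Pool):
        def __init__(self, kernel_volumes):
            self._kernel_volumes = kernel_volumes  # hypothetical dict of volumes

        def list_volumes(self):
            return list(self._kernel_volumes.values())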
Similar to admin.vm.volume.List. There are still other
admin.pool.volume.* methods missing, but let's start with just this one.
Those taking both a pool name and a volume id argument may need some more
thought.
Don't rely on the volume configuration alone, it's easy to miss updating
it. Specifically, the import_volume() function didn't update the size based
on the source volume.
The size that the actual VM sees is based on the
file size, and so is the filesystem inside. An outdated size property can
lead to data loss if the user performs an action based on an incorrect
assumption - like extending the size, which may actually shrink the volume.
Fixes QubesOS/qubes-issues#4821
Handle this case specifically, as way too many users ignore the message
during installation and complain it doesn't work later.
Name the problem explicitly, instead of pointing at the libvirt error log.
Fixes QubesOS/qubes-issues#4689
admin.pool.UsageDetails reports the usage data, unlike
admin.pool.Info, which should report the config/unchangeable data.
At the moment admin.pool.Info still reports usage, to maintain
compatibility, but once all relevant tools are updated,
it should return just the configuration data.
Added a usage_details method to the Pool class
(returns a dictionary with detailed information
on pool usage) and an LVM implementation that also returns
metadata info.
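A rough sketch of the method's shape, with illustrative key names (the exact keys used by the LVM driver may differ):
    class Pool:
        usage = 0   # bytes used, filled in by the driver
        size = 0    # bytes total, filled in by the driver

        def usage_details(self):
            # base implementation: only overall data usage
            return {'data_usage': self.usage, 'data_size': self.size}

    class ThinPool(Pool):
        metadata_usage = 0
        metadata_size = 0

        def usage_details(self):
            details = super().usage_details()
            # LVM thin pools additionally know how much pool metadata is used
            details['metadata_usage'] = self.metadata_usage
            details['metadata_size'] = self.metadata_size
            return details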
Needed for QubesOS/qubes-issues#5053
* origin/pr/287:
app: fix docstrings PEP8 refactor
tests: remove iptables_header content in test_622_qdb_keyboard_layout
tests: add test for guivm and keyboard_layout
gui: simplify setting guivm xid and keyboard layout
Make pylint happier
gui: set keyboard layout from feature
Handle GuiVM properties
Make PEP8 happier
Similar to domain-start-failed, add an event fired when
domain-pre-shutdown was fired but the actual operation failed.
Note it might not catch all the cases, as shutdown() may be called with
wait=False, which means it won't wait for the actual shutdown. In that
case, a timeout won't result in a domain-shutdown-failed event.
QubesOS/qubes-issues#5380
Move this functionality from our custom runner (qubes.tests.run)
into the base test class. This is very useful for correlating logs, so let's
have it with the nose2 runner too.
Prevent starting a VM while it's being removed. Something could try to
start a VM just after it has been killed but before it is removed (the Whonix
example from the previous commit is a real-life case). The window specifically
is between the kill() call and removing the VM from the collection
(`del app.domains[vm.qid]`). Grab the startup_lock for the whole operation
to prevent it.
Check early (but after grabbing the startup_lock) whether the VM hasn't just
been removed. This could happen if someone grabs its reference from another
place (netvm of something else?) or just before removing it.
This commit makes the simple removal from the collection (done as the
first step in the admin.vm.Remove implementation) an efficient way to block
further VM startups, without introducing extra properties.
For this to be effective, removing from the collection needs to happen
with the startup_lock held. Modify admin.vm.Remove accordingly.
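A minimal sketch of the locking described above (simplified names, the real code has more steps):
    import asyncio

    class QubesVMSketch:
        def __init__(self, app, qid):
            self.app = app
            self.qid = qid
            self.startup_lock = asyncio.Lock()

        async def start(self):
            async with self.startup_lock:
                # refuse to start a VM that has already been removed
                if self.qid not in self.app.domains:
                    raise RuntimeError('domain is being removed')
                # ... actual startup happens here, still under the lock

        async def remove(self):
            # admin.vm.Remove: drop from the collection while holding the
            # same lock, so no start() can slip in after the kill
            async with self.startup_lock:
                del self.app.domains[self.qid]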
There are cases when the destination domain doesn't exist when the call gets
to qubesd. Namely:
1. The call comes from dom0, which bypasses qrexec policy
2. Domain was removed between checking the policy and here
Handle it the same way as if the domain hadn't existed at the policy
evaluation stage either - i.e. refuse the call.
On the client side it doesn't change much, but on the server side it
avoids ugly, useless tracebacks in the system journal.
Fixes QubesOS/qubes-issues#5105
Allow manual inspection of the test environment after a test fails. This is
similar to the --do-not-clean option we had in R3.2.
The decorator should be used only while debugging and should never be
applied to code committed into the repository.
Domain shutdown handling may take an extended amount of time, especially on
a slow machine (all the LVM teardown etc.). Take care of it by
synchronizing using vm.startup_lock, instead of increasing a constant
delay. This way, the shutdown event handler only needs to be started within
3s, not finish within that time.
When the libvirt daemon is restarted, qubesd attempts to re-connect to the
new instance transparently (through the virConnect object wrapper). But the
code lacked re-registering event handlers.
Fix this by adding a reconnect callback argument to virConnectWrapper, to
be called after the new connection is established. This callback will
additionally get the old connection as an argument, if any cleanup is
needed. The old connection is closed just after the callback returns.
Use this to re-register the event handler, but also unregister the old handler
first. While a full unregister won't work since the old libvirt daemon
instance is dead already, it will still clean up client structures.
Since the old libvirt connection is closed now, also adjust the domain
reconnection logic to handle a stale connection object. In that case
the isAlive() call throws an exception.
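A sketch of the reconnect flow, assuming a simplified wrapper (real argument names and the event-registration details differ):
    import libvirt

    class ConnectionWrapperSketch:
        def __init__(self, uri, reconnect_cb=None):
            self._uri = uri
            self._reconnect_cb = reconnect_cb
            self._conn = libvirt.open(uri)

        def _alive(self):
            try:
                return bool(self._conn.isAlive())
            except libvirt.libvirtError:
                # a stale connection object may throw instead of returning False
                return False

        def reconnect_if_dead(self):
            if self._alive():
                return
            old_conn, self._conn = self._conn, libvirt.open(self._uri)
            if self._reconnect_cb is not None:
                # let the caller re-register event handlers and clean up
                # anything tied to the old (now dead) connection
                self._reconnect_cb(old_conn)
            old_conn.close()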
Fixes QubesOS/qubes-issues#5303
Qubesd wrongly required the default_template global property to be not None.
Furthermore, even without hard failure set, the require_property method
raised an exception in case of a property having an incorrect None value.
It now logs an error message instead, as designed.
Fixes QubesOS/qubes-issues#5326
This fixes an invalid response generated by get_timezone when the time zone is composed of 3 parts, for example:
America/Argentina/Buenos_Aires
America/Indiana/Indianapolis
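A sketch of a parse that keeps every component after the zoneinfo directory (assuming the zone is read from the /etc/localtime symlink; the actual helper may differ):
    import os

    def get_timezone(localtime='/etc/localtime'):
        # e.g. '/usr/share/zoneinfo/America/Argentina/Buenos_Aires'
        target = os.readlink(localtime)
        marker = 'zoneinfo/'
        idx = target.find(marker)
        if idx == -1:
            return None
        # keep everything after 'zoneinfo/', however many parts it has
        return target[idx + len(marker):]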
Update utils.py
Give the raw cpu_time value, instead of one normalized to the number of vcpus,
as documented.
Move the normalization to the cpu_usage calculation. At the same time, add
cpu_usage_raw without it, in case anyone needs it.
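Roughly, with hypothetical argument names (cpu_time in nanoseconds, as libvirt reports it):
    def cpu_usage_raw(cpu_time_now, cpu_time_prev, wall_clock_delta):
        # fraction of a single CPU used over the measurement window
        return (cpu_time_now - cpu_time_prev) / (wall_clock_delta * 1e9)

    def cpu_usage(cpu_time_now, cpu_time_prev, wall_clock_delta, vcpus):
        # normalized variant: divide by the number of vcpus here,
        # instead of normalizing cpu_time itself
        return cpu_usage_raw(cpu_time_now, cpu_time_prev,
                             wall_clock_delta) / max(vcpus, 1)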
QubesOS/qubes-issues#4531
* origin/pr/273:
tests: check importing empty data into ReflinkVolume
tests: check importing empty data into ThinVolume
tests: check importing empty data into FileVolume
tests: improve cleanup after LVM tests
During regular VM shutdown, the VM should sync() anyway. (And
admin.vm.volume.Import does fdatasync(), which is also fine.) But let's
be extra careful.
This is needed as a consequence of d8b6d3ef ("Make add_pool/remove_pool
coroutines, allow Pool.{setup,destroy} as coroutines"), but there hasn't
been any problem so far because no storage driver implemented pool
setup() as a coroutine.
Revert to using umount -l in the storage tests cleanup. With fixed permissions
in 4234fe51 "tests: fix cleanup after reflink tests", it shouldn't cause
issues anymore, but apparently on some systems test cleanup fails
otherwise.
Reported by @rustybird
This reverts commit b6f77ebfa1.
There were (at least) five ways for the volume's nominal size and the
volume image file's actual size to desynchronize:
- loading a stale qubes.xml if a crash happened right after resizing the
image but before saving the updated qubes.xml (-> previously fixed)
- restarting a snap_on_start volume after resizing the volume or its
source volume (-> previously fixed)
- reverting to a differently sized revision
- importing a volume
- user tinkering with image files
Rather than trying to fix these one by one and hoping that there aren't
any others, override the volume size getter itself to always update from
the image file size. (If the getter is called through the storage API, it
takes the volume lock to avoid clobbering the nominal size when resize()
is running concurrently.)
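A minimal sketch of such a getter, under the assumption that the image file is the authoritative source (names simplified):
    import os

    class VolumeSizeSketch:
        def __init__(self, img_path, nominal_size):
            self._img_path = img_path
            self._size = nominal_size   # value loaded from qubes.xml

        @property
        def size(self):
            # prefer the real image size; keep the stored value only when
            # there is currently no image file (e.g. stopped volatile volume)
            try:
                self._size = os.stat(self._img_path).st_size
            except FileNotFoundError:
                pass
            return self._size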
And change the volume lock from an asyncio.Lock to a threading.Lock -
locking is now handled before coroutinization.
This will allow the coroutinized resize() and a new *not* coroutinized
size() getter from one of the next commits ("storage/reflink: preferably
get volume size from image size") to both run under the volume lock.
Successfully resize volumes without any currently existing image file,
e.g. cleanly stopped volatile volumes: Just update the nominal size in
this case.
Calling a qrexec service dom0->dom0 can be useful when handling things
that can run either in dom0 or in another domain. This makes the interface uniform.
Example use cases include GUI VM and Audio VM.
Ask qubesd for the admin.vm.Console call. This allows intercepting it with
the admin-permission event. While at it, move the tty path extraction to
Python, where the libvirt domain object is already available.
Fixes QubesOS/qubes-issues#5030
The initializer of the DispVM class first calls the initializer of the
QubesVM class, which among other things sets properties as specified in
kwargs, and then copies over the properties of the template. This can
lead to properties passed explicitly by the caller through kwargs being
overwritten.
Hence, only clone those properties of the template that are still set to
their defaults in the DispVM.
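A sketch of the fixed order of operations, with hypothetical helpers (the real property machinery is more involved):
    def init_dispvm_properties(dispvm, template, kwargs):
        # 1. apply what the caller asked for explicitly
        for name, value in kwargs.items():
            setattr(dispvm, name, value)
        # 2. clone from the template only what was NOT set above, so the
        #    template cannot overwrite caller-provided values
        for name, value in template_properties(template).items():
            if name not in kwargs:
                setattr(dispvm, name, value)

    def template_properties(template):
        # hypothetical helper returning the template's property values
        return dict(vars(template))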
Fixes QubesOS/qubes-issues#4556
If Firefox is started for the first time, it will open both the requested
page and its welcome page. This means closing the window will trigger a
confirmation about closing multiple tabs. Handle this.
Disk usage may change dynamically, not only at VM start/stop. Refresh the
size cache before checking the usage property, but no more often than once every
30 sec (the refresh interval of the disk space widget).
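A sketch of such throttling (interval and attribute names are illustrative):
    import time

    class PoolUsageCache:
        REFRESH_INTERVAL = 30  # seconds, matches the disk space widget

        def __init__(self, read_usage):
            self._read_usage = read_usage   # callable doing the expensive query
            self._cached = None
            self._last_refresh = 0.0

        @property
        def usage(self):
            now = time.monotonic()
            if (self._cached is None
                    or now - self._last_refresh > self.REFRESH_INTERVAL):
                self._cached = self._read_usage()
                self._last_refresh = now
            return self._cached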
Fixes QubesOS/qubes-issues#4888
If unmount is going to fail, let it do so explicitly, instead of hiding
the failure now and observing it later at rmdir.
And if it fails, let's report which process is using that mount point.
The Xenial environment has much newer GTK/GLib. For those tests to run, a few
more changes are needed:
- relevant GTK packages installed
- X server running (otherwise GTK terminates the process on module
import...)
- enable system site-packages in the virtualenv set up by Travis
Global properties should be loaded in stage 3, mark them as such.
Otherwise they are not loaded at all.
This applies to stats_interval and check_updates_vm. Others were
correct.
Fixes QubesOS/qubes-issues#4856
If the kernel package ships a default-kernelopts-common.txt file, use that
instead of the hardcoded Linux-specific options.
For the Linux kernel it may include the xen_scrub_pages=0 option, but only if
the initrd shipped with this kernel re-enables this option later.
QubesOS/qubes-issues#4839 QubesOS/qubes-issues#4736
Return a meaningful value for kernels_dir if the VM has no 'kernel' volume.
Right now it's mostly useful for tests, but it could also be used for new
VM classes which don't have modules.img, but still use a dom0-provided
kernel.
First of all, do not try to call those services in VMs that don't have qrexec
installed - for example Windows VMs without Qubes tools.
Then, even if a service call fails for any other reason, only log it but
do not prevent other services from being called. A single uncooperative
VM should generally only be able to hurt itself, not break other VMs
during suspend.
Fixes QubesOS/qubes-issues#3489
Since we have more reliable domain-shutdown event delivery (it is
guaranteed to be delivered before a subsequent domain start, even if
libvirt fails to report it), it's better to move the detach_network call to
the domain-shutdown handler. This way, the frontend domain will see immediately
that the backend is gone. Technically it already knows that, but at least
Linux does not propagate that anywhere, keeping the interface up,
seemingly operational, leading to various timeouts.
Additionally, by avoiding an attach_network call _just_ after a
detach_network call, it avoids various race conditions (like calling
cleanup scripts after the new device has already been connected).
While libvirt itself still doesn't clean up devices when the backend
domain is gone, this will emulate it within qubesd.
Fixes QubesOS/qubes-issues#3642 Fixes QubesOS/qubes-issues#1426
Pool setup/destroy may be a time-consuming operation; allow them to be
asynchronous. Fortunately add_pool and remove_pool are used only through
the Admin API, so the change does not require modification of other
components.
Boolean properties require a specific setter to properly handle the literal
"True" and "False" values. Previously this required all bool properties to
include 'setter=qubes.property.bool' in addition to 'type=bool'.
This fixes loading some boolean properties from qubes.xml. Specifically,
at least include_in_backups on the DispVM class lacked the setter, which
resulted in the property being reset to True automatically on qubesd
restart.
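The setter boils down to something like this sketch (the real qubes.property.bool has a different signature):
    def bool_setter(value):
        # plain bool(value) would turn the string "False" into True,
        # so map the textual forms explicitly
        if isinstance(value, str):
            lowered = value.lower()
            if lowered in ('true', '1'):
                return True
            if lowered in ('false', '0'):
                return False
            raise ValueError('invalid bool value: {!r}'.format(value))
        return bool(value)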
Fixes QubesOS/qubes-issues#4831
If default-kernelopts-pci.txt is present, it will override the default
built-in kernelopts for VMs with a PCI device assigned.
Similarly, if default-kernelopts-nopci.txt is present, it will override
the default kernelopts for VMs without PCI devices.
For template-based VMs, the kernelopts of the template takes precedence over
default-kernelopts-nopci.txt, but not over default-kernelopts-pci.txt.
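A sketch of the resulting lookup order (file names as above; the fall-back to built-in defaults and the exact behaviour when a file is missing are simplified):
    import os

    def default_kernelopts(kernels_dir, has_pci, template_kernelopts=None):
        def read(name):
            path = os.path.join(kernels_dir, name)
            if os.path.exists(path):
                with open(path) as f:
                    return f.read().strip()
            return None

        if has_pci:
            # the pci variant wins even over the template's kernelopts
            opts = read('default-kernelopts-pci.txt')
            if opts is not None:
                return opts
        else:
            if template_kernelopts is not None:
                return template_kernelopts
            opts = read('default-kernelopts-nopci.txt')
            if opts is not None:
                return opts
        return ''  # built-in defaults (not shown)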
Fixes QubesOS/qubes-issues#4839
If a specific DVM template is used for a given DispVM, make new DispVMs
called from it use the same DVM template (unless explicitly overridden).
This prevents various isolation bypass cases, like using a chain of
DispVMs to access the network.
Look for the first updateable template up the template chain, instead
of going just one level up. This especially applies to the
DispVM -> AppVM -> TemplateVM case.
If a DispVM reports available updates, the 'updates-available'
flag should be set on the relevant TemplateVM, not the AppVM (*-dvm).
Include a test for the new case.
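A sketch of the chain walk (attribute names simplified):
    def find_updateable_template(vm):
        # DispVM -> AppVM -> TemplateVM: keep going up until a template
        # that is actually updateable is found
        template = getattr(vm, 'template', None)
        while template is not None:
            if getattr(template, 'updateable', False):
                return template
            template = getattr(template, 'template', None)
        return None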
Fixes QubesOS/qubes-issues#3736
Instead of checking whether the domain is still running/paused, try to kill it
anyway and ignore the appropriate exception. Otherwise the domain could die
between the check and the kill.
some-vm-root is a valid VM name, and in that case its volume can be
named some-vm-root-private. Do not let this confuse the revision listing;
check for an unexpected '-' in the volume revision number.
The proper solution would be to use a different separator, one that is not
allowed in VM names. But that would require migration code, which is
undesired in the middle of a stable release life cycle.
Fixes QubesOS/qubes-issues#4680
* tests-20181223:
tests: drop expectedFailure from qubes_desktop_run test
tests: grub in HVM qubes
tests: update dom0_update for new updates available flag
tests: regression test LVM listing code
tests/extra: wrap ProcessWrapper.wait() to be asyncio-aware
tests: adjust backupcompat for new maxmem handling
This commit resolves a bug which causes strings such as "350MiB" to be
rejected by parse_size, due to the fact that parse_size changes the case
of letters in the input string ("350MiB") to uppercase ("350MIB"), but
fails to do the same for the elements of the units conversion table.
The correction is simple: Apply the same case change to the units
table elements before comparison.
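A sketch of the fixed comparison, with an abbreviated units table:
    def parse_size(size):
        # abbreviated table; entries kept uppercase so they match the
        # uppercased input ('350MiB' -> '350MIB')
        units = [
            ('KIB', 1024), ('MIB', 1024 ** 2), ('GIB', 1024 ** 3),
            ('KB', 1000), ('MB', 1000 ** 2), ('GB', 1000 ** 3),
        ]
        size = size.strip().upper()
        for unit, multiplier in units:
            if size.endswith(unit):
                return int(size[:-len(unit)].strip()) * multiplier
        return int(size)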
The user of ExtraTestCase doesn't need to know anything about asyncio.
vm.run().wait() normally is a coroutine, so provide a wrapper that
handles asyncio.
This fixes an FD leak in the input proxy tests.
Since 4dc86310 "Use maxmem=0 to disable qmemman, add more automation to
it", the meminfo-writer service is not accessible directly. The maxmem
property is used to encode memory management instead.
- Two new methods: .features.check_with_adminvm() and
.check_with_template_and_adminvm(). Common code refactored.
- Two new AdminAPI calls to take advantage of the methods:
- admin.vm.feature.CheckWithAdminVM
- admin.vm.feature.CheckWithTemplateAndAdminVM
- Features manager moved to separate module in anticipation of features
on app object in R5.0. The attribute Features.vm renamed to
Features.subject.
- Documentation, tests.
* devel-20181205:
vm/dispvm: fix /qubes-vm-presistence qubesdb entry
vm/mix/net: prevent setting provides_network=false if qube is still used
tests: updates-available notification
tests/network: reduce code duplication
tests: listen on 'misc' socket too
First install test-pkg-1.0, then add test-pkg-1.1 to the repo and check if
the updates-available flag is set. Then install updates and check if the
flag is cleared.
QubesOS/qubes-issues#2009
The new property is meant for the management stack (Salt) to set which DVM
template should be used to maintain a given VM. Since the DispVM based on
it will be given ultimate control over the target VM (qubes.VMShell
service), it should be trusted. The one pointed to by default_dispvm
is not necessarily trusted.
The property defaults to the value from the template (if any), and then
to the global management_dispvm property. By default it is set to None.
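A sketch of the default chain (property name from this commit; attribute access simplified):
    def default_management_dispvm(vm, app):
        # template-based VM: inherit the template's value
        template = getattr(vm, 'template', None)
        if template is not None:
            return template.management_dispvm
        # otherwise fall back to the global property, which is None by default
        return app.management_dispvm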
Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Migrate the meminfo-writer=False service setting to maxmem=0 as the method to
disable dynamic memory management. Remove the service from the vm.features
dict in the process.
Additionally, translate any attempt to set the service.meminfo-writer
feature to either setting maxmem=0 or resetting it to the default (which
is memory balancing enabled, if supported by the given domain). This is to at
least partially avoid breaking existing tools using service.meminfo-writer as
a way to control dynamic memory management. This code does _not_ support
reading the service.meminfo-writer feature state to get the current state of
dynamic memory management, as that would require synchronizing with all
the factors affecting its value. One of the main reasons for migrating to
the maxmem=0 approach is to avoid the need for such synchronization.
QubesOS/qubes-issues#4480
Use maxmem=0 for disabling dynamic memory balancing, instead of the cryptic
service.meminfo-writer feature. Under the hood, the meminfo-writer service
is also set based on the maxmem property (directly in qubesdb, not in the
vm.features dict).
Having this as a property (not a "feature") allows sensible
handling of the default value. Specifically, disable it automatically if
it would otherwise crash a VM. This is the case for:
- a domain with PCI devices (PoD is not supported by Xen then)
- a domain without a balloon driver and/or the meminfo-writer service
The check for the latter is heuristic (assume the presence of 'qrexec' also
indicates balloon driver support), but it holds for currently
supported systems.
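The heuristic amounts to roughly this (hypothetical helper, inputs simplified to plain flags):
    def effective_maxmem(has_pci_devices, has_qrexec, requested_maxmem):
        if has_pci_devices:
            return 0   # PoD is not supported by Xen with PCI passthrough
        if not has_qrexec:
            return 0   # assume no balloon driver / meminfo-writer either
        return requested_maxmem   # maxmem=0 itself still disables qmemman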
This also allows more reliable control of the libvirt config: do not set
memory != maxmem, unless qmemman is enabled.
memory != maxmem only makes sense if qmemman is enabled for the given domain.
Besides wasting some domain resources on extra page tables
etc., for HVM domains this is harmful, because the maxmem-memory difference
is backed by a Populate-on-Demand pool, which - when depleted - will kill
the domain. This means a domain without a balloon driver will die as soon
as it tries to use more than its initial memory - but without the balloon
driver it sees maxmem memory and doesn't know about the lower limit.
Fixes QubesOS/qubes-issues#4135
It makes a lot of sense to call long-running operations in that event
handler, including calling back into the VM. Allow that by using
fire_event_async, not just fire_event.
Also, document the event.
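A sketch of the difference, with a simplified emitter (the real Emitter API has more features):
    import asyncio

    async def fire_event_async(handlers, event, **kwargs):
        # synchronous handlers run as before; coroutine handlers are awaited,
        # so they may perform long-running work such as calling back into the VM
        for handler in handlers:
            result = handler(event, **kwargs)
            if asyncio.iscoroutine(result):
                await result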
Commit 15cf593bc5 "tests/lvm: fix checking
lvm pool existence" attempted to fix handling '-' in pool name by using
/dev/VG/LV symlink. But those are not created for thin pools. Change
back to /dev/mapper, but include '-' mangling.
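The mangling follows the device-mapper convention of doubling '-' inside VG/LV names, roughly:
    def dm_device_path(volume_group, logical_volume):
        def mangle(name):
            return name.replace('-', '--')
        return '/dev/mapper/{}-{}'.format(mangle(volume_group),
                                          mangle(logical_volume))

    # e.g. dm_device_path('qubes_dom0', 'pool00') -> '/dev/mapper/qubes_dom0-pool00'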
Related QubesOS/qubes-issues#4332
Restore the old code for calculating the subdir within the archive. The new one
had two problems:
- it set '/' for an empty input subdir - which caused qubes.xml.000 to be
named '/qubes.xml.000' (and then converted to '../../qubes.xml.000');
among other things, this results in the wrong path being used for the
encryption passphrase
- it resolved symlinks, which breaks calculating the path for any symlinks
within the VM's directory (symlinks there should be treated as normal files
to be sure that the actual content is included in the backup)
This partially reverts 4e49b951ce.
Fixes QubesOS/qubes-issues#4493
vm.kill() will try to acquire vm.startup_lock, so it can't be called while
already holding it.
Fix this by extracting vm._kill_locked(), which expects the lock to be
already taken by the caller.
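A minimal sketch of the split (simplified; the real cleanup is more involved):
    import asyncio

    class VMKillSketch:
        def __init__(self):
            self.startup_lock = asyncio.Lock()

        async def kill(self):
            # public entry point: takes the lock itself
            async with self.startup_lock:
                await self._kill_locked()

        async def _kill_locked(self):
            # caller (kill() or admin.vm.Remove) must already hold startup_lock
            pass  # actual libvirt destroy + cleanup elided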
* origin/pr/239:
storage: fix NotImplementedError message for import_data()
storage/reflink: make resize()/import_volume() more readable
storage/reflink: unblock import_data() and import_data_end()
Try to collect more details about why the test failed. This will help
only if qvm-open-in-dvm exits early. On the other hand, if it hangs, or the
remote side fails to find the right editor (which results in a GUI error
message), this change will not provide any more details.
The first boot of a whonix-ws based VM takes an extended period of time, because
a lot of files need to be copied to the private volume. This takes even
more time when verbose logging through the console is enabled. Extend the
timeout for that.