Load integration tests from outside of the core-admin repository,
through entry points.
Create a wrapper for the VM object to keep very basic compatibility with
tests written for core2. This means that if a test uses only basic
functionality (vm.start(), vm.run()), the same test will work for both
core2 and core3. This is especially important for app-* repositories,
where the same version serves multiple Qubes branches.
This also hides asyncio usage from test writers.
See QubesOS/qubes-issues#1800 for details on original feature.
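A minimal sketch of such a wrapper, assuming core3 methods like start()
and run() are coroutines (names and signatures here are illustrative,
not the actual test code):

    import asyncio

    class VMWrapper(object):
        """core2-style facade over a core3 VM object."""

        def __init__(self, vm, loop=None):
            self._vm = vm
            self._loop = loop or asyncio.get_event_loop()

        def start(self):
            # core3 start() is a coroutine; run it to completion here
            return self._loop.run_until_complete(self._vm.start())

        def run(self, command, **kwargs):
            # same trick hides asyncio from the test writer
            return self._loop.run_until_complete(
                self._vm.run(command, **kwargs))

        def __getattr__(self, name):
            # everything not wrapped passes through to the real VM object
            return getattr(self._vm, name)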
Test base functions of the dom0 module (creating a VM, setting a
property) and configuring the system inside a VM (through a DispVM). The
latter is done for each available template (the process uses salt
installed in that template, not copied from dom0).
QubesOS/qubes-issues#3316
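For the per-template part, the test reduces to something like the
qubesctl call below (a sketch; flags assumed from the qubes-mgmt-salt
tooling):

    import subprocess

    def configure_via_salt(vm_name):
        # Apply salt states to the given VM; the configuration itself
        # runs in a DispVM based on the VM's template, using salt
        # installed in that template (not copied from dom0).
        subprocess.check_call(
            ['qubesctl', '--skip-dom0', '--targets=' + vm_name,
             'state.highstate'])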
When dom0 does not provide the kernel, it should also not set the kernel
command line in the libvirt config. Otherwise qemu in stubdom fails to
start, because it gets the -append option without -kernel, which is an
illegal configuration.
Fixes QubesOS/qubes-issues#3339
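The rule amounts to the following (a sketch of the intent, not the
actual libvirt template code):

    def kernel_xml(vm):
        # qemu treats -append without -kernel as an error, so when dom0
        # does not provide the kernel, emit neither element
        if not vm.kernel:
            return ''
        return ('<kernel>{}</kernel>\n'
                '<cmdline>{}</cmdline>').format(
                    vm.storage.kernels_dir + '/vmlinuz', vm.kernelopts)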
There may be cases when the VM providing the network to other VMs is
started later - for example on VM restart. While this is a rare case
(and currently broken because of QubesOS/qubes-issues#1426), do not
assume the netvm is always started before the VMs it serves.
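A sketch of handling the late-start case with a core3-style extension
(hook and method names are assumptions):

    import qubes.ext

    class NetvmStartedLate(qubes.ext.Extension):

        @qubes.ext.handler('domain-start')
        def on_domain_start(self, vm, event, **kwargs):
            # if a network-providing VM comes up late, (re)connect the
            # network frontends of its already-running client VMs
            if not getattr(vm, 'provides_network', False):
                return
            for client in vm.connected_vms:
                if client.is_running():
                    client.attach_network()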
Add a property for the IPv6 address ('ip6'). Build the default value
similarly to IPv4 - a common prefix + QID, or Disp ID for DispVMs.
This is all disabled unless the 'ipv6' feature is enabled. It is
inherited from the netvm (not the template).
Even when enabled, a VM may decide not to use it - or simply not support
it.
QubesOS/qubes-issues#718
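A sketch of the default value computation, assuming an illustrative
prefix and the netvm-based feature check described in the next change
(names are assumptions):

    import ipaddress

    IP6_PREFIX = 'fd09:24ef:4179::'  # illustrative, not the real prefix

    def default_ip6(vm):
        # no IPv6 address unless the 'ipv6' feature, inherited from the
        # netvm, is enabled
        if not vm.features.check_with_netvm('ipv6', False):
            return None
        # mirror the IPv4 scheme: common prefix + QID (or Disp ID)
        suffix = vm.dispid if getattr(vm, 'dispid', None) else vm.qid
        return str(ipaddress.IPv6Address(
            int(ipaddress.IPv6Address(IP6_PREFIX)) + suffix))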
Allow using the default feature value from the netvm, not the template.
This makes sense for network-related features like using Tor, supporting
IPv6 etc.
Similarly to check_with_template, expose it on the Admin API as well.
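The lookup order reduces to roughly this (a sketch; in reality the
method lives on the features collection):

    def check_with_netvm(self, feature, default=None):
        # check the VM itself first, then walk up the netvm chain;
        # the template is deliberately not consulted here
        vm = self.vm
        while vm is not None:
            if feature in vm.features:
                return vm.features[feature]
            vm = getattr(vm, 'netvm', None)
        return default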
Having both default_netvm and default_fw_netvm causes a lot of
confusion, because it isn't clear to the user which one is used when.
Additionally, changing the provides_network property may also change the
netvm property, which may be an unintended effect. As a whole this makes
it hard to:
- cover all netvm-changing actions with policy for the Admin API
- cover all netvm-changing events (for example to apply the change to
the running VM, or to check for netvm loops)
As suggested by @qubesuser, kill the default_fw_netvm property and
simplify the logic around it.
Since we're past rc1, also implement migration logic, and add tests for
said migration.
Fixes QubesOS/qubes-issues#3247
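The migration can be summarized as follows (a hedged sketch, not the
actual loading code):

    def migrate_default_fw_netvm(app, old_default_fw_netvm):
        # VMs providing network used to follow default_fw_netvm when
        # their netvm was left at the default; pin that value explicitly
        # so dropping the property does not silently change their netvm
        for vm in app.domains:
            if getattr(vm, 'provides_network', False) \
                    and vm.property_is_default('netvm'):
                vm.netvm = old_default_fw_netvm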
* qubesos/pr/166:
create "lvm" pool using rootfs thin pool instead of hardcoding qubes_dom0-pool00
change default pool code to be fast
cache PropertyHolder.property_list and use O(1) property name lookups
remove unused netid code
cache isinstance(default, collections.Callable)
don't access netvm if it's None in visible_gateway/netmask
There were many cases where the check was missing:
- changing default_netvm
- resetting netvm to default value
- loading already broken qubes.xml
Since it was possible to create a broken qubes.xml using legal calls, do
not reject loading such a file; instead, break the loop(s) by setting
netvm to None when a loop is detected. This will also be useful if not
all places are covered yet...
Place the check in the default_netvm setter. Skip it during qubes.xml
loading (when events_enabled=False), but still keep it in the setter, to
_validate_ the value before any property-* event gets fired.
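The check itself is a simple chain walk (sketch, with an assumed
exception type):

    def validate_no_netvm_loop(vm, new_netvm):
        # walk the prospective netvm chain; meeting vm again means the
        # assignment would create a loop
        seen = set()
        node = new_netvm
        while node is not None and node not in seen:
            if node is vm:
                raise ValueError('netvm loop detected')
            seen.add(node)
            node = node.netvm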
If an HVM has a PCI device, it can't use PoD, so it needs 'maxmem'
memory to be started. Request that much from qmemman.
Note that this is somewhat independent of enabling dynamic memory
management for the VM (the `service.meminfo-writer` feature). Even if
the VM initially had maxmem memory assigned, it can later be ballooned
down.
QubesOS/qubes-issues#3207
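The resulting request is roughly (a sketch; the PCI-device test is an
assumption about the devices API):

    def initial_memory(vm):
        # PoD is unavailable with PCI passthrough, so such a domain must
        # be started with the full maxmem; otherwise the usual initial
        # 'memory' amount is enough
        if vm.virt_mode == 'hvm' and any(vm.devices['pci'].persistent()):
            return vm.maxmem
        return vm.memory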
This qrexec call is meant for services which require some kind of
"registering" before use. After registering, the backend should invoke
this call with the frontend as the intended destination, with the actual
service as the argument of this call and that service's argument as the
payload.
By default this qrexec call is disabled by policy.
Signed-off-by: Wojtek Porczyk <woju@invisiblethingslab.com>
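From the backend's side the call looks roughly like this (the service
name and CLI usage are assumptions):

    import subprocess

    def register_argument(frontend, service, argument):
        # the actual service goes into this call's argument, while the
        # argument being registered travels as the payload
        subprocess.run(
            ['qrexec-client-vm', frontend,
             'policy.RegisterArgument+' + service],
            input=argument.encode(), check=True)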
Not sure how this ever worked before, if it did.
The device nodes pointed to by /dev/qubes_dom0/* are owned by root:disk
with perms 660, the qubes user is not in the disk group, and the service
is invoked as the qubes user, not root.
* 20171107-storage:
api/admin: add API for changing revisions_to_keep dynamically
storage/file: move revisions_to_keep restrictions to property setter
api/admin: hide dd statistics in admin.vm.volume.Import call
storage/lvm: fix importing different-sized volume from another pool
storage/file: fix preserving spareness on volume clone
api/admin: add pool size and usage to admin.pool.Info response
storage: add size and usage properties to pool object
* 20171107-tests-backup-api-misc:
test: make race condition on xterm close less likely
tests/backupcompatibility: fix handling 'internal' property
backup: fix handling target write error (like no disk space)
tests/backupcompatibility: drop R1 format tests
backup: use offline_mode for backup collection
qubespolicy: fix handling '$adminvm' target with ask action
app: drop reference to libvirt object after undefining it
vm: always log startup fail
api: do not log handled errors sent to a client
tests/backups: convert to new restore handling - using qubesadmin module
app: clarify error message on failed domain remove (used somewhere)
Fix qubes-core.service ordering