The QubesDB daemon no longer removes the socket created by a new instance, so
one part of the VM restart race condition is solved. The only remaining part
is to ensure that we really connect to the new instance, instead of talking
to the old one (soon to be terminated).
Fixes QubesOS/qubes-issues#1694
VM files may already be removed. Don't fail on this while removing a VM -
missing files are probably the very reason the domain is being removed.
The qvm-remove tool has its own guard for this, but it isn't enough: if
rmtree(dir_path) fails, storage.remove() would not be called, so
non-file storage would not be cleaned up.
This is also needed to correctly handle template reinstallation, where the
VM directory is moved away so that create_on_disk can be called again.
QubesOS/qubes-issues#2412
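A minimal sketch of the intended ordering, with assumed names (remove_from_disk
and the attributes used here are illustrative, not the actual core3 code):

    import logging
    import shutil

    def remove_from_disk(vm):
        # Don't let a missing directory prevent storage cleanup.
        try:
            shutil.rmtree(vm.dir_path)
        except FileNotFoundError:
            # Files already gone - most likely the very reason the
            # domain is being removed.
            logging.getLogger('qubes').warning(
                'VM directory %s already removed', vm.dir_path)
        # Called unconditionally, so non-file storage pools are cleaned
        # up even when there were no files to delete.
        vm.storage.remove()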
Just calling pool.init_volume isn't enough - a lot of code depends on
additional data loaded into the vm.storage object. Provide a convenient
wrapper for this.
At the same time, fix loading extra volumes from qubes.xml - don't fail
on a volume not mentioned in the initial vm.volume_config.
QubesOS/qubes-issues#2256
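A rough sketch of such a wrapper, under assumed names (the wrapper itself,
get_pool and the volume_config layout are illustrative):

    def init_volume(vm, name, volume_config):
        # Create the volume in its pool *and* register it where the
        # rest of the code expects to find it, not just in the pool.
        volume_config['name'] = name
        pool = vm.app.get_pool(volume_config['pool'])
        volume = pool.init_volume(vm, volume_config)
        vm.volume_config[name] = volume_config
        vm.volumes[name] = volume
        return volume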
Set the parameters for possibly hiding the domain's real IP before attaching
the network to it; otherwise we'd have a race condition with the
vif-route-qubes script.
QubesOS/qubes-issues#1143
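The fix boils down to ordering; roughly like this (the helper names below are
made up for illustration, only attachDevice is the real libvirt call):

    def attach_network(vm):
        # 1. Publish the (fake) IP parameters to QubesDB first, so
        #    vif-route-qubes sees them when the vif appears...
        vm.netvm.create_qdb_entries_for(vm)   # hypothetical helper
        # 2. ...and only then attach the network device.
        vm.libvirt_domain.attachDevice(vm.network_device_xml())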
This is the IP known to the domain itself and to downstream domains. It may
be different from the one seen by its upstream domain.
Related to QubesOS/qubes-issues#1143
This helps hide the VM IP for anonymous VMs (Whonix), even when some
application leaks it. The VM will know only some fake IP, which should be set
to something as common as possible.
The feature is mostly implemented at the (Proxy)VM side, using NAT in a
separate network namespace. The core here only passes arguments to it.
It is designed so that multiple VMs can use the same IP and still not
interfere with each other. Even more: it is possible to address each of them
(using their "native" IP), even when multiple of them share the same "fake"
IP.
The original approach (marmarek/old-qubes-core-admin#2) passed network script
arguments by appending them to the script name, but libxl in Xen >= 4.6
fixed that side effect and it isn't possible anymore. So use QubesDB
instead.
From the user's POV, this adds 3 "features":
- net/fake-ip - IP address visible in the VM
- net/fake-gateway - default gateway in the VM
- net/fake-netmask - network mask
The feature is enabled if net/fake-ip is set (to some IP address) and is
different from the VM's native IP. All of these "features" can be set on a
template, to affect all of its VMs.
Firewall rules etc. in the (Proxy)VM should still be applied to the VM's
"native" IP.
Fixes QubesOS/qubes-issues#1143
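For illustration, with the core3 features API this might look like the
following (the addresses are examples only; setting the same keys on a
template covers all of its VMs):

    # Enable IP hiding: the VM will see only the fake address.
    vm.features['net/fake-ip'] = '192.168.0.2'
    vm.features['net/fake-gateway'] = '192.168.0.1'
    vm.features['net/fake-netmask'] = '255.255.255.0'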
Instead of an excerpt from /proc/meminfo, use just one integer. This makes
qmemman handling much easier and eases implementation for non-Linux OSes
(where /proc/meminfo doesn't exist).
For now, keep support for the old format too.
Fixes QubesOS/qubes-issues#1312
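A sketch of how both formats can be accepted (the function name and the
old-format arithmetic are simplified assumptions, not the exact qmemman
code):

    def parse_used_memory(untrusted_meminfo):
        untrusted_meminfo = untrusted_meminfo.strip()
        # New format: a single integer (kB).
        if untrusted_meminfo.isdigit():
            return int(untrusted_meminfo)
        # Old format: an excerpt of /proc/meminfo. Parse the
        # 'Name:   12345 kB' lines and derive one number from them
        # (the real calculation takes more fields into account).
        fields = {}
        for line in untrusted_meminfo.splitlines():
            name, value = line.split(':', 1)
            fields[name.strip()] = int(value.split()[0])
        return fields['MemTotal'] - fields['MemFree']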
There is no point in changing the *public API* just for the sake of change,
without any better reason. It turned out most of those settings will stay
the same in Qubes 4.0, so keep the names the same.
This reverts commit 2d6ad3b60c.
QubesOS/qubes-issues#1812
This is a migration of these core2 commits:
commit d0ba43f253
Author: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Date: Mon Jun 6 02:21:08 2016 +0200
core: start guid as normal user even when VM started by root
Another attempt to avoid permissions-related problems...
QubesOS/qubes-issues#1768
commit 89d002a031
Author: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Date: Mon Jun 6 02:19:51 2016 +0200
core: use runuser instead of sudo for switching root->user
There are problems with using sudo in early system startup
(systemd-logind not running yet, pam_systemd timeouts). Since we don't
need a full session here, runuser is good enough (even better: it's faster).
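A minimal sketch of the switch (the guid invocation shown is illustrative):

    import subprocess

    def start_guid_as_user(user, domid):
        # runuser doesn't create a PAM session, unlike sudo, so it
        # works in early system startup when systemd-logind isn't
        # running yet - and it's faster.
        subprocess.check_call(
            ['runuser', '-u', user, '--', 'qubes-guid', '-d', str(domid)])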
commit 2265fd3d52
Author: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Date: Sat Jun 4 17:42:24 2016 +0200
core: start qubesdb as normal user, even when VM is started by root
On VM start, the old qubesdb-daemon is terminated (if still running). In
practice this happens only on VM restart (shutdown and a quick start
again). But in that case, if the VM was started by root, such an operation
would fail.
So when a VM is started by root, make sure that qubesdb-daemon will be
running as a normal user (the first user in the 'qubes' group - there should
be only one).
Fixes QubesOS/qubes-issues#1745
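Picking that user is straightforward with the standard library; a sketch:

    import grp

    def qubes_user():
        # First member of the 'qubes' group; by convention there
        # should be exactly one.
        return grp.getgrnam('qubes').gr_mem[0]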
* core3-devices:
Fix core2migration and tests for new devices API
tests: more qubes.devices tests
qubes/ext/pci: implement pci-no-strict-reset/BDF feature
qubes/tools: allow calling qvm-device as qvm-devclass (like qvm-pci)
qubes: make pylint happy
qubes/tools: add qvm-device tool (and tests)
tests: load qubes.tests.tools.qvm_ls
tests: PCI devices tests
tests: add context manager to catch stdout
qubes/ext/pci: move PCI devices handling to an extension
qubes/devices: use more detailed exceptions than just KeyError
qubes/devices: allow non-persistent attach
qubes/storage: misc fixes for VM-exposed block devices handling
qubes: new devices API
Fixes QubesOS/qubes-issues#2257
Implement the required event handlers according to the documentation in
qubes.devices.
A modification of qubes.devices.DeviceInfo is needed to allow dynamic,
read-only properties.
QubesOS/qubes-issues#2257
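The idea behind that modification, as a sketch (this is not the actual
qubes.devices code):

    class DeviceInfo:
        # Backend-specific attributes are accepted at construction
        # time and exposed as attributes afterwards, without
        # hardcoding them in the class.
        def __init__(self, backend_domain, ident, **kwargs):
            self.backend_domain = backend_domain
            self.ident = ident
            self._data = dict(kwargs)

        def __getattr__(self, name):
            # Only called when normal attribute lookup fails.
            data = self.__dict__.get('_data', {})
            if name in data:
                return data[name]
            raise AttributeError(name)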
Allow the device plugin to list attached and available devices. Enforce
at the API level that every device is exposed by some domain.
This commit only changes the devices API; it does not update the existing
users (pci) yet.
QubesOS/qubes-issues#2257
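Hypothetical usage of the resulting API (the method names are illustrative):

    def dump_pci_devices(vm):
        # Devices this domain exposes to others...
        for dev in vm.devices['pci'].available():
            print('exposes', dev.ident)
        # ...and devices currently attached to it; each one carries
        # the backend domain that exposes it.
        for dev in vm.devices['pci'].attached():
            print('attached', dev.backend_domain.name, dev.ident)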
- fix assigning the 'template' property - do not do it if the VM already has
it set
- cap default maxmem at 4000, as we clamp it to 10*memory anyway (and the
default memory is 400, so 10*400 = 4000)
- DispVM is no longer a special case for storage
- Add missing 'rw=True' for volatile volume
- Handle storage initialization (copy&paste from AppVM)
- Clone properties from DispVM template
QubesOS/qubes-issues#2253
This directory is not only for disk images (in fact, disk images may be
elsewhere, depending on the chosen volume pool), so it is cleaner to
handle it (create/remove) directly in the QubesVM class.
Apparently the most important (the only?) property required in offline
mode is "is_running". So let's patch it to return False and make sure
any other libvirt usage results in a failure.
Or maybe it would be better to simply return False from vm.is_running when
the libvirt connection fails? But then it would not be possible to use
offline mode while having (some, probably unrelated) libvirtd running at the
same time.
Fixes QubesOS/qubes-issues#2008
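A sketch of the chosen approach, using unittest.mock (the patch target path
is an assumption):

    import unittest.mock

    import qubes.vm.qubesvm

    # In offline mode: is_running always reports False, while any
    # other libvirt usage keeps failing loudly.
    patch = unittest.mock.patch.object(
        qubes.vm.qubesvm.QubesVM, 'is_running', return_value=False)
    patch.start()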
- Remove old qvm-remove
- Remove a log line from Storage, because it prints confusing lines, like:
Removing volume kernel: /var/lib/qubes/vm-kernels/4.1.13-6/modules.img
This commit eliminates import statements happening in the middle of the
file (between two class definitions). The cycles are still there. The
only magic module is qubes itself.
- Remove all *_dev_config methods
- Move checks whether a storage image exists to XenPool
- Storage.remove wraps Pool.remove()
- Stop volumes on domain shutdown/kill
- Warn when using deprecated methods
- introduce 'firewall-changed' event
- add reload_firewall_for_vm stub function
Should that function be private, called only from appropriate event
handlers?
QubesOS/qubes-issues#1815
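A sketch of how a handler might consume the new event (the extension class
and the stub's location are assumptions; the stub is left empty as in the
commit):

    import qubes.ext

    def reload_firewall_for_vm(vm):
        # Stub (see above); real reloading logic to be filled in.
        pass

    class FirewallWatch(qubes.ext.Extension):
        @qubes.ext.handler('firewall-changed')
        def on_firewall_changed(self, vm, event, **kwargs):
            reload_firewall_for_vm(vm)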
It may be (and by default is) a path relative to the VM directory.
This code will be gone in the final version, after merging the firewall
configuration into qubes.xml. But for now, have something testable.
This is a much less error-prone approach. The previous approach didn't work
because VMs weren't added here at the 'domain-init'/'domain-loaded' events.
And even after adding such handlers it still wasn't working, because of
QubesOS/qubes-issues#1816.
It may be a little slower, but since it isn't used that often
(starting/stopping a VM and reloading the firewall), it shouldn't be a
problem.