Commit Graph

326 Commits

Author SHA1 Message Date
Marek Marczykowski-Górecki
f9593ce3e6
vm: allow files in kernels_dir override built-in default kernelopts
If default-kernelopts-pci.txt is present, it will override the built-in
default kernelopts for VMs with a PCI device assigned.
Similarly, if default-kernelopts-nopci.txt is present, it will override
the default kernelopts for VMs without PCI devices.
For template-based VMs, the template's kernelopts take precedence over
default-kernelopts-nopci.txt, but not over default-kernelopts-pci.txt.

Fixes QubesOS/qubes-issues#4839
2019-02-23 12:53:49 +01:00
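
A minimal sketch of the selection order described above, with hypothetical
argument names and built-in defaults (the real property getter may differ):

    import os

    def default_kernelopts(kernels_dir, has_pci, template_kernelopts=None,
                           builtin_pci='', builtin_nopci=''):
        # File names come from the commit message; their contents override
        # the built-in defaults passed in as builtin_pci / builtin_nopci.
        name = ('default-kernelopts-pci.txt' if has_pci
                else 'default-kernelopts-nopci.txt')
        override = os.path.join(kernels_dir, name)
        if has_pci:
            # default-kernelopts-pci.txt wins even over the template's value
            if os.path.exists(override):
                with open(override) as f:
                    return f.read().rstrip('\n')
            return builtin_pci
        # without PCI devices, the template's kernelopts take precedence
        # over default-kernelopts-nopci.txt
        if template_kernelopts is not None:
            return template_kernelopts
        if os.path.exists(override):
            with open(override) as f:
                return f.read().rstrip('\n')
        return builtin_nopci
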
Marek Marczykowski-Górecki
a9ec2bb2c3
vm/qubesvm: fix race condition in failed startup handling
Instead of checking whether the domain is still running/paused, try to kill it
anyway and ignore the appropriate exception. Otherwise the domain could die
between the check and the kill.
2019-01-19 03:25:20 +01:00
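
The pattern in question, sketched with a hypothetical stand-in for the real
exception class from qubes.exc:

    class QubesVMNotStartedError(Exception):
        """Stand-in for the exception raised when the domain is not running."""

    def kill_if_running(vm):
        # racy version: `if vm.is_running(): vm.kill()` - the domain can die
        # between the check and the kill.  Instead, just try and ignore the
        # "not running" error.
        try:
            vm.kill()
        except QubesVMNotStartedError:
            pass  # the domain died on its own first - nothing left to do
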
Marek Marczykowski-Górecki
3728230e3c
Merge branch 'maxmem' 2018-12-09 18:38:21 +01:00
AJ Jordan
d4e567cb10
Fix typo 2018-12-06 20:43:39 -05:00
Marek Marczykowski-Górecki
7d1bcaf64c Introduce management_dispvm property
The new property is meant for the management stack (Salt) to set which DVM
template should be used to maintain a given VM. Since the DispVM based on
it will be given ultimate control over the target VM (qubes.VMShell
service), it should be trusted. The one pointed to by default_dispvm is not
necessarily trusted.

The property defaults to the value from the template (if any), and then
to a global management_dispvm property. By default it is set to None.

Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
2018-12-03 19:18:26 +01:00
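
A rough sketch of the default-value chain described above (hypothetical
function, not the actual qubes.property definition):

    def default_management_dispvm(vm, app):
        # template-based VM: inherit the value from the template first
        template = getattr(vm, 'template', None)
        if template is not None and template.management_dispvm is not None:
            return template.management_dispvm
        # otherwise fall back to the global property, which defaults to None
        return app.management_dispvm
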
Marek Marczykowski-Górecki
4dc8631010
Use maxmem=0 to disable qmemman, add more automation to it
Use maxmem=0 for disabling dynamic memory balancing, instead of the cryptic
service.meminfo-writer feature. Under the hood, the meminfo-writer service
is still set based on the maxmem property (directly in qubesdb, not the
vm.features dict).
Having this as a property (not a "feature") allows sensible handling of the
default value. Specifically, disable it automatically if it would otherwise
crash the VM. This is the case for:
 - domains with PCI devices (Xen does not support PoD then)
 - domains without a balloon driver and/or the meminfo-writer service

The check for the latter is heuristic (assume that the presence of 'qrexec'
also indicates balloon driver support), but it holds for currently
supported systems.

This also allows more reliable control of libvirt config: do not set
memory != maxmem, unless qmemman is enabled.

memory != maxmem only makes sense if qmemman is enabled for the given
domain.  Besides wasting some domain resources on extra page tables etc.,
for HVM domains this is harmful, because the maxmem-memory difference is
backed by the Populate-on-Demand pool, which - when depleted - will kill
the domain. This means a domain without a balloon driver will die as soon
as it tries to use more than its initial memory - but without the balloon
driver it sees maxmem memory and doesn't know about the lower limit.

Fixes QubesOS/qubes-issues#4135
2018-11-21 02:13:25 +01:00
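
The automatic-disable heuristic, sketched with plain parameters instead of
the real property machinery (names are illustrative):

    def effective_maxmem(maxmem, has_pci_devices, has_qrexec):
        # maxmem == 0 means: qmemman disabled for this domain
        if has_pci_devices:
            return 0    # Xen PoD is not supported with PCI passthrough
        if not has_qrexec:
            return 0    # no qrexec -> assume no balloon driver / meminfo-writer
        return maxmem
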
Marek Marczykowski-Górecki
35a53840f1
vm: send domain-start-failed event also if some device is missing
Checking device presence wasn't covered by the try/except block that sends
the event.
2018-11-15 18:25:29 +01:00
Marek Marczykowski-Górecki
0eab082d85
ext/core-features: make 'template-postinstall' event async
It makes a lot of sense to perform long-running operations in that event
handler, including calling back into the VM. Allow that by using
fire_event_async, not just fire_event.

Also, document the event.
2018-11-15 18:25:29 +01:00
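
With fire_event, a blocking handler would stall qubesd; fire_event_async
awaits coroutine handlers instead, so something like this hypothetical
handler becomes possible (the service name and call are illustrative):

    async def on_template_postinstall(vm, event, **kwargs):
        # long-running work, including calling back into the VM,
        # is fine here because the event is fired asynchronously
        await vm.run_service_for_stdio('qubes.PostInstallSetup')
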
Marek Marczykowski-Górecki
328697730b
vm: fix deadlock on qrexec timeout handling
vm.kill() will try to acquire vm.startup_lock, so it can't be called while
already holding it.
Fix this by extracting vm._kill_locked(), which expects the lock to be
already taken by the caller.
2018-11-04 17:05:55 +01:00
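
The locking split described above, as a minimal asyncio sketch (class and
method bodies are placeholders):

    import asyncio

    class Domain:
        def __init__(self):
            self.startup_lock = asyncio.Lock()

        async def kill(self):
            # public entry point: acquire the lock, then do the work
            async with self.startup_lock:
                await self._kill_locked()

        async def _kill_locked(self):
            # expects startup_lock to already be held by the caller, so code
            # that holds the lock (e.g. the qrexec timeout handler) can call
            # this directly without deadlocking on kill()
            pass
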
Marek Marczykowski-Górecki
f13029219b
vm: disable/enable qubes-vm@ service when domain is removed/created
If a domain is set to autostart, the qubes-vm@ systemd service is used to
start it at boot. Clean up the service when the domain is removed, and
similarly enable the service when a domain is created that already has
autostart=True.

Fixes QubesOS/qubes-issues#4014
2018-10-27 16:44:53 +02:00
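
A sketch of the unit housekeeping this implies, as a hypothetical helper
around systemctl, meant to run when a domain is created or removed:

    import subprocess

    def sync_autostart_unit(vm_name, autostart):
        # keep the qubes-vm@<name> systemd unit in sync with the autostart
        # property: enable it on creation (if autostart=True), disable it
        # when the domain is removed
        action = 'enable' if autostart else 'disable'
        subprocess.run(
            ['systemctl', action, 'qubes-vm@{}.service'.format(vm_name)],
            check=True)
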
Marek Marczykowski-Górecki
2c1629da04
vm: call after-shutdown cleanup also from vm.kill and vm.shutdown
Cleaning up after domain shutdown (domain-stopped and domain-shutdown
events) relies on libvirt events, which may be unreliable in some cases
(events may be processed with some delay, or, if libvirt was restarted in
the meantime, may not happen at all). So, instead of ensuring only
proper ordering between shutdown cleanup and the next startup, also trigger
the cleanup when we know for sure the domain isn't running:
 - at vm.kill() - after libvirt confirms domain was destroyed
 - at vm.shutdown(wait=True) - after successful shutdown
 - at vm.remove_from_disk() - after ensuring it isn't running but just
 before actually removing it

This fixes various race conditions:
 - qvm-kill && qvm-remove: remove could happen before the shutdown cleanup
 was done and the storage driver would be confused by that
 - qvm-shutdown --wait && qvm-clone: clone could happen before new content was
 committed to the original volume, making the copy reflect the previous VM state
(and probably more)

Previously it wasn't such a big issue on default configuration, because
LVM driver was fully synchronous, effectively blocking the whole qubesd
for the time the cleanup happened.

To avoid code duplication, factor out an _ensure_shutdown_handled function
calling the actual cleanup (and possibly cancelling the one scheduled by the
libvirt event). Note that now the "Duplicated stopped event from libvirt
received!" warning may happen in normal circumstances, not only because of
some bug.

It is very important that the post-shutdown cleanup happens when the domain
is not running. To ensure that, take startup_lock and, under it, 1) ensure
it is halted and only then 2) execute the cleanup. This isn't necessary
when removing it from disk, because it is already removed from the
collection at that time, which also avoids other calls to it (see also the
"vm/dispvm: fix DispVM cleanup" commit).
Actually, taking the startup_lock in the remove_from_disk function would
cause a deadlock in the DispVM auto cleanup code:
 - vm.kill (or other trigger for the cleanup)
   - vm.startup_lock acquire   <====
     - vm._ensure_shutdown_handled
       - domain-shutdown event
         - vm._auto_cleanup (in DispVM class)
           - vm.remove_from_disk
             - cannot take vm.startup_lock again
2018-10-26 23:54:08 +02:00
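
The resulting call ordering, sketched with the function names from the
commit message (bodies are placeholders):

    async def kill(vm):
        async with vm.startup_lock:
            await vm._kill_locked()              # 1) make sure it is halted
            await vm._ensure_shutdown_handled()  # 2) only then run the cleanup

    async def remove_from_disk(vm):
        # no startup_lock here: the VM is already gone from the collection,
        # and taking the lock would deadlock the DispVM auto-cleanup chain
        # shown above (kill -> domain-shutdown -> remove_from_disk)
        await vm._ensure_shutdown_handled()
        # ... actual removal follows
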
Marek Marczykowski-Górecki
e1f65bdf7b
vm: add shutdown_timeout property, make vm.shutdown(wait=True) use it
vm.shutdown(wait=True) waited indefinitely for the shutdown, which made it
useless without some boilerplate handling the timeout. Since the timeout
may depend on the operating system inside, add a per-VM property for it,
with the value inherited from the template and then from the global
default_shutdown_timeout property.

When the timeout is reached, the method raises an exception - whether to
kill the VM or not is left to the caller.

Fixes QubesOS/qubes-issues#1696
2018-10-26 23:54:04 +02:00
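
A sketch of the wait-with-timeout behaviour using asyncio.wait_for; the
exception class and the wait_for_shutdown() coroutine are hypothetical
stand-ins:

    import asyncio

    class QubesVMShutdownTimeoutError(Exception):
        """Stand-in for the exception raised when the timeout is hit."""

    async def shutdown_and_wait(vm, timeout):
        await vm.shutdown()
        try:
            # wait until the domain reports being halted, up to `timeout` seconds
            await asyncio.wait_for(vm.wait_for_shutdown(), timeout)
        except asyncio.TimeoutError:
            # whether to call vm.kill() now is left to the caller
            raise QubesVMShutdownTimeoutError(
                '{} did not shut down within {}s'.format(vm.name, timeout))
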
Marek Marczykowski-Górecki
58bcec2a64
qubesvm: improve error message about same-pool requirement
Make it clear that volume creation fails because the volume needs to be in
the same pool as its parent. This message is shown in the context of
`qvm-create -p root=MyPool`, for example, and the previous message didn't
make sense at all.

Fixes QubesOS/qubes-issues#3438
2018-10-18 00:03:05 +02:00
Marek Marczykowski-Górecki
ba210c41ee
qubesvm: don't crash VM creation if icon symlink already exists
It can be a leftover from a previous failed attempt. Don't crash on it;
replace it instead.

QubesOS/qubes-issues#3438
2018-10-18 00:01:45 +02:00
Rusty Bird
bee69a98b9
Add default_qrexec_timeout to qubes-prefs
When a VM (or its template) does not explicitly set a qrexec_timeout,
fall back to a global default_qrexec_timeout (with default value 60),
instead of hardcoding the fallback value to 60.

This makes it easy to set a higher timeout for the whole system, which
helps users who habitually launch applications from several (not yet
started) VMs at the same time. 60 seconds can be too short for that.
2018-09-16 18:42:48 +00:00
Rusty Bird
b3983f5ef8
'except FileNotFoundError' instead of ENOENT check 2018-09-13 19:46:45 +00:00
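
The change in pattern, on a hypothetical path:

    import errno

    path = '/nonexistent/example'

    # before: manual errno check
    try:
        with open(path) as f:
            data = f.read()
    except IOError as e:
        if e.errno != errno.ENOENT:
            raise
        data = None

    # after: Python 3's dedicated exception class says the same thing directly
    try:
        with open(path) as f:
            data = f.read()
    except FileNotFoundError:
        data = None
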
Marek Marczykowski-Górecki
7f1e2741ec
Merge remote-tracking branch 'qubesos/pr/228'
* qubesos/pr/228:
  storage/lvm: filter out warning about intended over-provisioning
  tests: fix getting kernel package version inside VM
  tests/extra: add start_guid option to VMWrapper
  vm/qubesvm: fire 'domain-start-failed' event even if fail was early
  vm/qubesvm: check if all required devices are available before start
  storage/lvm: fix reporting lvm command error
  storage/lvm: save pool's revision_to_keep property
2018-09-07 01:06:59 +02:00
Marek Marczykowski-Górecki
ee25f7c7bb
vm: fix error reporting on PVH without kernel set
Fixes QubesOS/qubes-issues#4254
2018-09-03 00:23:05 +02:00
Jean-Philippe Ouellet
e95ef5f61d
Add domain-paused/-unpaused events
Needed for event-driven domains-tray UI updating and anti-GUI-DoS
usability improvements.

Catches errors from event handlers to protect libvirt, and logs to the
main qubesd logger singleton (by default, the systemd journal).
2018-08-01 05:41:50 -04:00
Marek Marczykowski-Górecki
e51efcf980
vm: document domain-start-failed event 2018-07-16 22:02:59 +02:00
Marek Marczykowski-Górecki
af2435c0d4
Make some properties default to template's value (if any)
Multiple properties are related to the system installed inside the VM, so
it makes sense to have them the same for all VMs based on the same
template. Modify the default value getter to first try to get the value
from the template (if any) and only if that fails, fall back to the
original default value.
This change is made to those properties:
 - default_user (it was already this way)
 - kernel
 - kernelopts
 - maxmem
 - memory
 - qrexec_timeout
 - vcpus
 - virt_mode

This is especially useful for manually installed templates (like
Windows).

Related to QubesOS/qubes-issues#3585
2018-07-16 22:02:58 +02:00
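
A rough sketch of the modified default getter (hypothetical helper; the
real implementation lives in the property machinery):

    def default_from_template(vm, prop_name, original_default):
        # try the template's value first (for template-based VMs only),
        # then fall back to the original default
        template = getattr(vm, 'template', None)
        if template is not None:
            try:
                return getattr(template, prop_name)
            except AttributeError:
                pass
        return original_default
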
Marek Marczykowski-Górecki
be2465c1f9
Fix issues found by pylint 2.0
Resolve:
 - no-else-return
 - useless-object-inheritance
 - useless-return
 - consider-using-set-comprehension
 - consider-using-in
 - logging-not-lazy

Ignore:
 - not-an-iterable - false positives for asyncio coroutines

Ignore all the above in qubespolicy/__init__.py, as the file will be
moved to separate repository (core-qrexec) - it already has a copy
there, don't desynchronize them.
2018-07-15 23:51:15 +02:00
Marek Marczykowski-Górecki
6a191febc3
vm/qubesvm: fire 'domain-start-failed' event even if fail was early
Fire the 'domain-start-failed' event even if the failure occurred during the
'domain-pre-start' event. This makes sure that anyone who has seen
'domain-pre-start' will also see 'domain-start-failed'. In some cases this
will look like a spurious 'domain-start-failed', but it is a safer option
than the alternative.
2018-04-13 16:07:32 +02:00
Marek Marczykowski-Górecki
ba82d9dc21
vm/qubesvm: check if all required devices are available before start
Fail the VM start early if some persistently-assigned device is missing.
This both saves time and provides a clearer error message.

Fixes QubesOS/qubes-issues#3810
2018-04-13 16:03:42 +02:00
Marek Marczykowski-Górecki
93b2424867
vm/qubesvm: fix missing icon handling in clone_disk_files()
Check for the icon's existence, not for its directory.
2018-04-06 12:10:50 +02:00
Marek Marczykowski-Górecki
f4be284331
vm/qubesvm: handle libvirt reporting domain already dead when killing
If the domain dies while qubesd is trying to kill it, qubesd may lose the
race and try to kill it anyway. Handle the libvirt exception in that case
and convert it to QubesVMNotStartedError - as it would be if qubesd had won
the race.

Fixes QubesOS/qubes-issues#3755
2018-04-02 23:56:03 +02:00
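
A sketch of the conversion; it requires the libvirt Python bindings, and
the exact error code checked by the real code may differ:

    import libvirt

    class QubesVMNotStartedError(Exception):
        """Stand-in for qubes.exc.QubesVMNotStartedError."""

    def kill_domain(libvirt_domain, vm_name):
        try:
            libvirt_domain.destroy()
        except libvirt.libvirtError as e:
            if e.get_error_code() == libvirt.VIR_ERR_NO_DOMAIN:
                # the domain died on its own just before we got to kill it -
                # report it the same way as if it had never been running
                raise QubesVMNotStartedError(
                    'domain {} is not running'.format(vm_name))
            raise
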
Marek Marczykowski-Górecki
1e9bf18bcf
Typo fix 2018-04-02 23:24:30 +02:00
Marek Marczykowski-Górecki
7c4566ec14
vm/qubesvm: allow 'features-request' to have async handlers
Some handlers may want to call into other VMs (or even the one asking),
but the vm.run() functions are coroutines, so they need to be called from
another coroutine. Allow for that.
Also fix a typo in the documentation.
2018-03-02 01:16:38 +01:00
Marek Marczykowski-Górecki
ba5d19e1b4
vm: provide better error message for VM startup timeout
"Cannot execute qrexec-daemon!" error is very misleading for a startup
timeout error, make it clearer. This rely on qrexec-daemon using
distinct exit code for timeout error, but even without that, include its
stderr in the error message.
2018-02-27 04:35:05 +01:00
Marek Marczykowski-Górecki
716114f676
Merge remote-tracking branch 'qubesos/pr/197'
* qubesos/pr/197:
  Don't fire domain-stopped/-shutdown while VM is still Dying
2018-02-22 21:14:55 +01:00
Rusty Bird
f96fd70f76
Don't fire domain-stopped/-shutdown while VM is still Dying
Lots of code expects the VM to be Halted after receiving one of these
events, but it could also be Dying or Crashed. Get rid of the Dying case
at least, by waiting until the VM has transitioned out of it.

Fixes e.g. the following DispVM cleanup bug:

    $ qvm-create -C DispVM --prop auto_cleanup=True -l red dispvm
    $ qvm-start dispvm
    $ qvm-shutdown --wait dispvm  # this won't remove dispvm
    $ qvm-start dispvm
    $ qvm-kill dispvm  # but this will
2018-02-22 19:53:29 +00:00
Christopher Laprise
75d8c553f9
Fix is_running non-boolean 2018-02-20 22:30:47 -05:00
Yassine Ilmi
a0d45aac9c
replaced underscore by dash and update test accordingly 2018-02-01 00:50:42 +00:00
Yassine Ilmi
1c3b412ef8
Add the default_user property from the qube to qubesdb so it is available when starting X. This is the first part of a fix for issue https://github.com/QubesOS/qubes-issues/issues/2372 2018-02-01 00:12:51 +00:00
Marek Marczykowski-Górecki
86026e364f
Fix starting PCI-having HVMs on early system boot and later
1. Make sure VMs are started after dom0's actual memory usage is reported
to qmemman, otherwise dom0 will hold 4GB even if just a little over 1GB
is needed at that time.

2. Request only vm.memory MB from qmemman, instead of vm.maxmem. While
HVMs with PCI devices indeed do not support populate-on-demand, this is
already handled in the libvirt XML.

The latter often caused VM startup failures on systems with 8GB of memory,
because maxmem is 4GB there and, with dom0 keeping the other 4GB (see
point 1), there is not enough memory to start any such VM.

Fixes QubesOS/qubes-issues#3462
2018-01-29 22:57:32 +01:00
Marek Marczykowski-Górecki
eb846f6647
Merge remote-tracking branch 'qubesos/pr/187'
* qubesos/pr/187:
  Don't fail create/clone if /var/lib/qubes/TYPE/NAME/ exists
  Make 'qvm-volume revert' really use the latest revision
  Fix wrong mocks of Volume.revisions
2018-01-22 15:39:13 +01:00
Marek Marczykowski-Górecki
74eb3f3208
Merge remote-tracking branch 'qubesos/pr/185'
* qubesos/pr/185:
  vm: remove doc for non-existing event `monitor-layout-change`
  vm: include tag/feature name in event name
  events: add support for wildcard event handlers
2018-01-22 15:32:57 +01:00
Rusty Bird
4ae854fdaf
Don't fail create/clone if /var/lib/qubes/TYPE/NAME/ exists 2018-01-21 22:28:47 +00:00
Marek Marczykowski-Górecki
dce3b609b4
qubesvm: do not try to define libvirt object in offline mode
The idea is to not touch libvirt at all.
2018-01-18 17:36:37 +01:00
Marek Marczykowski-Górecki
7905783861
qubesvm: PVH minor improvements
- use capital letters in acronyms in the documentation to match the
upstream documentation
- refuse to start a PVH without a kernel set - provide a meaningful
error message
2018-01-16 21:42:20 +01:00
Marek Marczykowski-Górecki
4ff53879a0
vm/qubesvm: default to PVH unless PCI devices are assigned
Fixes QubesOS/qubes-issues#2185
2018-01-15 03:34:46 +01:00
Marek Marczykowski-Górecki
d9da747ab0
vm/qubesvm: expose 'start_time' property over Admin API
It is useful at least for Qubes Manager.
2018-01-12 05:34:46 +01:00
Marek Marczykowski-Górecki
85e80f2329
vm/qubesvm: revert backup_timestamp to '%s' format
The human-readable format `str(datetime.datetime)` is a nightmare at the
Admin API level. In particular, setting the property in the same format it
was read in was not supported, and handling such a format in untrusted
input handling code is a bad idea. Revert to a simple integer format.
2018-01-12 05:34:45 +01:00
Marek Marczykowski-Górecki
f0fe02998b
vm: remove doc for non-existing event monitor-layout-change 2018-01-06 15:10:54 +01:00
Marek Marczykowski-Górecki
50d34755fa
vm: include tag/feature name in event name
Rename events:
 - domain-feature-set -> domain-feature-set:feature
 - domain-feature-delete -> domain-feature-delete:feature
 - domain-tag-add -> domain-tag-add:tag
 - domain-tag-delete -> domain-tag-delete:tag

Make it consistent with the property-* events. It makes more sense to
include the tag/feature name in the event name, so a handler can watch a
single tag/feature - which is the most common case. Otherwise, most
handlers would begin with `if feature == '...'` anyway, wasting time on
most events.

In cases where multiple features/tags should be handled by a single
handler, it is now possible to register a handler with wildcard, for
example `domain-feature-set:*`.
2018-01-06 15:05:34 +01:00
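
What registration looks like after the rename, sketched with qubes.ext
handlers (the handler signatures shown here are approximate):

    import qubes.ext

    class ExampleExtension(qubes.ext.Extension):
        # watch a single feature - the common case
        @qubes.ext.handler('domain-feature-set:gui')
        def on_gui_feature_set(self, vm, event, feature, value, **kwargs):
            pass

        # or watch all features with a wildcard handler
        @qubes.ext.handler('domain-feature-set:*')
        def on_any_feature_set(self, vm, event, feature, value, **kwargs):
            pass
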
Marek Marczykowski-Górecki
32c6083e1c
Make pylint happy
Fix things detected by updated pylint in Travis-CI
2017-12-21 18:19:10 +01:00
Marek Marczykowski-Górecki
faef890c9a
vm/qubesvm: write QubesDB /qubes-netvm-gateway6 entry when set
This is needed for a network-providing VM to actually provide an IPv6
connection too.

QubesOS/qubes-issues#718
2017-12-07 01:40:31 +01:00
Marek Marczykowski-Górecki
e12a66f103
vm/mix/net: use ipaddress module for ip and ip6 properties
It has built-in validation, which is much more elegant than a custom regex
or socket call.

Suggested by @woju
QubesOS/qubes-issues#718
2017-12-07 01:40:31 +01:00
Marek Marczykowski-Górecki
18f159f8ec
Add IPv6 related VM properties
Add a property for the IPv6 address ('ip6'). Build the default value
similarly to IPv4 - a common prefix + QID or Disp ID (for DispVMs).
All of this is disabled unless the 'ipv6' feature is enabled; it is
inherited from the netvm (not the template).
Even when enabled, the VM may decide not to use it - or simply not support
it.

QubesOS/qubes-issues#718
2017-12-07 01:40:30 +01:00
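
A sketch of building such a default with the ipaddress module; the prefix
here is a made-up ULA prefix, not necessarily the one Qubes actually uses:

    import ipaddress

    def default_ip6(qid, prefix='fd09:24ef:4179::'):
        # common prefix + QID (or Disp ID for DispVMs)
        base = int(ipaddress.IPv6Address(prefix))
        return str(ipaddress.IPv6Address(base + qid))

    print(default_ip6(5))    # -> fd09:24ef:4179::5
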
Marek Marczykowski-Górecki
da97f4d84c
qubesvm: make initial qmemman request consistent with libvirt config
If an HVM has a PCI device, it can't use PoD, so it needs 'maxmem' memory
to be started. Request that much from qmemman.
Note that this is independent of whether dynamic memory management is
enabled for the VM (the `service.meminfo-writer` feature). Even if the VM
was initially assigned maxmem memory, it can later be ballooned down.

QubesOS/qubes-issues#3207
2017-12-05 17:39:32 +01:00
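
The resulting rule, as a tiny illustrative helper (names are hypothetical):

    def initial_memory_request(memory, maxmem, has_pci_devices):
        # an HVM with a PCI device cannot use populate-on-demand, so request
        # its full maxmem up front; otherwise request the initial `memory`
        # and let qmemman balance the domain later
        return maxmem if has_pci_devices else memory
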