This code is unused now. Theoretically this is_outdated implementation
should be moved to FileVolume, but since we don't have a VM reference
there, it isn't possible to read the appropriate xenstore entry. As we're
phasing out the file pool, simply ignore it for now.
QubesOS/qubes-issues#2256
The LinuxKernel pool supports only read-only volumes, so save_on_stop=True
doesn't make sense. Make this explicit: raise NotImplementedError in that
case.
Also, migrate old configs where snap_on_start=True, but no source was
given.
QubesOS/qubes-issues#2256
This driver isn't used in the default Qubes 4.0 installation, but if we do
have it, let it follow the defined API and its own documentation. Also
explicitly reject unsupported operations:
- support only revisions_to_keep<=1, and do not support revert() anyway
(the previous implementation was wrong on so many levels...)
- use the 'save_on_stop'/'snap_on_start' properties directly instead of the
obsolete volume types
- don't call sudo - qubesd is running as root
- consistently use path, path_cow, path_source, path_source_cow
- don't call sudo - qubesd is running as root
- consistently use path, path_cow, path_source, path_source_cow
Also, add tests for the BlockDevice instance returned by
FileVolume.block_device().
QubesOS/qubes-issues#2256
Do not always use the pool named 'default'. Instead, have a global
`default_pool` property to specify the default storage pool.
Additionally add `default_pool_*` properties, one per VM volume, so
those can be set separately.
QubesOS/qubes-issues#2256
Those functions are coroutines anyway, so allow event handlers to be
coroutines too.
Some of this (`domain-create-on-disk`, `domain-remove-from-disk`) will
be useful for appmenus handling.
This will allow starting processes and calling RPC services in those
events. This is required for USB devices, which are attached using RPC
services.
Intentionally keep device listing events synchronous only - to
discourage putting long-running actions there.
This change also requires a non-async version of the attach method for
loading devices from qubes.xml - add `load_persistent` for this.
commit/recover/reset should really be handled in start/stop. Nothing
stops a specific pool implementation from defining such functions privately.
QubesOS/qubes-issues#2256
Always define those properties and always include them in the volume
config. Also simplify overriding the pool based on the volume type they
define: override the pool unless snap_on_start=True.
QubesOS/qubes-issues#2256
Since libvirt does provide an object for dom0 too, return it here.
It's much easier than special-casing AdminVM everywhere. And in fact it is
sometimes actually useful (for example attaching devices from/to dom0, or
adjusting memory).
The first operation returns a token, which can be passed to the second
one to actually perform the clone operation. This way the caller needs to
have power over both the source and destination VMs (or at least the
appropriate volumes), so it's easier to enforce an appropriate qrexec
policy.
The pending tokens are stored on the Qubes() instance (as QubesAdminAPI is
not persistent). It is a design choice to keep them in RAM only - they are
one-time use, and this way restarting qubesd is a simple way to invalidate
all of them. Otherwise we'd need some additional calls like CloneCancel or
such.
QubesOS/qubes-issues#2622
Add a convenient collection wrapper for easier retrieval of a selected
volume. A storage pool implementation may still provide only a volume
listing function (pool.list_volumes) or, additionally, an optimized
pool.get_volume.
This means it is both possible to iterate over volumes:
```python
for volume in pool.volumes:
    ...
```
And get a single volume:
```python
volume = pool.volumes[vid]
```
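For illustration, a minimal wrapper in this spirit could look like the sketch below - `list_volumes` and `get_volume` come from the description above, everything else is an assumption rather than the actual qubes.storage code:
```python
class VolumesCollection:
    """Expose a pool's volumes both as an iterable and by vid (sketch only)."""

    def __init__(self, pool):
        self._pool = pool

    def __getitem__(self, vid):
        # Use the optimized lookup if the pool implements one...
        if hasattr(self._pool, 'get_volume'):
            return self._pool.get_volume(vid)
        # ...otherwise fall back to scanning the full listing.
        for volume in self._pool.list_volumes():
            if volume.vid == vid:
                return volume
        raise KeyError(vid)

    def __iter__(self):
        return iter(self._pool.list_volumes())
```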
QubesOS/qubes-issues#2256
In the end the firewall is implemented as .Get and .Set methods operating
on rules, with the policy statically set to 'drop'. This allows atomic
firewall updates.
Since we already have appropriate firewall format handling in the
qubes.firewall module, reuse it from there, but adjust the code to be
prepared for potentially malicious input, and mark such variables with the
untrusted_ prefix.
There is also a third method, .Reload, which causes a firewall reload
without making any change.
QubesOS/qubes-issues#2622
Fixes QubesOS/qubes-issues#2869
There is a problem with having a separate default action ("policy") and
rules, because it isn't possible to set both of them atomically at the
same time.
To solve this problem, always have the policy set to 'drop' (as a safe
default), but by default have a single rule with action 'accept'.
Fixes QubesOS/qubes-issues#2869
Do this for all standard property types - even if other types do
additional validation, do not expose them to non-ASCII characters.
QubesOS/qubes-issues#2622
We've decided to make the VM name immutable. This is especially important
for the Admin API, where some parts (especially policy) are tied to the
VM name.
Now, to rename a VM, one needs to clone it under the new name (thanks to
LVM this is a very quick operation), then remove the old one.
Fixes QubesOS/qubes-issues#2868
meminfo (written by the VM) is expected to report KiB, but qmemman
internally uses bytes. Convert the units.
Also move the obscure unit conversion from is_meminfo_suspicious to a more
logical place in sanitize_and_parse_meminfo.
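The unit conversion itself is trivial; a hedged sketch (not the actual qmemman code, which validates the untrusted input more strictly):
```python
def meminfo_to_bytes(untrusted_meminfo: bytes) -> int:
    """Parse a VM-reported memory value (in KiB) and return bytes."""
    untrusted_value = untrusted_meminfo.strip()
    if not untrusted_value.isdigit():
        raise ValueError('suspicious meminfo')
    # The VM reports KiB; qmemman works internally in bytes.
    return int(untrusted_value) * 1024
```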
Check if the exit code retrieved from dom0 is really the expected one.
Fix a typo in test_065_qrexec_exit_code_vm (testvm1/testvm2), adjust for
reporting the remote exit code and remove expectedFailure.
QubesOS/qubes-issues#2861
Since the tests expose the qubesd socket, qvm-start-gui should handle
starting GUI daemons (and so the GUI session inside the VM). Add
synchronization with it using the qubes.WaitForSession service.
When a test expects to wait for a remote process, use vm.run_for_stdio.
Additionally, when the call fails, (stdout, stderr) is not assigned - use
the ones attached to the exception object instead.
Since run_for_stdio raises an exception for a non-zero exit code, it isn't
ignored anymore. So, check whether qrexec-client-vm returns the expected
value, instead of ignoring it as before.
QubesOS/qubes-issues#2861
The test suite creates some VMs and needs to pass knowledge about them to
the qrexec policy checker. This is done using the Admin API, so we need to
substitute qubesd with our own API server.
register_event_handlers is called early, when the libvirt connection may
not be established yet - especially with an empty qubes.xml. Do not skip
the automatic connection logic.
* qubesos/pr/111:
vm: drop 'internal' property
qmemman: make sure to release lock
qmemman: fix meminfo parsing for python 3
devices: drop 'data' and 'frontend_domain' fields, rename 'devclass' to 'bus'
* qubesos/pr/110:
storage: use direct object references, not only identifiers
vm: fix volume_config
storage/lvm: prefix VM LVM volumes with 'vm-'
storage: fix VM rename
Make qubes.NotifyTools reuse the logic of qubes.FeaturesRequest, then move
the actual request processing to the 'features-request' event handler. At
the same time, implement handling of the 'qrexec' and 'gui' feature
requests - allowing template features to be set when they weren't already
there.
Behavior change: a template is no longer allowed to change a feature value
(regardless of it being True or False). This means the user will always be
able to override what the template has set.
Drop DeviceInfo.data - a device extension should provide a subclass with
proper individual fields.
Drop DeviceAssignment.frontend_domain - this information is redundant: the
frontend domain is defined by where the DeviceAssignment is attached.
Rename DeviceCollection.devclass to bus - devclass is confusing here,
because this term is also used for the DeviceInfo subclass.
Reference objects, not their IDs - this way, when an object is modified,
the change is visible everywhere it is used. Main changes:
- volume.pool - a Pool object
- volume.source - a Volume object
Since a volume now holds a Pool object reference, move volume-related
functions into the Volume class (from the Pool class). This avoids the
horrible `storage.get_pool(volume).something(volume)` construct.
One issue here: since volume.source references a Volume object from a
different VM (the VM's template), VM load order is now important. Since we
don't have control over it, initialize vm.storage when needed - possibly
while initializing the storage of a different VM. Since we don't have
cycles in AppVM-TemplateVM dependencies, this is safe.
Also, since this commit, volume.source (if defined) always points at the
volume of the same name from the VM's template. Using volumes with
something else as a source is no longer supported.
QubesOS/qubes-issues#2256
- the kernel volume shouldn't have snap_on_start; it's a read-only volume
anyway
- the root volume of an AppVM should have a placeholder for 'source'
- the private volume of an AppVM should _not_ have a placeholder for
'source' (it's ignored anyway, because snap_on_start=False)
QubesOS/qubes-issues#2256
When a VM is renamed, only volume.vid gets updated, but not the other
attributes calculated from it. Convert them to dynamic properties so this
is no longer a concern.
QubesOS/qubes-issues#2256
With libvirt in place, this isn't enough - libvirt also keeps the VM
configuration in its memory, and adjusting xenstore doesn't change that.
In fact, changing xenstore behind its back makes things even worse in some
situations.
QubesOS/qubes-issues#1426
Re-init the volume config of all 'snap_on_start' volumes at template
change. For this, save the original volume config and re-use the
config_volume_from_source function introduced in the previous commit.
At the same time, forbid changing the template of a running AppVM or of
any DispVM.
QubesOS/qubes-issues#2256
The vm.kernel property has type 'str'. Putting None there causes a lot of
trouble: it gets encoded as 'None' in qubes.xml and then loaded back as the
string 'None', not the None value. Also, it isn't possible to assign a None
value to a str property through the Admin API.
kernel='' is equally good for specifying "no kernel from dom0".
QubesOS/qubes-issues#2622
While libvirt handles locking itself, there is also a Qubes-specific
startup part - especially starting qrexec-daemon and waiting until
qrexec-agent connects to it. When someone attempts to start the VM a
second time (or simply assumes it's already running), qrexec will not be
connected yet and the operation will fail. Solve the problem by wrapping
the whole vm.start() function with a lock, including the check whether the
VM is running and the wait for qrexec.
Also, do not throw an exception if the VM is already running.
This way, after a call to vm.start(), the VM will be started with qrexec
connected - regardless of who really started it.
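A rough sketch of that locking pattern, in modern async/await form; `DomainStub`, `_do_start` and `_wait_for_qrexec` are illustrative stand-ins, not the real QubesVM internals:
```python
import asyncio

class DomainStub:
    """Illustrative stand-in for the relevant part of QubesVM."""

    def __init__(self):
        self._startup_lock = asyncio.Lock()
        self._running = False

    def is_running(self):
        return self._running

    async def _do_start(self):
        self._running = True      # stand-in for the actual libvirt start

    async def _wait_for_qrexec(self):
        pass                      # stand-in for waiting on qrexec-agent

    async def start(self):
        # Concurrent callers serialize on the lock; the loser simply sees
        # the domain already running (no exception) and returns only once
        # qrexec is connected.
        async with self._startup_lock:
            if not self.is_running():
                await self._do_start()
            await self._wait_for_qrexec()
```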
Note that it will not solve the situation where someone checks manually
whether the VM is running, like:
    if not vm.is_running():
        yield from vm.start()
Such code should be changed to simply:
    yield from vm.start()
Fixes QubesOS/qubes-issues#2001
Fixes QubesOS/qubes-issues#2666
And place them in the /qubes-service/ QubesDB directory. This allows
extensions to easily store data not exposed to the VM, while still having
control over what the VM will see. At the same time, it stays compatible
with the existing services framework.
QubesOS/qubes-issues#1637
Setting a VMProperty to no VM should be encoded as the '' value (according
to VMProperty._none_value). But value validation rejected this value.
QubesOS/qubes-issues#2622
Implement this in two parts:
1. Permission checks and getting a path from the appropriate storage pool
2. The actual data import
The first part is done by qubesd in the standard way, but then, instead of
accepting all the data (which may be several GB), it returns a path to
which a shell script (in practice: a `dd` command) will write the data.
Then the script calls back to qubesd again to report success/failure, and
qubesd's response from that call is what actually gets returned to the
user.
This way we do not pass all the data through qubesd, but can still control
the process from there in a meaningful way. Note that the last part (the
second call to qubesd) may perform all kinds of verification (like a
signature check on the data) and can also prevent the VM from starting
from an unverified image (by also hooking the domain-pre-start event).
QubesOS/qubes-issues#2622
Allow importing not only from another volume, but also from raw data. In
practice, for all currently implemented storage pools, this is the same as
Pool.export, because the path returned there is read-write. But let's not
abuse this fact; some future implementation may need different methods.
QubesOS/qubes-issues#2622
QubesOS/qubes-issues#2256
Remove debug prints; log the full traceback (of a handled exception) only
when debug mode is enabled (--debug, introduced in this commit too).
The --debug option also enables sending tracebacks to the API clients.
QubesOS/qubes-issues#853
Accessing a non-existing property is a common action (for example,
hasattr() does try to access the property). So, introduce a specific
exception, inheriting from AttributeError. It behaves very similarly to
standard (non-Admin-API) property access.
This exception is reported to the Admin API user, so it will be possible
to distinguish between a non-existing property and access denied. But this
isn't any significant information leak, as the list of valid properties is
publicly available in the source code.
QubesOS/qubes-issues#853
Use '<option name="option_name">option_value</option>' instead of
'<options option_name="option_value"/>'. It's more consistent with the
rest of qubes.xml - have one thing per element.
Also, add an options deserialization test.
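A minimal sketch of the new serialization, using lxml; the parent element name is a placeholder, and the exact structure in qubes.xml may differ:
```python
import lxml.etree

def options_to_xml(options):
    """Serialize an options dict as one <option> element per entry."""
    parent = lxml.etree.Element('device')          # placeholder parent
    for name, value in sorted(options.items()):
        option = lxml.etree.SubElement(parent, 'option')
        option.set('name', name)
        option.text = str(value)
    return parent

# b'<device><option name="read-only">yes</option></device>'
print(lxml.etree.tostring(options_to_xml({'read-only': 'yes'})))
```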
When the libvirt domain is not defined, it certainly isn't running.
This commit fixes the case where vm.is_running() appears anywhere in the
code used during libvirt XML building. In this case, it's mostly about the
PCI device description for libvirt.
Make it easy to retrieve a DeviceInfo object from a DeviceAssignment
object. The only missing piece of information for that is the device
class, so add it. Make it optional, as it can be filled in on demand when
passing the object through DeviceCollection (either by listing devices, or
attaching/detaching).
This is mostly to ease handling options in the libvirt template - to get
them, you need to use `assignments()` instead of `persistent()` or
`attached()`, but there was no _simple_ way of getting the actual device
object.
This also makes the DeviceCollection._device method unnecessary.
filter() in Python 3 returns an iterator that can be consumed only once.
This is about the list of existing domains - store it as a list, otherwise
domains will "disappear" after being discovered.
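The pitfall in isolation (plain Python 3, not the actual qubesd code):
```python
names = ['dom0', 'sys-net', 'work']

existing = filter(lambda name: name != 'dom0', names)
print(list(existing))   # ['sys-net', 'work']
print(list(existing))   # [] -- the filter object is already exhausted

# Store the result as a list up front so it can be iterated repeatedly:
existing = list(filter(lambda name: name != 'dom0', names))
```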
qubesd does start other daemons - make sure they will not try to signal
systemd about it. In some cases such daemons (qubesdb-daemon) behave
differently based on this variable.
Do not initialize it only at qubes.xml load time; re-read the vm.kernel
property each time the path is constructed. While at it, add support for
vm.kernel set to 'None' - simply don't include modules.img (xvdd) then.
When a device extension does not report some "persistent" device as
currently attached, still return it, as it will be attached at the next
domain startup. The user can distinguish such devices by their
frontend_domain being None (or another VM).
Also, return a set from DeviceCollection.assignments().
While at it, adjust the implementation to the specification: tags don't
have a value, only one bit of information (present/not present).
Fixes QubesOS/qubes-issues#2686
This requires creating an LVM volume group, so create one based on a loop
device in /tmp.
This is rather rough, but if any of this fails, run the tests anyway -
they will simply skip the LVM tests.
A device ident may contain only characters allowed in a qrexec argument.
This will allow using it directly as a qrexec argument in the Attach/Detach
methods.
This also means the PCI extension will need to be updated (it uses ':' in
the ident).
QubesOS/qubes-issues#853
This is required to get a shutdown notification when the shutdown wasn't
initiated by qubesd (for example, a 'poweroff' command inside the VM).
The libvirt event loop implementation must be registered before making a
connection to libvirt, so move it to the beginning of main().
For now, only the 'domain-shutdown' event is emitted.
The user session may not be started at all (for example, no Qubes packages
are installed there), so don't block on it in all cases. That would also
prevent running the 'qubes.WaitForSession' service...
In practice, the default value of the 'gui' argument is False, so in most
cases the user session will be ignored. Which doesn't matter in most
cases - especially for services called by qubesd.
Keep it uniform - the QubesVM() object is responsible for handling
vm.dir_path, while Storage() is responsible for handling disk volumes
(which may live in that directory).
QubesOS/qubes-issues#2256
Allow a specific pool implementation to be asynchronous. The vm.storage.*
methods will detect whether the given implementation is synchronous or
asynchronous and act accordingly.
It is then up to the pool implementation how asynchrony is achieved. Do
not force threads (`run_in_executor()`) on it, but a pool implementation
is free to use threads if it considers them safe in a particular case.
This commit does not touch any pool implementation - all of them are
still synchronous.
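A sketch of how such detection can work - a generic helper built on asyncio.iscoroutine; the real vm.storage code may be structured differently:
```python
import asyncio

async def coro_maybe(value):
    """Await the value if a coroutine was returned, else pass it through.

    Lets a caller invoke a pool method without knowing whether that
    particular implementation is synchronous or asynchronous.
    """
    if asyncio.iscoroutine(value):
        return await value
    return value

# Usage sketch: works for both 'def create(...)' and 'async def create(...)'.
#   volume = await coro_maybe(pool.create(volume_config))
```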
QubesOS/qubes-issues#2256
Split the actual filtering done by mgmt-permission: events into calling
the event and applying the returned filters. This way the filtering done
in the mgmt.Events handler can reuse the same function.
...in the core-mgmt-client repository. qubesd isn't the right place to
start GUI applications, which will be even more important when the GUI
domain is something other than dom0.
QubesOS/qubes-issues#833
Some methods inherited from dict (pop and setdefault here) are covered
by placeholders raising NotImplementedError. Let's fix their signatures
(to match those of dict) so we really get NotImplementedError instead of
TypeError.
This allows avoiding a race condition between registering event handlers
and performing some action. The important thing is the event sent right
after registering event handlers in qubesd. This means state changes (like
VM start/stop) after the 'connection-established' event will be included
in the event stream.
QubesOS/qubes-issues#2622
The Management API gives access only to qubes.property. This is actually a
good thing, so instead of extending it to also access builtins.property,
add a simple decorator to define a read-only, stateless qubes.property.
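The intended behaviour can be illustrated with a plain descriptor; this is only a stand-in, since the real decorator wraps the getter in a qubes.property so it stays visible to the Management API:
```python
def stateless_property(func):
    """Expose a computed, read-only value (illustrative stand-in only)."""
    class _ReadOnly:
        __doc__ = func.__doc__

        def __get__(self, instance, owner=None):
            if instance is None:
                return self
            return func(instance)

        def __set__(self, instance, value):
            raise AttributeError(
                'property {!r} is read-only'.format(func.__name__))

    return _ReadOnly()

class Example:
    @stateless_property
    def xid(self):
        """Some value computed on the fly."""
        return 42
```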
QubesOS/qubes-issues#2622
The qvm-ls tool (like all other tools) will be accessing properties
through the API, so there is no need (nor sense) for these tool-specific
attributes in qubes.property. The only one really used was ls_width, and
in fact it made the output unnecessarily wide.
The tool itself has already been moved to the core-mgmt-client repository.
QubesOS/qubes-issues#853
Standard methods return only one value, after the operation is completed,
but event-related methods may return multiple values during the method's
execution. Provide a callback for such cases.
Also, according to the specification, avoid sending both event and
non-event values.
QubesOS/qubes-issues#2622
Allow a method handler to decide whether the operation can be cancelled.
If so, when the connection to qubesd is terminated (and
protocol.connection_lost gets called), the operation is cancelled using
the standard asyncio mechanism - in which case asyncio.CancelledError is
raised inside the method handler. This needs to be explicitly enabled,
because cancellable methods are much harder to write while keeping the
system state consistent.
Caveat: protocol.connection_lost is called only when trying to send some
data over an already-terminated connection. This makes the whole mechanism
useful only for events. Otherwise, by the time some data is sent (and the
broken connection possibly detected), the operation is already completed.
QubesOS/qubes-issues#2622
* kalkin/device-assignments: (21 commits)
PCI extension cache PCIDevice objects
Make pylint ♥
Fix pylint warning no-else-return
Fix pylint warning len-as-conditional
device-list-attached event returns a dev/options tupples list
DeviceAssignment options are now a dict
Remove WrongAssignment exception
Rename qubes.devices.BlockDevice to qubes.storage.BlockDevice
Update relaxng devices option element
Fix tests broken by the new assignment api
Fix qubes.tests.devices
Fix pci device integration tests
qvm-device add support for assignments
Update ext/pci to new api
BaseVM add DeviceAssignment xml serialization
Migrate DeviceCollection to new API
Add PersistentCollection helper to qubes.devices
Add DeviceAssignment
qvm-device validates device parameters
qvm-device fix handling of non block devices
...
* core3-policy:
Make pylint happy
tests: disable GTK tests on travis
qubespolicy: make pylint happy
qubespolicy: run GUI code inside user session and expose it as dbus object
tests: plug rpc-window tests into main test runner
qubespolicy: plug GUI code into qrexec-policy tool
rpm: add rpc-window related files to package
rpc-window: adjust for qubespolicy API
rpc-window: use pkg_resources for glade file
rpc-window: use 'edit-find' icon if no other is found
rpc-window: adjust for python3
rpc-window: code style adjustments
Import new rpc confirmation window code
qubesd: add second socket for in-dom0 internal calls
policy: qrexec-policy cli tool
tests: qubespolicy tests
qubespolicy: initial version for core3
vm/appvm: add dispvm_allowed property
dispvm: don't load separate Qubes() instance when handling DispVM
This socket (and its commands) is not exposed to untrusted input, so there
is no need for extensive sanitization. Also, there is no need to provide a
stable API here, as these methods are used internally only.
QubesOS/qubes-issues#853
0) All those methods are now awaitable rather than synchronous.
1) The base method is run_service(). The method run() was rewritten
using run_service('qubes.VMShell', input=...). There is no provision
for running plain commands.
2) Get rid of passio*= arguments. If you'd like to get another return
value, use another method. It's as simple as that.
See:
- run_service_for_stdio()
- run_for_stdio()
Also gone are wait= and localcmd= arguments. They are of no use
inside qubesd.
3) The qvm-run tool and tests are left behind for now and will be fixed
later. This is because they also need an event loop, which is not
implemented yet.
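A usage sketch, assuming the behaviour described above (awaitable call, bytes in and out, exception on a non-zero exit status) and assuming run_for_stdio accepts a shell command the way run() does:
```python
async def get_vm_hostname(vm):
    # run_for_stdio is awaitable and raises on a non-zero exit status;
    # stdout and stderr come back as bytes.
    stdout, _stderr = await vm.run_for_stdio('hostname')
    return stdout.decode('ascii').strip()
```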
Fixes QubesOS/qubes-issues#1900
QubesOS/qubes-issues#2622
- Get rid of @not_in_api, exchange it for an explicit @api() decorator.
- The old @no_payload decorator becomes a (keyword-only) argument.
- Factor out an AbstractQubesMgmt class to be a base class for other mgmt
backends.
- Use async def instead of @asyncio.coroutine.
QubesOS/qubes-issues#2622
This also means we don't check, at this stage, whether a VM with the given
name (in the case of VMProperty) exists in the system. But this is OK;
let's not duplicate the work of the property setter.
QubesOS/qubes-issues#2622
If kwargs contains a dict as one of its values, it isn't hashable and
can't be used as a value in a frozenset/tuple. Convert such values into
frozenset(dict.items()). Only one (more) level is supported, but it should
be enough.
Solution from http://stackoverflow.com/a/13264725
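The conversion in isolation, following the approach from the link above (one extra level only, as noted):
```python
def hashable_kwargs(kwargs):
    """Turn kwargs into something usable inside a frozenset/tuple key.

    Only one (more) level of nesting is handled, as described above.
    """
    return frozenset(
        (key, frozenset(value.items()) if isinstance(value, dict) else value)
        for key, value in kwargs.items())

cache_key = hashable_kwargs({'name': 'work', 'options': {'rw': True}})
```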
In theory any call could modify the config (through events), but let's
keep writes to qubes.xml low. In any case, qubes.xml will eventually be
written (either at the next config-modifying call, or at daemon exit).
Sanitization of the input value is tricky here, and at the same time very
important. If the property defines a value type (and it's something more
specific than 'str'), use that. Otherwise allow only printable ASCII
characters, and let the appropriate event and setter handle the value.
At this point I've reviewed all QubesVM properties in this category and
added appropriate setters where needed.
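The printable-ASCII fallback can be as simple as the following sketch; the real Admin API sanitization is property-aware and stricter:
```python
def sanitize_fallback(untrusted_value: str) -> str:
    """Allow only printable ASCII; everything else is rejected."""
    if not all(0x20 <= ord(ch) <= 0x7e for ch in untrusted_value):
        raise ValueError('invalid characters in property value')
    return untrusted_value
```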
QubesOS/qubes-issues#2622
Don't allow characters potentially interfering with qrexec. To be on the
safe side, allow only alphanumeric characters plus a very few selected
punctuation characters.
Split it into two functions: validate_name - context-less verification -
and the actual _setter_name, which performs additional verification in the
context of the actual VM.
Switch to qubes.exc.* exceptions where appropriate.
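For illustration only - the exact character set and length limit enforced by the real validate_name may differ from this sketch:
```python
import re

# Alphanumerics plus a few selected punctuation characters (illustrative set).
_NAME_RE = re.compile(r'\A[a-zA-Z][a-zA-Z0-9_-]*\Z')

def validate_name(untrusted_name: str) -> str:
    """Context-less part of the check: syntax only, no lookup of the VM."""
    if len(untrusted_name) > 31 or not _NAME_RE.match(untrusted_name):
        raise ValueError('invalid VM name')
    return untrusted_name
```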
This reverts commit 0f1672dc63.
Bring it back. Let's not revert the whole feature just because the
required package exists only in qubes-builder, not in some online
repository.
Also, the earlier revert didn't go as planned - there was a reference to a
'passphrase' local variable, but it was never assigned any value.
Cc: @woju
This will allow more flexible API usage, especially with the mgmt API -
we need to use the VM type as a string there.
We don't lose any flexibility here - VM classes need to be uniquely
identified by a string (used in qubes.xml) anyway.
Use the right cow image and apply a second layer to provide read-write
access. The correct setup is:
- base image + base cow -> read-only snapshot (base changes "cached"
until committed)
- read-only snapshot + VM cow -> read-write snapshot (changes discarded
after VM shutdown)
This way, even a VM without Qubes-specific startup scripts can benefit
from Template VMs, while VMs with Qubes-specific startup scripts may still
see the original root.img content (for possible signature verification,
once the storage domain gets implemented).
QubesOS/qubes-issues#2256
The QubesDB daemon no longer removes the socket created by a new instance,
so one part of the VM restart race condition is solved. The only remaining
part is to ensure that we really connect to the new instance, instead of
talking to the old one (soon to be terminated).
Fixes QubesOS/qubes-issues#1694
None of the properties set there does any "dangerous thing" to filesystem
permissions (at least for now), so do not require it. This is mostly to
keep compatibility with %post rpm scripts (kernel-qubes-vm at least).
QubesOS/qubes-issues#2412
This tool is by design called as root, so try to:
- switch to a normal user if possible
- fix file permissions afterwards if not
QubesOS/qubes-issues#2412
There was a comment '# Set later', but the values were actually never set.
This broke adding a just-installed template (qvm-template-postprocess).
QubesOS/qubes-issues#2412
When the system is installed with an LVM thin pool, it should be used by
default. But let's keep the file-based one for /var/lib/qubes, for some
corner cases, migration etc.
QubesOS/qubes-issues#2412
This is intended to be called to finish template installation/removal.
A template RPM package is basically a container for root.img, nothing
more. The other parts need to be generated after root.img extraction.
Previously this was open-coded in the rpm post-install script, but let's
keep it as a qvm tool to ease supporting multiple versions in the template
builder.
VM files may already be removed. Don't fail on this while removing a VM;
it's probably the reason why the domain is being removed.
The qvm-remove tool has its own guard for this, but it isn't enough - if
rmtree(dir_path) fails, storage.remove() would not be called, so non-file
storage would not be cleaned up.
This is also needed to correctly handle template reinstallation - where
the VM directory is moved away in order to call create_on_disk again.
QubesOS/qubes-issues#2412
'-' is an invalid character in a Python identifier, so all the properties
use '_'. But in previous versions the qvm-* tools accepted names with '-',
so let's not break this.
QubesOS/qubes-issues#2412
The /var/log/qubes directory has setgid set, so all the files will be
owned by the qubes group (that's OK), but there is no enforcement of
creating them group-writable, which undermines the group ownership (logs
created by root would not be writable by a normal user).
QubesOS/qubes-issues#2412
In the case of LVM (at least), the "internal" flag is initialized only
when listing volumes attached to a given VM, but not when listing them
from the pool. This looks like a limitation (bug?) of the pool driver; a
much nicer fix is to handle the flag in the qvm-block tool (which lists
VMs' volumes anyway) than in the LVM storage pool driver (which would need
to keep a second copy of the volumes list - just like the file driver).
QubesOS/qubes-issues#2256
There are multiple cases when snapshots are created inconsistently, for
example:
- the "-back" snapshot created from the "new" data, instead of the old one
- "-snap" created even when volume.snap_on_start=False
- probably more
Fix this by following volume.snap_on_start and volume.save_on_stop
directly, instead of using the abstraction of the old volume types.
QubesOS/qubes-issues#2256
Just calling pool.init_volume isn't enough - a lot of code depends on
additional data loaded into the vm.storage object. Provide a convenient
wrapper for this.
At the same time, fix loading extra volumes from qubes.xml - don't fail on
a volume not mentioned in the initial vm.volume_config.
QubesOS/qubes-issues#2256
- add the missing lvm remove call when committing changes
- delay creating the volatile image until domain startup (it will be
created then anyway)
- reset the cache only when anything has really changed
- attach the VM to the volume (snapshot) created for its runtime - to not
expose changes (for example in the root volume) to child VMs until
shutdown
QubesOS/qubes-issues#2412
QubesOS/qubes-issues#2256
The wrapper doesn't do anything other than translate command parameters,
but its load time is significant (mostly because of Python imports). Since
we can't use the Python LVM API as a non-root user anyway, let's drop the
wrapper and call `lvm` directly (or through sudo when necessary).
This makes VM startup much faster - storage preparation is down from over
10s to about 3s.
QubesOS/qubes-issues#2256
...instead of a manual copy in Python. dd is much faster, and when used
with `conv=sparse` it will correctly preserve a sparse image.
QubesOS/qubes-issues#2256
Set the parameters for possibly hiding the domain's real IP before
attaching the network to it; otherwise we'd have a race condition with the
vif-route-qubes script.
QubesOS/qubes-issues#1143
This is the IP known to the domain itself and to downstream domains. It
may be different from the one seen by its upstream domain.
Related to QubesOS/qubes-issues#1143
This helps hide the VM IP for anonymous VMs (Whonix) even when some
application leaks it. The VM will know only some fake IP, which should be
set to something as common as possible.
The feature is mostly implemented on the (Proxy)VM side, using NAT in a
separate network namespace. The core here only passes arguments to it.
It is designed so that multiple VMs can use the same IP and still not
interfere with each other. Even more: it is possible to address each of
them (using their "native" IP), even when multiple of them share the same
"fake" IP.
The original approach (marmarek/old-qubes-core-admin#2) passed network
script arguments by appending them to the script name, but libxl in Xen >=
4.6 fixed that side effect and it isn't possible anymore. So use QubesDB
instead.
From the user's POV, this adds 3 "features":
- net/fake-ip - the IP address visible in the VM
- net/fake-gateway - the default gateway in the VM
- net/fake-netmask - the network mask
The feature is enabled if net/fake-ip is set (to some IP address) and is
different from the VM's native IP. All of those "features" can be set on
the template, to affect all of its VMs.
Firewall rules etc. in the (Proxy)VM should still be applied to the VM's
"native" IP.
Fixes QubesOS/qubes-issues#1143
Core3 keeps the information about whether a property has its default value
for all properties (not only a few like netvm or kernel). Try to use this
feature as much as possible.
When the user includes/excludes some VMs for restoration, it may be
necessary to fix the dependencies between them (for example when the
default template is no longer going to be restored).
Also fix handling of conflicting names.
Now that the file name is also integrity-protected (prefixed to the
passphrase), we can make sure that input files are given in the same
order, and are parts of the same VM.
QubesOS/qubes-issues#971
This prevents swapping parts of a backup of the same VM between different
backups made by the same user (or actually: with the same passphrase).
QubesOS/qubes-issues#971
`openssl dgst` and `openssl enc`, used previously, handle key stretching
poorly - in the case of `openssl enc` the encryption key is derived using
a single MD5 iteration, without even any salt. This hardly prevents
brute-force or even rainbow-table attacks. To make things worse, the same
key is used for encryption and integrity protection, which eases brute
force even further.
All this is still about brute-force attacks, so when using a long,
high-entropy passphrase it should still be relatively safe. But let's do
better.
According to the discussion in QubesOS/qubes-issues#971, the scrypt
algorithm is a good choice for key stretching (it isn't the best of all
existing ones, but a good one and widely adopted). At the same time, let's
switch away from the `openssl` tool, as it is very limited and apparently
not designed for production use. Use the `scrypt` tool, which is very
simple and does exactly what we need - encrypt the data and
integrity-protect it. Its archive format has its own (simple) header with
the data required by the scrypt algorithm, including the salt. Internally
the data is encrypted with AES256-CTR and integrity-protected with
HMAC-SHA256. For details see:
https://github.com/tarsnap/scrypt/blob/master/FORMAT
This means a change of backup format. Mainly:
1. The HMAC is stored in the scrypt header, so don't use a separate file
for it. Instead, store the data in files with the `.enc` extension.
2. For compatibility keep `backup-header` and `backup-header.hmac`, but
`backup-header.hmac` is really an scrypt-encrypted version of
`backup-header`.
3. For each file, prepend its identifier to the passphrase, to
authenticate the filename itself too. Having this, we can guard against
reordering archive files within a single backup and across backups. This
identifier is built as:
backup ID (from backup-header)!filename!
For backup-header itself, there is no backup ID (just 'backup-header!').
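The identifier construction can be sketched as follows (building the per-file scrypt passphrase only; invoking the `scrypt` tool itself is omitted):
```python
def scrypt_passphrase(passphrase, filename, backup_id=None):
    """Bind the passphrase to a specific file of a specific backup.

    For backup-header itself there is no backup ID yet, so only
    'backup-header!' is prepended, as described above.
    """
    if backup_id is None:
        prefix = '{}!'.format(filename)
    else:
        prefix = '{}!{}!'.format(backup_id, filename)
    return prefix + passphrase
```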
Fixes QubesOS/qubes-issues#971
Have a generic `handle_streams` function instead of `wait_backup_feedback`
with its open-coded process names and manual iteration over them.
No functional change, besides a minor logging change.
Use the just-introduced tar writer to archive the contents of LVM volumes
(or, more generally, block devices). Place them as 'private.img' and
'root.img' files in the backup - just like in the old format. This
requires support for replacing the file name in the tar header - another
thing trivially supported with the tar writer.
tar can't write an archive with the _contents_ of a block device. We need
this to back up LVM-based disk images. To avoid dumping the image to a
file first, create a simple tar archiver just for this purpose.
Python is not the fastest possible technology - it's 3 times slower than
an equivalent written in C. But it's much easier to read, much less
error-prone, and still processes a 1GB image in under 1s of CPU time
(leaving aside actual disk reads). So, it's acceptable.
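The idea can be illustrated with the standard tarfile module; the in-tree archiver is a small hand-rolled implementation, but the principle - stream the device contents under a replaced file name - is the same:
```python
import os
import tarfile

def archive_block_device(device_path, archive_path, arcname):
    """Write the *contents* of a block device into a tar archive."""
    with open(device_path, 'rb') as dev, \
            tarfile.open(archive_path, 'w') as tar:
        info = tarfile.TarInfo(name=arcname)     # e.g. 'private.img'
        info.size = dev.seek(0, os.SEEK_END)     # device size in bytes
        dev.seek(0)
        tar.addfile(info, dev)                   # streams info.size bytes
```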
Old backup metadata (the old qubes.xml) does not contain info about
individual volume sizes. So, extract it from the tar header (using verbose
output during restore) and resize the volume accordingly.
Without this, restoring volumes larger than the default would be
impossible.
To ease all this, rework the restore workflow: first create the QubesVM
objects and all their files (as for a fresh VM), then override them with
data from the backup - possibly redirecting some files to a new location.
This allows generic code to create LVM volumes and then only restore their
content.
1. Add a helper function on vm.storage. This is the equivalent of:
   vm.storage.get_pool(vm.volumes[name]).export(vm.volumes[name])
2. Make sure the path returned by `export` on an LVM volume is accessible.
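A sketch of such a helper, written as a plain function around the expression quoted above (the actual method signature may differ):
```python
def export_volume(vm, name):
    """Equivalent of the quoted expression, wrapped as a helper."""
    volume = vm.volumes[name]
    return vm.storage.get_pool(volume).export(volume)
```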
First part - handling firewall.xml and rule formatting.
Specification at https://qubes-os.org/doc/vm-interface/
TODO (for dom0):
- plug it into the QubesVM object
- expose rules in QubesDB (including reloading)
- drop the old functions (vm.get_firewall_conf etc.)
QubesOS/qubes-issues#1815
Instead of an excerpt from /proc/meminfo, use just one integer. This makes
qmemman handling much easier and eases implementation for non-Linux OSes
(where /proc/meminfo doesn't exist).
For now, also keep support for the old format.
Fixes QubesOS/qubes-issues#1312
There is no point in changing the *public API* just for the sake of
change, without any better reason. It turned out most of those settings
will be the same in Qubes 4.0, so keep the names the same.
This reverts commit 2d6ad3b60c.
QubesOS/qubes-issues#1812
This is a migration of core2 commits:
commit d0ba43f253
Author: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Date: Mon Jun 6 02:21:08 2016 +0200
core: start guid as normal user even when VM started by root
Another attempt to avoid permissions-related problems...
QubesOS/qubes-issues#1768
commit 89d002a031
Author: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Date: Mon Jun 6 02:19:51 2016 +0200
core: use runuser instead of sudo for switching root->user
There are problems with using sudo in early system startup
(systemd-logind not running yet, pam_systemd timeouts). Since we don't
need a full session here, runuser is good enough (even better: faster).
commit 2265fd3d52
Author: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Date: Sat Jun 4 17:42:24 2016 +0200
core: start qubesdb as normal user, even when VM is started by root
On VM start, the old qubesdb-daemon is terminated (if still running). In
practice this happens only at VM restart (shutdown and quickly start
again). But in that case, if the VM was started by root, such an operation
would fail.
So when the VM is started by root, make sure that qubesdb-daemon will be
running as a normal user (the first user in group 'qubes' - there should
be only one).
Fixes QubesOS/qubes-issues#1745
Commit from core2:
commit 94d52a13e7
core: adjust guid parameters when running on KDE5
On KDE5 a native decoration plugin is used, and it requires special
properties to be set (instead of `_QUBES_VMNAME` etc.).
Special care needs to be taken when detecting the environment, because
environment variables aren't good enough - this script may be running
with a cleared environment (through sudo, or from systemd). So check the
properties of the X11 root window.
QubesOS/qubes-issues#1784
* core3-devices:
Fix core2migration and tests for new devices API
tests: more qubes.devices tests
qubes/ext/pci: implement pci-no-strict-reset/BDF feature
qubes/tools: allow calling qvm-device as qvm-devclass (like qvm-pci)
qubes: make pylint happy
qubes/tools: add qvm-device tool (and tests)
tests: load qubes.tests.tools.qvm_ls
tests: PCI devices tests
tests: add context manager to catch stdout
qubes/ext/pci: move PCI devices handling to an extension
qubes/devices: use more detailed exceptions than just KeyError
qubes/devices: allow non-persistent attach
qubes/storage: misc fixes for VM-exposed block devices handling
qubes: new devices API
Fixes QubesOS/qubes-issues#2257
Instead of the old per-VM flag 'pci_strictreset', implement this as a
per-device flag using features. To not fail on a particular device
assignment, set 'pci-no-strict-reset/DEVICE-BDF' to True - for example
'pci-no-strict-reset/00:1b.0'.
QubesOS/qubes-issues#2257
Implement the required event handlers according to the documentation in
qubes.devices.
A modification of qubes.devices.DeviceInfo is needed to allow dynamic,
read-only properties.
QubesOS/qubes-issues#2257
Add a 'backenddomain' element when the source (not target) domain is not
dom0. Fix the XML element name. Actually set volume.domain when listing
VM-exposed devices.
QubesOS/qubes-issues#2256
Allow a device plugin to list attached and available devices. Enforce at
the API level that every device is exposed by some domain.
This commit only changes the devices API; it does not update existing
users (pci) yet.
QubesOS/qubes-issues#2257
This script should run as fast as possible, so avoid importing a large
module. In fact, the only thing used was the argparse wrapper, so switch
to the standard one and drop the aliases.
QubesOS/qubes-issues#2256
Some tests do not apply, as there is no savefile and attribute propagation
is much simpler. Dropped tests:
- test_000_firewall_propagation
- test_001_firewall_propagation
- test_000_prepare_dvm
QubesOS/qubes-issues#2253
- fix assigning the 'template' property - do not do it if the VM already
has it set
- cap default maxmem at 4000, as we clamp it to 10*memory anyway (and the
default memory is 400)
- DispVM is no longer a special case for storage
- Add missing 'rw=True' for the volatile volume
- Handle storage initialization (copy&paste from AppVM)
- Clone properties from the DispVM template
QubesOS/qubes-issues#2253
It is useful in some cases to prevent talking to the hypervisor.
Warning - it makes sense only when the action does not access any runtime
VM status. For example, running the domain will fail, but changing its
properties should work.
Do not access vm.libvirt_domain after it has already been removed - this
would redefine it again in libvirt, just to undefine it a moment later. On
the other hand, a few lines below there is a fallback libvirt cleanup, in
case the proper one doesn't work.
Since "qubes: fix event framework", handlers from extensions looks the
same as from the VM class itself, so it isn't possible to order them
correctly. Specification says:
For each class first are called bound handlers (specified in class
definition), then handlers from extensions. Aside from above,
remaining order is undefined.
So, restore this property, which is later correctly used to order
handlers.
When properly set, applications will have a chance to automatically
detect HiDPI and act accordingly. This is the case for the Fedora 23
template and GNOME apps (maybe even everything built on top of GTK).
But for privacy reasons, don't provide the real values, only approximate
ones. Give enough information to distinguish DPI above 150, 200 and 300.
This is a compromise between privacy and HiDPI support.
QubesOS/qubes-issues#1951
This commit is migrated from gui-daemon repository
(dec462795d14a336bf27cc46948bbd592c307401).
Now a failure to load external tests shows in which entry point the error
happened, together with a useful traceback. The traceback extends from the
"try" statement down to the actual error line, but it does not include the
frames above, i.e. from the invocation to the load_tests routine. This is
a limitation of Python itself and usually not a problem.
This directory is not only for disk images (in fact, disk images may be
elsewhere depending on the chosen volume pool), so it is cleaner to handle
(create/remove) it directly in the QubesVM class.
1. It is not yet clear whether dispvm_netvm will be implemented in core3,
but probably not.
2. Remove tests for setting memory/CPU above the host resources -
rejecting those values at property-set time would break backup restore on
some machines (when migrating from a bigger to a smaller system).
In some places the full volume object was used, in others just the file
path. Since this function is also used in some volume init/teardown, use a
path everywhere.
To successfully load all the data, proceed in order:
- set app.default_kernel
- load all templates
- set app.default_template
- load other VMs
- update network dependencies between VMs
- set other global properties
Apparently the most important (the only?) property required in offline
mode is "is_running". So let's patch it to return False and make sure any
other libvirt usage results in a failure.
Or maybe it would be better to simply return False from vm.is_running when
the libvirt connection fails? But then it would not be possible to use
offline mode and have a (probably unrelated) libvirtd running at the same
time.
Fixes QubesOS/qubes-issues#2008
Check vm.template directly, throwing AttributeError when not found. There
may be some value in converting it to a more descriptive error, but since
that's mostly for internal users (not user-facing actions), don't bother
for now.
QubesOS/qubes-issues#1842
It may make sense to create a 'snapshot' volume out of an existing
'snapshot', not only out of an 'origin'. In practice it will behave
exactly the same as a 'snapshot' connected directly to the 'origin'.
QubesOS/qubes-issues#866
- Use full import paths in qvm-pool
- Add, Remove, Info and List options set `Namespace.command`. This fixes a
crash when `-o dir_path=/mnt/foo` is specified after `-a foo xen`.
- Remove `_List`
- Remove the 'added pool' and 'removed pool' messages. Unix tools are quiet.
- qvm-pool calls app.save()
- Rename create_parser to get_parser
- Rename local_parser variables to just parser
- qvm-pool uses print_table
- Remove the old qvm-remove
- Remove a log line from Storage, because it prints confusing lines, like:
Removing volume kernel: /var/lib/qubes/vm-kernels/4.1.13-6/modules.img
Add the ability to handle commands having subcommands, like `qvm-block`.
Split the ArgumentCheckVisitor into an OptionsCheckVisitor and a
SubCommandCheckVisitor. The OptionsCheckVisitor checks options given
in a section named 'Options' (case insensitive), while the
SubCommandCheckVisitor dispatches on a section named 'Commands' (case
insensitive).
This also fixes finding undocumented command arguments. The previous
solution with depart_document did not work: NodeVisitor does not dispatch
to depart_document() even if it's mentioned in the documentation.
This commit eliminates import statements happening in the middle of the
file (between two class definitions). The cycles are still there. The
only magic module is qubes itself.
This is squashed woju/qubes-core-admin#8 by @kalkin
- LinuxKernel.volumes() lists all available kernels
- LinuxKernel use kernel version as vid
- LinuxKernel add docstrings
- Linux.kernel use os.listdir instead of os.walk
- LinuxKernel dynamically list available kernels
- Remove all *_dev_config methods
- Checks if a storage image exists moved to XenPool
- Storage.remove wraps Pool.remove()
- Stop volumes on domain shutdown/kill
- Warn when using deprecated methods
This makes typos much easier to find (also using pylint and the like).
While at it, also remove the explicit appmenus backup, as it should be
provided by the appmenus extension.
Handle them the same way - as individual files, not the whole directory
for templates.
Also don't back up the obsolete 'kernels' subdir - it isn't supported in
core3.
- Don't use catch-all except statement.
- Use str.format instead of "%" operator.
- Use static methods where applicable.
- Remove unused local variables.
- Don't shadow variables from outer scope
By default an AppVM is template-based, which means vm.root_img points at
the default template's root image. Change this to StandaloneVM to have an
independent root.img.
Introduce two main classes, Backup and BackupRestore, for storing the
state of the desired operation, plus a simple interface to adjust
parameters.
(Almost) no functional change.
QubesOS/qubes-issues#1213
QubesOS/qubes-issues#1214
This allows the user to start a VM based on an "old" system (from R3.x) in
R4.0 - for example after restoring from a backup, or after migration. This
also makes the upgrade instructions much easier - no need for complex
recovery instructions if one upgrades dom0 before upgrading all the
templates.
QubesOS/qubes-issues#1812
- introduce 'firewall-changed' event
- add reload_firewall_for_vm stub function
Should that function be private, called only from appropriate event
handlers?
QubesOS/qubes-issues#1815
It may be (and by default is) a path relative to the VM directory.
This code will be gone in the final version, after merging the firewall
configuration into qubes.xml. But for now, have something testable.
It is a much less error-prone way. The previous approach didn't work
because VMs weren't added here at the 'domain-init'/'domain-loaded' event.
And even after adding such handlers it wasn't working, because of
QubesOS/qubes-issues#1816.
It may be a little slower, but since it isn't used that often
(starting/stopping a VM and reloading the firewall), it shouldn't be a
problem.
This could easily end up in infinite recursion. For example,
'sys-net'.template points at 'fedora-23', which itself has
'fedora-23'.netvm set to 'sys-net'.
This way even separate processes (even those not started directly - like
qrexec service calls) will use the correct qubes.xml file.
Fixes QubesOS/qubes-issues#1730