Previously, network uplink (eth0) was configured in two places:
- udev (asynchronously)
- qubes-misc-post.service - at the very end of the boot process
This caused multiple issues:
1. Depending on udev event processing (non-deterministic), the network
uplink could be enabled too early, for example before the firewall is
set up.
2. Again depending on udev processing, it could be enabled quite late in
the boot process, after network.target is up and services assume the
network is already configured. This, for example, causes qubes-firewall
to fail DNS queries.
3. If udev happened to enable networking even earlier, it could happen
before qubesdb-daemon is started, in which case network setup would
fail. To handle this case, the network was set up again in the
qubes-misc-post service - much later in the boot.
Fix the above by placing network uplink setup in a dedicated
qubes-network-uplink@${INTERFACE}.service unit ordered after
network-pre.target and pulled in by udev based on vif device existence,
to also handle dynamic network attach/detach.
Then, create a qubes-network-uplink.service unit waiting for the
appropriate interface-specific unit (if one is expected!) and order it
before network.target.
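For illustration only, the interface-specific part could look roughly
like this - the unit content, script path and udev match below are
assumptions, not the actual shipped files:

    # qubes-network-uplink@.service (hypothetical sketch)
    [Unit]
    Description=Qubes network uplink (%i) setup
    After=network-pre.target qubesdb-daemon.service

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    ExecStart=/usr/lib/qubes/setup-ip add %i
    ExecStop=/usr/lib/qubes/setup-ip remove %i

    # hypothetical udev rule pulling the unit in when the uplink appears
    SUBSYSTEM=="net", ACTION=="add", DRIVERS=="vif", TAG+="systemd", ENV{SYSTEMD_WANTS}+="qubes-network-uplink@%k.service"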
QubesOS/qubes-issues#5576
Since AppVMs will have their own NetVM-facing neighbor entries, a user
might (correctly) conclude that NetVMs do not need ARP or NDP enabled.
For this to work with NAT namespaces, those namespaces need their own
neighbor entries.
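For illustration, such an entry could be created like this - the
namespace name, addresses, link-layer address and device are made up for
the example and are not taken from the actual script:

    ip netns exec vm123-nat \
        ip neigh replace 10.137.0.100 lladdr fe:ff:ff:ff:ff:ff dev eth1 nud permanent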
Currently there is just one anti-spoofing firewall rule ensuring that
packets coming through vif+ interfaces have the right source address.
Add another rule ensuring that addresses belonging to VMs behind those
vif+ interfaces do not appear on other interfaces (specifically eth0,
but also physical ones).
Normally this wouldn't be an issue because of rp_filter (doing the same
check based on the routing table), the default DROP policy in the
FORWARD chain, and conntrack (the need to guess exact port numbers and
sequence numbers). But it appears all three mechanisms are ineffective
in some cases:
- rp_filter in many distributions (including Fedora and Debian) was
switched to Loose Mode, which doesn't verify the exact interface
- there is a rule in the FORWARD chain allowing established connections,
and conntrack does not keep track of input/output interfaces
- CVE-2019-14899 allows guessing all the data needed to inject packets
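An illustrative sketch of the added check - the table, chain and
variable values are placeholders for the example, not the exact rules
added:

    vif=vif5.0           # example backend interface name
    ip=10.137.0.100      # example VM address
    # existing check: only the VM's own address may come in via its vif
    iptables -t raw -A PREROUTING -i "$vif" ! -s "$ip" -j DROP
    # added check: that address must not show up on any other interface
    iptables -t raw -A PREROUTING ! -i "$vif" -s "$ip" -j DROP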
Reported-by: Demi M. Obenour <demiobenour@gmail.com>
Previously, enabling the interface was the first action in the setup
steps. Linux theoretically does not forward the traffic until a proper
IP address and route are added to the interface (depending on the
rp_filter setting). But instead of relying on this opaque behavior,
better to set up anti-spoofing rules earlier. Also, add
'set -o pipefail' for more reliable error handling.
Note the rules for actual VM traffic (qvm-firewall) are properly
enforced - until those rules are loaded, traffic from the respective vif
interface is blocked. But since this relies on a proper source IP
address, the anti-spoofing rules need to be set up race-free.
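A simplified outline of the intended ordering - not the actual script;
variable names, rules and addressing are placeholders:

    #!/bin/bash
    set -e -o pipefail

    vif="$1"    # backend interface, e.g. vif5.0
    ip="$2"     # the VM's IP address

    # anti-spoofing rules first, while no traffic can flow yet
    iptables -t raw -A PREROUTING -i "$vif" ! -s "$ip" -j DROP
    iptables -t raw -A PREROUTING ! -i "$vif" -s "$ip" -j DROP

    # only afterwards enable the interface and configure routing
    ip link set dev "$vif" up
    ip route add "$ip/32" dev "$vif"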
Reported-by: Demi M. Obenour <demiobenour@gmail.com>
* origin/pr/239:
xendriverdomain: remove placeholder for sbinpath
Fix regex in qubes-fix-nm-conf.sh
Update travis
xendriverdomain: remove Requires and After proc-xen.mount
Drop legacy xen entry in fstab
qubes-firewall will now blacklist IP addresses of all connected
machines on non-vif* interfaces. This prevents spoofing the source or
target address on packets going over an upstream link, even if the VM
in question is powered off at the moment.
Depends on QubesOS/qubes-core-admin#303, which makes the admin component
maintain the list of IPs in qubesdb.
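Roughly, the effect is equivalent to rules like the following - the
qubesdb key and the chain used here are assumptions for illustration,
not the actual qubes-firewall implementation:

    for addr in $(qubesdb-read /connected-ips 2>/dev/null); do
        # drop the VM's address when seen on anything other than vif*
        iptables -A FORWARD ! -i vif+ -s "$addr" -j DROP
        iptables -A FORWARD ! -i vif+ -d "$addr" -j DROP
    done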
Fixes QubesOS/qubes-issues#5540.
Detect if IPv6 is disabled in the kernel (as it is in Whonix Gateway)
and skip setting up IPv6 in that case. Otherwise the 'ip' call would
fail and, since the script runs with 'set -e', it would interrupt
setting up IPv4 too. Log an error message in that case anyway.
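One possible shape of such a guard - a sketch only, the exact check and
parameters used by the script may differ:

    #!/bin/bash
    netdev="${1:-eth0}"   # example parameters, for illustration
    ip6="$2"

    if [ -e /proc/sys/net/ipv6/conf/all/disable_ipv6 ] && \
       [ "$(cat /proc/sys/net/ipv6/conf/all/disable_ipv6)" -eq 0 ]; then
        ip -6 addr add "$ip6/128" dev "$netdev"
    else
        echo "IPv6 disabled in the kernel, skipping IPv6 setup" >&2
    fi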
Fixes QubesOS/qubes-issues#5110
Example:
/rw/config/network-hooks.d/test.sh:

    #!/bin/bash
    # called with: command (online/offline), vif name, VM IP
    command="$1"
    vif="$2"
    ip="$3"

    if [ "$ip" == '10.137.0.100' ]; then
        case "$command" in
            online)
                ip route add 192.168.0.100 via 10.137.0.100
                ;;
            offline)
                ip route del 192.168.0.100
                ;;
        esac
    fi
In rare cases, when vif-route-qubes is called simultaneously with some
other iptables-restore instance, it fails because of the missing --wait
option (recent iptables-restore defaults to aborting instead of waiting
for the lock). That other call may come from qubes-firewall or a user
script.
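A minimal illustration of the change (the rule content is a
placeholder):

    #!/bin/bash
    # --wait makes iptables-restore block on the xtables lock instead of
    # aborting when another instance already holds it
    {
        echo '*filter'
        echo '-A FORWARD -i vif5.0 -j ACCEPT'
        echo 'COMMIT'
    } | iptables-restore --wait --noflush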
Related to QubesOS/qubes-issues#3665
If the IPv6 gateway address provided by dom0 isn't a link-local address,
add a /128 route to it. Also, add this address on backend interfaces
(vif*).
This is to allow proper forwarding of ICMP host unreachable packets - if
the gateway (the address on the vif* interface) has only an fe80:
address, that address will be used as the source of the ICMP reply. The
reply will be properly delivered to the directly connected VM (for
example from sys-net to sys-firewall), but being link-local, it will not
be forwarded any further.
This results in timeouts if the host doesn't have IPv6 connectivity.
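A sketch of the VM-side half of this change - the gateway address is a
placeholder, not what dom0 actually provides:

    #!/bin/bash
    gateway6="fd09:24ef:4179::a89:1"   # example non-link-local gateway
    case "$gateway6" in
        fe80:*)
            ;;  # link-local - reachable on-link, nothing extra to do
        *)
            ip -6 route add "$gateway6/128" dev eth0
            ;;
    esac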
NetworkManager reports a bunch of events; reloading DNS at each of them
doesn't make sense and is harmful - systemd has a rate limit on service
restarts.
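A sketch of the idea as a dispatcher script - the action list and the
reload command below are assumptions for illustration, not the exact
shipped hook:

    #!/bin/bash
    # NetworkManager dispatcher interface: $1 = interface, $2 = action
    interface="$1"
    action="$2"

    case "$action" in
        up|down|dhcp4-change|dhcp6-change)
            # only these events can change the DNS configuration
            exec /usr/lib/qubes/qubes-setup-dnat-to-ns
            ;;
        *)
            # ignore the rest (connectivity-change, hostname, ...)
            ;;
    esac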
Fixes QubesOS/qubes-issues#3135