Currently there is just one anti-spoofing firewall rule ensuring packets
coming through vif+ interfaces have the right source address. Add
another rule ensuring that addresses that belong to VMs behind those
vif+ interfaces do not appear on other interfaces (specifically eth0,
but also physical ones).
Normally this wouldn't be an issue because of rp_filter (which does the
same check based on the routing table), the default DROP policy in the
FORWARD chain, and conntrack (the need to guess exact port numbers and
sequence numbers). But it appears all three mechanisms are ineffective
in some cases:
- rp_filter in many distributions (including Fedora and Debian) was
  switched to Loose Mode, which doesn't verify the exact interface
- there is a rule in the FORWARD chain allowing established connections,
  and conntrack does not keep track of input/output interfaces
- CVE-2019-14899 allows guessing all the data needed to inject packets
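A minimal sketch of the kind of rule added, assuming a single VM address
10.137.0.100 (a placeholder; the real rule set covers all VM addresses):

# example address of a VM behind a vif+ interface (assumed)
vm_ip=10.137.0.100
# drop packets carrying that address when they show up on anything but vif+
iptables -I FORWARD ! -i vif+ -s "$vm_ip" -j DROP
iptables -I FORWARD ! -i vif+ -d "$vm_ip" -j DROP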
Reported-by: Demi M. Obenour <demiobenour@gmail.com>
Previously enabling the interface was the first action in the setup
steps. Linux theoretically does not forward the traffic until a proper
IP address and route are added to the interface (depending on the
rp_filter setting). But instead of relying on this opaque behavior, it
is better to set up the anti-spoofing rules earlier. Also, add
'set -o pipefail' for more reliable error handling.
Note the rules for actual VM traffic (qvm-firewall) are properly
enforced - until those rules are loaded, traffic from the appropriate
vif interface is blocked. But since this relies on the correct source IP
address, the anti-spoofing rules need to be set up race-free.
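A rough sketch of the intended ordering (variable names below are
placeholders, not the actual setup script):

#!/bin/bash
set -e -o pipefail        # abort on any failure, including inside pipelines

vif="$1"                  # backend interface name (placeholder)
ip="$2"                   # address assigned to the VM (placeholder)

# 1. anti-spoofing first, while the interface cannot pass traffic yet
iptables -A FORWARD -i "$vif" ! -s "$ip" -j DROP

# 2. only then bring the interface up and add the route
ip link set dev "$vif" up
ip route add "$ip" dev "$vif"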
Reported-by: Demi M. Obenour <demiobenour@gmail.com>
* origin/pr/239:
xendriverdomain: remove placeholder for sbinpath
Fix regex in qubes-fix-nm-conf.sh
Update travis
xendriverdomain: remove Requires and After proc-xen.mount
Drop legacy xen entry in fstab
qubes-firewall will now blacklist IP addresses of all connected
machines on non-vif* interfaces. This prevents spoofing the source or
target address of packets going over an upstream link, even if the VM
in question is powered off at the moment.
Depends on QubesOS/qubes-core-admin#303, which makes qubes-core-admin
maintain the list of IPs in QubesDB.
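Roughly (the QubesDB key below is a placeholder for the list maintained
by dom0):

# read the list of connected VMs' IPs and blacklist them on non-vif links
for addr in $(qubesdb-read /connected-ips 2>/dev/null); do
    iptables -I FORWARD ! -i vif+ -s "$addr" -j DROP
    iptables -I FORWARD ! -i vif+ -d "$addr" -j DROP
done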
Fixes QubesOS/qubes-issues#5540
Detect if IPv6 is disabled in the kernel (like it is in Whonix Gateway)
and skip setting up IPv6 in that case. Otherwise the 'ip' call would
fail and, since the script runs with 'set -e', it would interrupt
setting up IPv4 too. Log an error message in that case anyway.
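A sketch of the check (the address variable and interface name are
placeholders):

# /proc/sys/net/ipv6 exists only when the kernel has IPv6 enabled
if [ -d /proc/sys/net/ipv6 ]; then
    ip -6 addr add "$ip6" dev eth0
else
    echo "IPv6 disabled in kernel, skipping IPv6 setup" >&2
fi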
Fixes QubesOS/qubes-issues#5110
Example:
/rw/config/network-hooks.d/test.sh:

#!/bin/bash
command="$1"
vif="$2"
ip="$3"

if [ "$ip" == '10.137.0.100' ]; then
    case "$command" in
        online)
            ip route add 192.168.0.100 via 10.137.0.100
            ;;
        offline)
            ip route del 192.168.0.100
            ;;
    esac
fi
In rare cases when vif-route-qubes is called simultaneously with some
other iptables-restore instance, it fails because of the missing --wait
option (recent iptables-restore defaults to aborting instead of waiting
for the lock). That other call may come from qubes-firewall or a user
script.
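The change boils down to something like this (the rules file is a
placeholder, the flag is what matters):

# before: aborts if another process currently holds the xtables lock
iptables-restore --noflush <"$rules_file"
# after: waits for the lock instead of failing
iptables-restore --wait --noflush <"$rules_file"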
Related to QubesOS/qubes-issues#3665
If the IPv6 gateway address provided by dom0 isn't a link-local address,
add a /128 route to it. Also, add this address on backend interfaces
(vif*).
This is to allow proper forwarding of ICMP host unreachable packets - if
the gateway (the address on the vif* interface) has only an fe80:
address, it will be used as the source of the ICMP reply. Such a reply
will be properly delivered to the directly connected VM (for example
from sys-net to sys-firewall), but being a link-local address, it will
not be forwarded any further. This results in timeouts if the host
doesn't have IPv6 connectivity.
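A sketch of the resulting setup, with $gw6 standing for the gateway
address from dom0 and eth0/"$vif" for the frontend/backend interfaces
(all placeholders):

# frontend (inside the VM), when $gw6 is not link-local:
ip -6 route add "$gw6/128" dev eth0
ip -6 route add default via "$gw6" dev eth0

# backend (the netvm), so ICMP replies get a routable source address:
ip -6 addr add "$gw6/128" dev "$vif"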
NetworkManager reports a bunch of events; reloading DNS at each of them
doesn't make sense and is harmful - systemd has a rate limit on service
restarts.
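For illustration only, a dispatcher hook that reacts solely to the
events that can actually change DNS; the reload command below is a
placeholder, not the actual implementation:

#!/bin/sh
# $1 = interface, $2 = event reported by NetworkManager
case "$2" in
    up|down|dhcp4-change|dhcp6-change)
        /usr/lib/qubes/qubes-setup-dnat-to-ns   # placeholder DNS reload
        ;;
    *)
        exit 0   # ignore the remaining event types
        ;;
esac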
Fixes QubesOS/qubes-issues#3135
If IPv6 is configured in the VM and it is providing network to others,
apply an IPv6 firewall similar to the IPv4 one (including NAT for
outgoing traffic), instead of blocking everything. Also, enable IP
forwarding for IPv6 in such a case.
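Roughly the IPv6 counterpart of the existing IPv4 rules (eth0 as the
upstream interface is an assumption):

# masquerade outgoing IPv6 traffic, mirroring the IPv4 NAT rule
ip6tables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
# enable IPv6 forwarding
sysctl -w net.ipv6.conf.all.forwarding=1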
Fixes QubesOS/qubes-issues#718
If dom0 exposes IPv6 address settings, configure them on the interface,
on both the backend and frontend side. If no IPv6 configuration is
provided, block IPv6 as before.
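A sketch of the frontend side (the QubesDB key and interface name are
assumptions):

ip6="$(qubesdb-read /qubes-ip6 2>/dev/null)" || ip6=""
if [ -n "$ip6" ]; then
    ip -6 addr add "$ip6" dev eth0
else
    # no IPv6 configuration from dom0 - keep blocking IPv6 as before
    ip6tables -P OUTPUT DROP
fi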
Fixes QubesOS/qubes-issues#718
When the qubes-firewall service is started, modify the firewall to have
a "DROP" policy, so if something goes wrong, no data gets leaked.
But keep the default action "ACCEPT" in case of a legitimate service
stop, or when the service is not started at all - because one may choose
not to use this service at all.
Achieve this by adding a "DROP" rule at the end of the QBS-FIREWALL
chain and keeping it there while the qubes-firewall service is running.
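In iptables terms, roughly:

# on qubes-firewall startup: fail closed until the real rules are loaded
iptables -A QBS-FIREWALL -j DROP
# on a clean service stop: remove it again, so the default "ACCEPT" applies
iptables -D QBS-FIREWALL -j DROP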
Fixes QubesOS/qubes-issues#3269
Newer udev has a `DRIVERS` matcher instead of `ENV{ID_NET_DRIVER}`. Add
an appropriate rule to the file. Without it, the network was working
only incidentally, because there is a fallback in
qubes-misc-post.service, but dynamic network changes were broken.
This applies at least to Debian stretch.
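The added rule is along these lines (udev rule syntax; the matched
driver name and the RUN target are placeholders, not the actual rule):

ACTION=="add", SUBSYSTEM=="net", DRIVERS=="vif", RUN+="/usr/lib/qubes/setup-ip"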
Fixes QubesOS/qubes-issues#3192
Explicitly block something like "curl http://127.0.0.1:8082" and return
an error page in this case. This error page is used in Whonix to detect
whether the proxy is torified. If not blocked, it may happen that an
empty response is returned instead of an error. See the linked ticket
for details.
This was previously done for 10.137.255.254, but since the migration to
a qrexec-based connection, 127.0.0.1 is used instead.
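One way to produce such an explicit error (the listener and the page
content below are only illustrative, not the actual implementation):

# serve a fixed error response on the proxy port instead of leaving it closed
ncat --listen --keep-open 127.0.0.1 8082 --sh-exec \
    'printf "HTTP/1.0 500 UpdatesProxy error\r\nContent-Type: text/html\r\n\r\n<h1>Updates proxy not available here</h1>\n"'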
Fixes QubesOS/qubes-issues#1482
Configure the package manager to use 127.0.0.1:8082 as a proxy instead
of a "magic" IP intercepted later. Then listen on this port and,
whenever a new connection arrives, spawn a qubes.UpdatesProxy service
call (to the default target domain - subject to configuration in dom0)
and connect its stdin/stdout to the local TCP connection. This part uses
a systemd.socket unit in case of systemd, and ncat --exec otherwise.
On the other end - in the target domain - simply pass stdin/stdout to
the updates proxy (tinyproxy) running locally.
It's important to _not_ configure the same VM to both be the updates
proxy and use it. In practice such a configuration makes little sense -
if a VM can access the network (which is required to run the updates
proxy), the package manager can use it directly, even if this network
access is through some VPN/Tor. If a single VM were configured as both
proxy provider and proxy user, the connection would loop back to itself.
Because of this, the proxy connection redirection (to the qrexec
service) is disabled when the same VM also runs the updates proxy.
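For the non-systemd case, the forwarding boils down to something like
this (the @default target keyword depends on the qrexec version and dom0
policy, so treat it as an assumption):

# every connection to 127.0.0.1:8082 becomes a qubes.UpdatesProxy call
ncat --listen --keep-open 127.0.0.1 8082 \
    --exec "/usr/bin/qrexec-client-vm @default qubes.UpdatesProxy"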
Fixes QubesOS/qubes-issues#1854