# QubesOS Port Forwarding GSoC 2021

## Proposal text

### Introduction

Forwarding ports to a Qubes VM is currently possible only through a multi-step, error-prone, manual process that also requires writing custom configuration in order to be persistent between reboots. Things as simple as starting a webserver or netcat for LAN file sharing can turn into a troublesome and time-wasting process[1][2]. Furthermore, applications that rely on NAT traversal protocols, such as those for audio and video communications, do not work in direct P2P mode with STUN and always fall back to TURN instead[3].

### Project Goals

Implement a GUI for automatic and persistent port forwarding, optionally with a predefined timespan (i.e. until reboot). The idea is to split the "Firewall Rules" tab in the "Qubes Settings" window horizontally and add another area below it. It is already possible to forward TCP streams, but there is no GUI nor a clear dashboard, and its versatility is limited.

In addition, discuss and verify the possibility of implementing a secure NAT traversal system and GUI. A basic proposal could be a checkbox to enable NAT traversal requests. When the checkbox is selected, the FirewallVM will redirect NAT traversal requests to a local Python daemon or a dedicated VM that will negotiate the NAT traversal and configure the network accordingly. In this case, prompt the user in Dom0 about the NAT traversal request. Of course, the `qvm-*` set of tools must be able to achieve the same tasks via CLI.

### Implementation

First develop and document the part related to manual port forwarding, since it is both the more frequent use case and the less complicated one. Depending on the problems encountered, evaluate the feasibility of secure NAT traversal.

#### Notes

1. https://github.com/QubesOS/qubes-issues/issues/3556
2. https://www.reddit.com/r/Qubes/comments/8cb57i/how_to_achieve_qube_to_qube_communication_port/
3. https://github.com/QubesOS/qubes-issues/issues/6225
4. https://github.com/QubesOS/qubes-issues/issues/5031
5. https://gist.github.com/fepitre/941d7161ae1150d90e15f778027e3248

## Development

### Background

* https://www.qubes-os.org/doc/admin-api/
* https://www.qubes-os.org/doc/vm-interface/#firewall-rules-in-4x
* https://www.qubes-os.org/doc/firewall/
* https://www.qubes-os.org/doc/config-files/

### Dev Repositories

* https://github.com/lsd-cat/qubes-core-admin
* https://github.com/lsd-cat/qubes-core-admin-client
* https://github.com/lsd-cat/qubes-core-agent-linux

All changes are in the `gsoc-port-forwarding` branch of each repo.

### Main components involved

1. [Firewall GUI in "Settings" (qubes-manager)](https://github.com/QubesOS/qubes-manager/blob/master/qubesmanager/firewall.py)
2. [CLI interface available via `qvm-firewall` (core-admin-client)](https://github.com/QubesOS/qubes-core-admin-client/blob/master/qubesadmin/tools/qvm_firewall.py)
3. [Actual client logic for the Admin API (core-admin-client)](https://github.com/QubesOS/qubes-core-admin-client/blob/master/qubesadmin/firewall.py)
4. [Admin API interface - XML conf manager (core-admin)](https://github.com/QubesOS/qubes-core-admin/blob/master/qubes/firewall.py)
5. [Agent running in the firewall VM - executes `nft` or `iptables`](https://github.com/QubesOS/qubes-core-agent-linux/blob/master/qubesagent/firewall.py)

### Current Status

#### How does the GUI and `qvm-firewall` configuration work?

The Qubes Manager GUI and the `qvm-firewall` tool both use the code implemented in the Admin API client library.
The client library sends specific messages to the `qubesd` daemon. The currently supported operations are:

* `admin.vm.firewall.Get`
* `admin.vm.firewall.Set`
* `admin.vm.firewall.Reload`

These actions can be tested by using the `qvm-firewall` utility. It is important to note that both the client and the daemon are more flexible than the settings available via the GUI.

##### Configuration files

If any non-default configuration is set by the user, an AppVM will have a `firewall.xml` configuration file under `/var/lib/qubes/appvms/<vmname>/`. Deleting the file will reset the firewall to the default state, and any customization will be lost. The `firewall.xml` is clearly human-readable and contains rules in the form:

```
accept lsd.cat tcp 443
accept 10.132.11.1/24
accept dns
drop
```

##### Commands

The following command will return the firewall rules for `<vmname>`:

```
qvm-firewall <vmname>
```

As can be seen, the output shows more columns than the GUI; specifically, `EXPIRE`, `COMMENT`, and `SPECIAL TARGET` columns are displayed. The following command will reload the persistent rules stored in the `firewall.xml` of `<vmname>`:

```
qvm-firewall <vmname> --reload
```

The following command can be used to add a rule. Note that if the GUI detects that the firewall has been edited from the CLI, it will refuse to allow management from the GUI again, since the GUI does not support all CLI settings.

```
qvm-firewall <vmname> add action=accept dsthost=1.1.1.1 proto=tcp dstports=80-80 expire=+5000 comment="cloudflare http test rule"
```

### Proposal

Currently, all firewall rules have an `action` property which can be either `accept` or `drop`. The plan is to add a third option, `forward`, specifically for implementing automatic port forwarding. Such an option must be supported both in the configuration file and in the Admin API (client and server). Lastly, it must be implemented in the agent daemon.

The main issue, however, is that the firewall client library is currently designed to operate only on the firewall NetVM configured for the AppVM. In order to forward ports from the outside world, specific rules need to be applied to that firewall NetVM's own networking NetVM as well (i.e. to both `sys-firewall` and `sys-net`, as is currently done for manual port forwarding).

### action=forward

Since in the case of port forwarding the target IP address would always be the `<vmname>` IP address, users should not be asked for a `dsthost` field. Adding a forward rule could look like this:

```
qvm-firewall <vmname> add action=forward proto=tcp forwardtype=external srcports=443-443 dstports=80443-80443 srchost=0.0.0.0/0 expire=+500000 comment="example https server rule"
qvm-firewall <vmname> add action=forward proto=tcp forwardtype=internal srcports=80-80 dstports=8000-8000 srchost=10.137.0.13 expire=+500000 comment="example internal simplehttpserver file sharing rule"
```

Of course, `expire=` and `comment=` are optional fields. In the `firewall.xml` form shown above, the first rule would read:

```
forward tcp external 443-443 80443-80443 0.0.0.0/0 example https server rule
```
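Assuming `forward` behaves like the existing actions in `qvm-firewall`, rules added this way should remain manageable with the existing `list` and `del` subcommands. A minimal sketch (the VM name placeholder and rule index are illustrative):

```
# List the rules of the qube, including the EXPIRE and COMMENT columns;
# a forward rule added as above should show up here once implemented.
qvm-firewall <vmname> list

# Remove a rule by its index as reported by list.
qvm-firewall <vmname> del --rule-no 0
```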
### Proposal chart

###### The main distinction between internal and external port forwarding is:

* _Internal_ resolves only `<vmname>`'s `<netvm>`
* _External_ recursively resolves all upstream networking VMs and sets forwarding rules on all of them

###### This should cover multiple scenarios:

* _Standard external forwarding_, when a `<vmname>` service needs to be exposed on a physical interface
* _Standard internal forwarding_, when a `<vmname>` service needs to be exposed to other VMs connected to the same `<netvm>`
* _VPN internal port forwarding_, when a `<vmname>` service needs to be exposed through a VPN

It is important to note that the last case is just a standard case of internal forwarding.

![Implementation](https://git.lsd.cat/Qubes/gsoc/raw/master/assets/implementation.png)

### Implementation Roadmap

1. ✔️ In `core-admin-client/qubesadmin/firewall.py` -> The code needs to support the new rule options (`action=forward`, `forwardtype=`, `srcports=443-443`, `srchost=0.0.0.0/0`)
2. ✔️ In `core-admin/qubes/firewall.py` -> The code needs to support the same options as the point above
3. ✔️ In `core-admin/qubes/vm/mix/net.py` -> The most important logic goes here. This is where the full network chain has to be resolved for external port forwarding. From here it is possible to add the respective rules to the QubesDB of each NetVM in the chain and trigger a reload event.
4. ✔️ In `core-agent-linux/qubesagent/firewall.py` -> Here goes the logic for building the correct syntax for iptables or nft and the actual execution
5. ❌ Tests
6. ❌ GUI

Both tests and GUI have yet to be worked on. Automated tests will be written in the following weeks.

### Required rules

#### External

The iptables backend in the firewall worker is being deprecated. If the `nft` binary is available on the target qube, iptables will never be involved. Thus, only `nft` rules are relevant in this context. Sample setup:

```
sys-net - 10.137.0.5 (ens6 phy with 192.168.10.20)
sys-firewall - 10.137.0.6
personal - 10.137.0.7
```

All of them are running fedora-32. Assume the following rule is added via `qvm-firewall`:

```
# qvm-firewall personal add action=forward forwardtype=external srcports=22-22 proto=tcp dstports=2222-2222 srchost=192.168.10.0/24
```

First, a table for the forwarding rules is created:

```
flush chain {family} qubes-firewall-forward prerouting
flush chain {family} qubes-firewall-forward postrouting
table {family} qubes-firewall-forward {
  chain postrouting {
    type nat hook postrouting priority srcnat; policy accept;
    masquerade
  }
  chain prerouting {
    type nat hook prerouting priority dstnat; policy accept;
  }
}
```

Then, if the qube is marked as 'last', meaning that it is the external qube with the physical interface, the following rules are added:

```
table {family} qubes-firewall-forward {
  chain prerouting {
    meta iifname "ens6" {family} saddr 192.168.10.0/24 tcp dport {{ 22 }} dnat to 10.137.0.6:2222
  }
}

table {family} qubes-firewall {
  chain forward {
    meta iifname "eth0" {family} daddr 10.137.0.6 tcp dport 2222 ct state new counter accept
  }
}
```

And that is all for sys-net.
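To verify what the agent actually programmed into sys-net, the generated ruleset can be dumped from inside that qube. A minimal sketch, assuming the `nft` backend and the `ip` family (use `ip6` for IPv6):

```
# Inside sys-net: dump the dedicated forwarding table created above...
sudo nft list table ip qubes-firewall-forward

# ...and the forward chain of the regular qubes-firewall table, which should
# contain the matching accept rule for the forwarded traffic.
sudo nft list chain ip qubes-firewall forward
```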
In sys-firewall, since it is an 'internal' qube, the following rules are added instead:

```
table {family} qubes-firewall-forward {
  chain prerouting {
    meta iifname "eth0" {family} saddr 10.137.0.5 tcp dport {{ 2222 }} dnat to 10.137.0.7:2222
  }
}

table {family} qubes-firewall {
  chain forward {
    meta iifname "eth0" {family} daddr 10.137.0.7 tcp dport 2222 ct state new counter accept
  }
}
```

Lastly, some rules need to be added in the target qube in order to accept the incoming connections. Since the target qube does not have a running firewall worker, the method for doing this has yet to be determined.

## Extra

### QubesDB Debugging

Since all firewall rules are written to the respective domains' QubesDB by `qubesd`, it is essential for debugging purposes to be able to easily read QubesDB entries. The QubesOS Project provides some useful utilities to interact with each DB. These utilities have self-explanatory names and work like the respective functions used in the source code. The most useful are:

* `qubesdb-list`
* `qubesdb-read`
* `qubesdb-write`

Useful examples:

```
# qubesdb-list -fr /qubes-firewall/ -d sys-firewall
# qubesdb-read -fr /qubes-firewall/10.137.0.2/0001 -d sys-firewall
```

### Flags

Flags explanation as produced by the `qvm-ls` utility:

```
Type of domain (when it is an HVM, the letter is capital).
  0    AdminVM (AKA Dom0)
  aA   AppVM
  dD   DisposableVM
  sS   StandaloneVM
  tT   TemplateVM

Current power state.
  r    running
  t    transient
  p    paused
  s    suspended
  h    halting
  d    dying
  c    crashed
  ?    unknown

Extra
  U    updateable
  N    provides_network
  R    installed_by_rpm
  i    internal
  D    debug
  A    autostart
```

### Dev Environment

I am currently developing on VMware Workstation on Windows, due to issues virtualizing on Linux on my home hardware. QubesOS is virtualized behind NAT and can reach the Windows host via SSH. In order to deploy and test the code, I wrote some [helper scripts](https://git.lsd.cat/Qubes/tools). The required setup involves:

* Clone the tools on the Windows host
* Generate an SSH keypair on `sys-net`
* Add the `sys-net` SSH pubkey on Windows for non-interactive authentication (running `sshd` is easier using Windows Subsystem for Linux)
* Via scp/sftp, copy all the bash scripts into the `sys-net` VM. Leave `pull.sh` at `/home/user/pull.sh`
* Using `qvm-run`, copy `backup.sh`, `restore.sh` and `update.sh` into `Dom0`
* First, run `backup.sh` once and take care to never run it again, so that it can be used to recover from broken states (if `qubesd` breaks, `qvm-run` will stop working and it will be hard to recover)
* Run `update.sh` to automatically pull changes from the Windows host. `qubesd` is restarted within the same script.
* In case of issues, run `restore.sh` and investigate the previous errors (see the sketch below)
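As a minimal sketch of that last step, the state of `qubesd` can be checked from a dom0 terminal before (or after) running `restore.sh`:

```
# Check whether qubesd is still running and look at its most recent log lines
# to find the traceback that broke it.
sudo systemctl status qubesd
sudo journalctl -u qubesd -n 50

# After restoring or fixing the code, restart the daemon.
sudo systemctl restart qubesd
```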