<domain type="xen">
    {% block basic %}
        <name>{{ vm.name }}</name>
        <uuid>{{ vm.uuid }}</uuid>
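        {#
            memory != maxmem only makes sense when qmemman balances this
            domain's memory. qmemman is not used for HVM domains with PCI
            devices (Xen does not support Populate-on-Demand there) and is
            disabled entirely with maxmem == 0, so in both cases maxmem is
            pinned to the startup memory below.
        #}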
        {% if ((vm.virt_mode == 'hvm' and vm.devices['pci'].persistent() | list)
                or vm.maxmem == 0) -%}
            <memory unit="MiB">{{ vm.memory }}</memory>
        {% else -%}
            <memory unit="MiB">{{ vm.maxmem }}</memory>
        {% endif -%}
        <currentMemory unit="MiB">{{ vm.memory }}</currentMemory>
        <vcpu placement="static">{{ vm.vcpus }}</vcpu>
    {% endblock %}
    {% block cpu %}
        {% if vm.virt_mode != 'pv' %}
        <cpu mode='host-passthrough'>
            <!-- disable nested HVM -->
            <feature name='vmx' policy='disable'/>
            <feature name='svm' policy='disable'/>
            <!-- disable SMAP inside the VM, because of a Linux bug -->
            <feature name='smap' policy='disable'/>
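            {# (6, 58) and (6, 62) are the family/model pairs of Intel Ivy
               Bridge CPUs; RDRAND is hidden from guests on those hosts,
               presumably because of the erratum where it returns stale data
               after suspend/resume. #}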
            {% if vm.app.host.cpu_family_model in [(6, 58), (6, 62)] -%}
            <feature name='rdrand' policy='disable'/>
            {% endif -%}
        </cpu>
        {% endif %}
    {% endblock %}
    <os>
        {% block os %}
            {% if vm.virt_mode == 'hvm' %}
                <type arch="x86_64" machine="xenfv">hvm</type>
                <!--
                    For the libxl backend, libvirt switches between OVMF (UEFI)
                    and SeaBIOS based on the loader type. This has nothing to
                    do with the hvmloader binary.
                -->
                <loader type="{{ "pflash" if vm.features.check_with_template('uefi', False) else "rom" }}">hvmloader</loader>
                <boot dev="cdrom" />
                <boot dev="hd" />
            {% else %}
                {% if vm.virt_mode == 'pvh' %}
                <type arch="x86_64" machine="xenpvh">xenpvh</type>
                {% else %}
                <type arch="x86_64" machine="xenpv">linux</type>
                {% endif %}
                <kernel>{{ vm.storage.kernels_dir }}/vmlinuz</kernel>
                <initrd>{{ vm.storage.kernels_dir }}/initramfs</initrd>
            {% endif %}
            {% if vm.kernel %}
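            {# With the no-default-kernelopts feature set, only the per-VM
               kernelopts are passed; otherwise the common Qubes options
               (kernelopts_common) are prepended. #}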
            {% if vm.features.check_with_template('no-default-kernelopts', False) -%}
                <cmdline>{{ vm.kernelopts }}</cmdline>
            {% else -%}
                <cmdline>{{ vm.kernelopts_common }}{{ vm.kernelopts }}</cmdline>
            {% endif -%}
            {% endif %}
        {% endblock %}
    </os>

    <features>
        {% block features %}
            {% if vm.virt_mode != 'pv' %}
                <pae/>
                <acpi/>
                <apic/>
                <viridian/>
            {% endif %}

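            {# Expose the host e820 memory map (including its PCI holes) to
               the guest, so the guest kernel can leave address space for
               passed-through PCI devices; the pci-e820-host feature allows
               opting out. #}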
            {% if vm.devices['pci'].persistent() | list
                    and vm.features.get('pci-e820-host', True) %}
                <xen>
                    <e820_host state="on"/>
                </xen>
            {% endif %}
        {% endblock %}
    </features>

    {% block clock %}
        {% if vm.virt_mode == 'hvm' %}
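            {# The timezone feature is either 'localtime' or a numeric
               offset, used as the clock adjustment (libvirt's variable
               offset is expressed in seconds). #}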
            {% set timezone = vm.features.check_with_template('timezone', 'localtime').lower() %}
            {% if timezone == 'localtime' %}
                <clock offset="variable" adjustment="0" basis="localtime" />
            {% elif timezone.isdigit() %}
                <clock offset="variable" adjustment="{{ timezone }}" basis="utc" />
            {% else %}
                <clock offset="variable" adjustment="0" basis="utc" />
            {% endif %}
        {% else %}
            <clock offset='utc' adjustment='reset'>
                <timer name="tsc" mode="native"/>
            </clock>
        {% endif %}
    {% endblock %}

    {% block on %}
        <on_poweroff>destroy</on_poweroff>
        <on_reboot>destroy</on_reboot>
        <on_crash>destroy</on_crash>
    {% endblock %}

    <devices>
        {% block devices %}
            {#
                HACK: The letter counter is implemented this way because
                Jinja no longer allows incrementing variables inside a loop.
                As of Jinja 2.10 this can be replaced with:
                    {% set counter = namespace(i=0) %}
                    {% set counter.i = counter.i + 1 %}
            #}
            {% set counter = {'i': 0} %}
            {# TODO Allow more volumes out of the box #}
            {% set dd = ['e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p',
                         'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y']
            %}
            {% for device in vm.block_devices %}
            <disk type="block" device="{{ device.devtype }}">
                <driver name="phy" />
                <source dev="{{ device.path }}" />
                {% if device.name == 'root' %}
                    <target dev="xvda" />
                {% elif device.name == 'private' %}
                    <target dev="xvdb" />
                {% elif device.name == 'volatile' %}
                    <target dev="xvdc" />
                {% elif device.name == 'kernel' %}
                    <target dev="xvdd" />
                {% else %}
                    <target dev="xvd{{dd[counter.i]}}" />
                    {% if counter.update({'i': counter.i + 1}) %}{% endif %}
                {% endif %}

                {% if not device.rw %}
                    <readonly />
                {% endif %}

                {% if device.domain %}
                    <backenddomain name="{{ device.domain }}" />
                {% endif %}

                {% if device.script %}
                    <script path="{{ device.script }}" />
                {% endif %}
            </disk>
            {% endfor %}

            {# start external devices from xvdi #}
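            {# dd[4] is 'i', so the first attached device becomes xvdi #}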
            {% set counter = {'i': 4} %}
            {% for assignment in vm.devices.block.assignments(True) %}
                {% set device = assignment.device %}
                {% set options = assignment.options %}
                {% include 'libvirt/devices/block.xml' %}
            {% endfor %}

            {% if vm.netvm %}
                {% include 'libvirt/devices/net.xml' with context %}
            {% endif %}

            {% for assignment in vm.devices.pci.assignments(True) %}
                {% set device = assignment.device %}
                {% set options = assignment.options %}
                {% include 'libvirt/devices/pci.xml' %}
            {% endfor %}

            {% if vm.virt_mode == 'hvm' %}
                <!-- server_ip is the address of the stubdomain. It hosts its own DNS server. -->
                <emulator
                    {% if vm.features.check_with_template('linux-stubdom', True) %}
                    type="stubdom-linux"
                    {% else %}
                    type="stubdom"
                    {% endif %}
                    {% if vm.netvm %}
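                        {# The Linux-based stubdomain takes its network
                           configuration via the -qubes-net:... option; the
                           older MiniOS-based stubdomain expects the
                           -net lwip form instead. #}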
                        {% if vm.features.check_with_template('linux-stubdom', True) %}
                        cmdline="-qubes-net:client_ip={{ vm.ip -}}
                            ,dns_0={{ vm.dns[0] -}}
                            ,dns_1={{ vm.dns[1] -}}
                            ,gw={{ vm.netvm.gateway -}}
                            ,netmask={{ vm.netmask }}"
                        {% else %}
                        cmdline="-net lwip,client_ip={{ vm.ip -}}
                            ,server_ip={{ vm.dns[1] -}}
                            ,dns={{ vm.dns[0] -}}
                            ,gw={{ vm.netvm.gateway -}}
                            ,netmask={{ vm.netmask }}"
                        {% endif %}
                    {% endif %}
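                    {# stubdom_mem is given in MiB; the * 1024 converts it for
                       the emulator memory attribute (assumed to be in KiB, as
                       is usual for libvirt memory values). #}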
                    {% if vm.stubdom_mem %}
                    memory="{{ vm.stubdom_mem * 1024 -}}"
                    {% endif %}
                    />
                <input type="tablet" bus="usb"/>
                <video>
                    <model type="{{ vm.features.check_with_template('video-model', 'vga') }}"/>
                </video>
                {% if vm.features.check_with_template('linux-stubdom', True) %}
                    {# TODO only add qubes gui if gui-agent is not installed in HVM #}
                    <graphics type="qubes"/>
                {% endif %}
            {% endif %}
            <console type="pty">
                <target type="xen" port="0"/>
            </console>
        {% endblock %}
    </devices>
</domain>

<!-- vim: set ft=jinja ts=4 sts=4 sw=4 et tw=80 : -->