NAME

/etc/xen/xl.conf - XL Global/Host Configuration
DESCRIPTION

The xl.conf file allows configuration of hostwide xl toolstack options.
For details of per-domain configuration options please see xl.cfg(5).
SYNTAX

The config file consists of a series of KEY=VALUE pairs. Blank lines and anything after a "#" are treated as comments.

A VALUE is one of:
"STRING"
    A string, surrounded by either single or double quotes.

NUMBER
    A number, in either decimal, octal (using a 0 prefix) or hexadecimal (using a 0x prefix).

BOOLEAN
    A NUMBER interpreted as False (0) or True (any other value).

[ VALUE, VALUE, ... ]
    A list of VALUEs of the above types. Lists are homogeneous and are not nested.
The semantics of each KEY defines which form of VALUE is required.
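As an illustrative sketch, the keys below are real xl.conf options but the values are examples only, not recommended settings (list-valued options follow the [ VALUE, VALUE, ... ] form in the same way):

    # STRING values are quoted
    lockfile="/var/lock/xl"

    # NUMBER values may be decimal, octal (0 prefix) or hexadecimal (0x prefix)
    max_grant_frames=0x40

    # BOOLEAN values are numbers: 0 is False, anything else is True
    run_hotplug_scripts=1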
OPTIONS

autoballoon="off"|"on"|"auto"

If set to "on" then xl will automatically reduce the amount of memory assigned to domain 0 in order to free memory for new domains.

If set to "off" then xl will not automatically reduce the amount of domain 0 memory.

If set to "auto" then auto-ballooning will be disabled if the dom0_mem option was provided on the Xen command line.

You are strongly recommended to set this to "off" (or "auto") if you use the dom0_mem hypervisor command line option to reduce the amount of memory given to domain 0 by default.
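For example, on a host booted with dom0_mem=4G on the Xen command line, a minimal sketch would be:

    # dom0 memory is fixed by the hypervisor command line,
    # so "auto" disables auto-ballooning
    autoballoon="auto"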
run_hotplug_scripts=BOOLEAN

If disabled, hotplug scripts will be called from udev, as in previous releases. With the default setting, hotplug scripts are launched by xl directly.
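A sketch of the udev-compatible behaviour (the value is illustrative):

    # fall back to udev-driven hotplug scripts, as in older releases
    run_hotplug_scripts=0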
lockfile="PATH"

Sets the path to the lock file used by xl to serialise certain operations (primarily domain creation).
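For example (the path shown is a common default, but it is distribution-dependent):

    lockfile="/var/lock/xl"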
max_grant_frames=NUMBER

Sets the default value for the max_grant_frames domain config value.

Default: 32 on hosts with up to 16TB of memory, 64 on hosts with more than 16TB.
max_maptrack_frames=NUMBER

Sets the default value for the max_maptrack_frames domain config value.
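As a sketch, a host whose guests attach many backend devices might raise both limits (the values are illustrative only):

    max_grant_frames=64
    max_maptrack_frames=1024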
vif.default.script="PATH"

Configures the default hotplug script used by virtual network devices.

The old vifscript option is deprecated and should not be used.
vif.default.bridge="NAME"

Configures the default bridge to set for virtual network devices.

The old defaultbridge option is deprecated and should not be used.
vif.default.backend="NAME"

Configures the default backend to set for virtual network devices.
vif.default.gatewaydev="NAME"

Configures the default gateway device to set for virtual network devices.
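Taken together, a hypothetical vif configuration might look like this (the script, bridge and device names are illustrative):

    vif.default.script="vif-bridge"
    vif.default.bridge="xenbr0"
    vif.default.backend="0"
    vif.default.gatewaydev="eth0"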
remus.default.netbufscript="PATH"

Configures the default script used by Remus to set up network buffering.
colo.default.proxyscript="PATH"

Configures the default script used by COLO to set up the colo-proxy.
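For example (these paths match the helper scripts shipped in the Xen tree, but may differ per distribution):

    remus.default.netbufscript="/etc/xen/scripts/remus-netbuf-setup"
    colo.default.proxyscript="/etc/xen/scripts/colo-proxy-setup"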
output_format="json"|"sxp"

Configures the default output format used by xl when printing "machine readable" information. The default is to use the JSON <http://www.json.org/> syntax. However, for compatibility with the previous xm toolstack this can be configured to use the old SXP (S-Expression-like) syntax instead.
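A sketch for xm-compatible output:

    # emit S-Expressions instead of JSON
    output_format="sxp"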
blkdev_start="NAME"

Configures the name of the first block device to be used for temporary block device allocations by the toolstack. The default choice is "xvda".
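For example, to keep xvda and xvdb free for guest use (the value is illustrative):

    blkdev_start="xvdc"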
claim_mode=BOOLEAN

If this option is enabled then when a guest is created there will be a guarantee that there is memory available for the guest. This is a particularly acute problem on hosts with memory over-provisioned guests that use tmem and have self-ballooning enabled (which is the default). The self-balloon mechanism can deflate/inflate the balloon quickly, so the amount of free memory (which "xl info" can show) is stale the moment it is printed. When claim is enabled, a reservation for the amount of memory (see 'memory' in xl.cfg(5)) is set, which is then reduced as the domain's memory is populated and eventually reaches zero.

The free memory reported by "xl info" is the hypervisor's free heap memory minus the outstanding claims value.

If the reservation cannot be met, guest creation fails immediately instead of taking seconds or minutes (depending on the size of the guest) while the guest is populated.

Note that to enable tmem-type guests, one needs to provide "tmem" on both the Xen hypervisor command line and the Linux kernel command line.

The possible values are:
0
    No claim is made. Memory population during guest creation will be attempted as normal and may fail due to memory exhaustion.

1
    Normal memory and the freeable pool of ephemeral pages (tmem) are used when calculating whether there is enough free memory to launch a guest. This guarantees immediate feedback on whether the guest can be launched in the face of memory exhaustion (which can otherwise take a long time to discover when launching very large guests).
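A sketch enabling the claim mechanism:

    # reserve the guest's memory up front so creation fails fast
    # if the claim cannot be satisfied
    claim_mode=1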
vm.cpumask="CPULIST"
vm.hvm.cpumask="CPULIST"
vm.pv.cpumask="CPULIST"

Global masks that are applied when creating guests and pinning vcpus, to indicate which cpus they are allowed to run on. Specifically, vm.cpumask applies to all guest types, vm.hvm.cpumask applies to both HVM and PVH guests and vm.pv.cpumask applies to PV guests.
The hard affinity of a guest's vcpus is logically ANDed with the respective mask. If the resulting affinity mask is empty, the operation will fail.
Use xl's --ignore-global-affinity-masks option to skip applying the global masks.
The default value for these masks is all 1s, i.e. all cpus are allowed.
Due to bug(s), these options may not interact well with other options concerning CPU affinity, one example being CPU pools. Users should always double-check that the required affinity has taken effect.
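As an illustrative sketch (the cpu lists are examples; CPULIST follows the same syntax as xl vcpu-pin):

    # confine all guests to cpus 0-7, and PV guests further to cpus 4-7
    vm.cpumask="0-7"
    vm.pv.cpumask="4-7"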