pungi - Man Page


pungi — Pungi Documentation


About Pungi

[image: Pungi Logo]

Pungi is a distribution compose tool.

Composes are release snapshots that contain release deliverables such as package repositories, installer images and other media.

Tool overview

Pungi consists of multiple separate executables backed by a common library.

The main entry-point is the pungi-koji script. It loads the compose configuration and kicks off the process. Composing itself is done in phases. Each phase is responsible for generating some artifacts on disk and updating the compose object that is threaded through all the phases.

Pungi itself does not actually do that much. Most of the actual work is delegated to separate executables. Pungi just makes sure that all the commands are invoked in the appropriate order and with correct arguments. It also moves the artifacts to correct locations.

The executable name pungi-koji comes from the fact that most of those separate executables submit tasks to Koji that does the actual work in an auditable way.

However, unlike doing everything manually in Koji, Pungi will make sure you are building all images from the same package set, and it will even produce deliverables that Koji cannot create, such as YUM repos and installer ISOs.

Origin of name

The name Pungi comes from the instrument used to charm snakes. Anaconda being the software Pungi was manipulating, and anaconda being a snake, led to the referential naming.

The first name, which was suggested by Seth Vidal, was FIST, Fedora Installation <Something> Tool. That name was quickly discarded and replaced with Pungi.

There was also a bit of an inside joke that when said aloud, it could sound like punji, which is a sharpened stick at the bottom of a trap. Kind of like software…


Phases

Each invocation of pungi-koji consists of a set of phases. [image: phase diagram]

Most of the phases run sequentially (left-to-right in the diagram), but there are use cases where multiple phases run in parallel. This happens for phases whose main point is to wait for a Koji task to finish.


Init

The first phase to ever run. It cannot be skipped. It prepares the comps files for variants (by filtering out groups and packages that should not be there). See Processing comps files for details about how this is done.


Pkgset

This phase loads the set of packages that should be composed. It has two separate results: it prepares repos with packages in the work/ directory (one per arch) for further processing, and it returns a data structure with a mapping of packages to architectures.


Buildinstall

Spawns a number of threads, each of which runs either the lorax or buildinstall command (the latter coming from the anaconda package). The commands create boot.iso and other boot configuration files. The image is finally linked into the compose/ directory as netinstall media.

The created images are also needed for creating live media or other images in later phases.

With lorax this phase runs one task per variant.arch combination. For the buildinstall command there is only one task per architecture, and product.img should be used to customize the results.


Gather

This phase uses the data collected by the pkgset phase and figures out which packages should be in each variant. The basic mapping can come from a comps file, a JSON mapping or the additional_packages config option. These inputs can then be enriched by adding all dependencies. See Gathering packages for details.

Once the mapping is finalized, the packages are linked to appropriate places and the rpms.json manifest is created.


ExtraFiles

This phase collects extra files from the configuration and copies them to the compose directory. The files are described by a JSON file in the compose subtree where the files are copied. This metadata is meant to be distributed with the data (on ISO images).


Createrepo

This phase creates RPM repositories for each variant.arch tree. It reads the rpms.json manifest to figure out which packages should be included.


OSTree

Updates an ostree repository with a new commit with packages from the compose. The repository lives outside of the compose and is updated immediately. If the compose fails in a later stage, the commit will not be reverted.

Implementation wise, this phase runs rpm-ostree command in Koji runroot (to allow running on different arches).


Createiso

Generates ISO files and accumulates enough metadata to be able to create the images.json manifest. The file is however not created in this phase; instead it is dumped by the pungi-koji script itself.

The files include a repository with all RPMs from the variant. There will be multiple images if the packages do not fit on a single image.

The image will be bootable if buildinstall phase is enabled and the packages fit on a single image.

There can also be images with source repositories. These are never bootable.


ExtraIsos

This phase is very similar to createiso, except it combines content from multiple variants onto a single image. Packages, repodata and extra files from each configured variant are put into a subdirectory. Additional extra files can be put into the top level of the image. The image will be bootable if the main variant is bootable.

LiveImages, LiveMedia

Creates media in Koji with koji spin-livecd, koji spin-appliance or koji spin-livemedia command. When the media are finished, the images are copied into the compose/ directory and metadata for images is updated.


ImageBuild

This phase wraps koji image-build. It also updates the metadata that ultimately ends up in the images.json manifest.


Osbuild

Similarly to image build, this phase creates a Koji osbuild task. In the background it uses OSBuild Composer to create images.


Osbs

This phase builds container base images in OSBS.

The finished images are available in the registry provided by OSBS, but are not downloaded directly into the compose. Metadata about the created image is stored in compose/metadata/osbs.json.


ImageContainer

This phase builds a container image in OSBS, and stores the metadata in the same file as the Osbs phase. The container produced here wraps a different image, created in the ImageBuild or Osbuild phase. It can be useful for delivering a VM image to containerized environments.


OSTreeInstaller

Creates bootable media that carry an ostree repository as a payload. These images are created by running lorax with special templates. Again, it runs in Koji runroot.


Repoclosure

Runs repoclosure on each repository. By default errors are only reported in the log and the compose will still be considered a success. The actual error has to be looked up in the compose logs directory. Configuration allows customizing this.


ImageChecksum

Responsible for generating checksums for the images. The checksums are stored in the image manifest as well as in files on disk. The list of images to be processed is obtained from the image manifest. This way all images will get the same checksums irrespective of the phase that created them.


Test

This phase is supposed to run some sanity checks on the finished compose.

The only test is to check all images listed in the metadata and verify that they look sane. For ISO files, headers are checked to verify the format is correct, and for bootable media a check is run to verify they have properties that allow booting.

Config File Format

The configuration file parser is provided by kobo.

The file follows a Python-like format. It consists of a sequence of variables that have a value assigned to them.

variable = value

The variable names must follow the same convention as Python code: start with a letter and consist of letters, digits and underscores only.

The values can be either an integer, float, boolean (True or False), a string or None. Strings must be enclosed in either single or double quotes.

Complex types are supported as well.

A list is enclosed in square brackets and items are separated with commas. There can be a comma after the last item as well.

a_list = [1,
          2,
          3,
         ]

A tuple works like a list, but is enclosed in parenthesis.

a_tuple = (1, "one")

A dictionary is wrapped in brackets, and consists of key: value pairs separated by commas. The keys can only be formed from basic types (int, float, string).

a_dict = {
    'foo': 'bar',
    1: None,
}

The value assigned to a variable can also be taken from another variable.

one = 1
another = one

Anything on a line after a # symbol is ignored and functions as a comment.
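For instance (a minimal illustration; the variable name is made up):

```python
# Everything from the hash to the end of the line is ignored.
answer = 42  # Trailing comments work as well.
```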

Importing other files

It is possible to include another configuration file. The files are looked up relative to the currently processed file.

The general structure of import is:

from FILENAME import WHAT

The FILENAME should be just the base name of the file without extension (which must be .conf). WHAT can either be a comma separated list of variables or *.

# Opens constants.conf and brings PI and E into current scope.
from constants import PI, E

# Opens common.conf and brings everything defined in that file into current
# file as well.
from common import *

Pungi will copy the configuration file given on command line into the logs/ directory. Only this single file will be copied, not any included ones. (Copying included files requires a fix in kobo library.)

The JSON-formatted dump of configuration is correct though.

Formatting strings

String interpolation is available as well. It uses a %-encoded format. See Python documentation for more details.

joined = "%s %s" % (var_a, var_b)

a_dict = {
    "fst": 1,
    "snd": 2,
}
another = "%(fst)s %(snd)s" % a_dict


Please read productmd documentation for terminology and other release and compose related details.

Minimal Config Example

release_name = "Fedora"
release_short = "Fedora"
release_version = "23"

comps_file = "comps-f23.xml"
variants_file = "variants-f23.xml"

koji_profile = "koji"
runroot = False

sigkeys = [None]
pkgset_source = "koji"
pkgset_koji_tag = "f23"

gather_method = "deps"
greedy_method = "build"
check_deps = False

buildinstall_method = "lorax"


The following mandatory options describe a release.


release_name [mandatory]

(str) – release name

release_short [mandatory]

(str) – release short name, without spaces and special characters

release_version [mandatory]

(str) – release version

release_type = "ga"

(str) – release type, for example ga, updates or updates-testing. See the list of all valid values in productmd documentation.

release_internal = False

(bool) – whether the compose is internal and not meant for public consumption


treeinfo_version [optional]

(str) – Version to display in .treeinfo files. If not configured, the value from release_version will be used.


release_name = "Fedora"
release_short = "Fedora"
release_version = "23"
# release_type = "ga"

Base Product

Base product options are optional, and we need them only if we’re composing a layered product built on another (base) product.



base_product_name

(str) – base product name


base_product_short

(str) – base product short name, without spaces and special characters


base_product_version

(str) – base product major version

base_product_type = "ga"

(str) – base product type, "ga", "updates" etc.; for the full list see the documentation of productmd.


release_name = "RPM Fusion"
release_short = "rf"
release_version = "23.0"

base_product_name = "Fedora"
base_product_short = "Fedora"
base_product_version = "23"

General Settings


comps_file [mandatory]

(scm_dict, str or None) – reference to comps XML file with installation groups

variants_file [mandatory]

(scm_dict or str) – reference to variants XML file that defines release variants and architectures

module_defaults_dir [optional]

(scm_dict or str) – reference to the module defaults directory containing modulemd-defaults YAML documents. Files relevant for modules included in the compose will be embedded in the generated repodata and available for DNF.

module_defaults_dir = {
    "scm": "git",
    "repo": "https://pagure.io/releng/fedora-module-defaults.git",
    "dir": ".",
}

failable_deliverables [optional]

(list) – a list specifying which deliverables on which variant and architecture can fail without aborting the whole compose. This only applies to buildinstall and iso parts. All other artifacts can be configured in their respective parts of the configuration.

Please note that * as a wildcard matches all architectures but src.

comps_filter_environments [optional]

(bool) – When set to False, the comps files for variants will not have their environments filtered to match the variant.


tree_arches

([str]) – list of architectures which should be included; if undefined, all architectures from variants.xml will be included


tree_variants

([str]) – list of variants which should be included; if undefined, all variants from variants.xml will be included


repoclosure_strictness

(list) – variant/arch mapping describing how repoclosure should run. Possible values are

  • off – do not run repoclosure
  • lenient – (default) run repoclosure and write results to logs, but detected errors are only reported in logs
  • fatal – abort compose when any issue is detected

When multiple blocks in the mapping match a variant/arch combination, the last value will win.
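The last-match-wins resolution can be sketched as follows (illustrative Python, not Pungi's actual implementation; the function and variable names are made up):

```python
import re

def resolve(mapping, variant, arch, default="lenient"):
    """Resolve a variant/arch mapping; later matching blocks override earlier ones."""
    value = default
    for variant_re, arches in mapping:
        if re.match(variant_re, variant):
            # An exact arch key takes precedence over the '*' wildcard.
            if arch in arches:
                value = arches[arch]
            elif "*" in arches:
                value = arches["*"]
    return value

strictness = [
    ("^.*$", {"*": "fatal"}),
    ("^Everything$", {"*": "off"}),
]
```

With this mapping, Server resolves to fatal, while Everything resolves to off because the second block matches later.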


repoclosure_backend

(str) – Select which tool should be used to run repoclosure over created repositories. By default yum is used, but you can switch to dnf. Please note that when dnf is used, the build dependencies check is skipped. On Python 3, only the dnf backend is available.

See also: the gather_backend setting for Pungi’s gather phase.


cts_url

(str) – URL to Compose Tracking Service. If defined, Pungi will add the compose to Compose Tracking Service and get the compose ID from it. For example https://cts.localhost.tld/


cts_keytab

(str) – Path to a Kerberos keytab which will be used for Compose Tracking Service Kerberos authentication. If not defined, the default Kerberos principal is used.


cts_oidc_token_url

(str) – URL to the OIDC token endpoint. For example https://oidc.example.com/openid-connect/token. This option can be overridden by the environment variable CTS_OIDC_TOKEN_URL.


cts_oidc_client_id

(str) – OIDC client ID. This option can be overridden by the environment variable CTS_OIDC_CLIENT_ID. Note that the environment variable CTS_OIDC_CLIENT_SECRET must be configured with the corresponding client secret to authenticate to CTS via OIDC.


compose_type

(str) – Allows setting the default compose type. A type set via the command-line option overrides this.


mbs_api_url

(str) – URL to the Module Build Service (MBS) API. For example https://mbs.example.com/module-build-service/2. This is required by pkgset_scratch_modules.


comps_file = {
    "scm": "git",
    "repo": "https://git.fedorahosted.org/git/comps.git",
    "branch": None,
    "file": "comps-f23.xml.in",
}

variants_file = {
    "scm": "git",
    "repo": "https://pagure.io/pungi-fedora.git",
    "branch": None,
    "file": "variants-fedora.xml",
}

failable_deliverables = [
    ('^.*$', {
        # Buildinstall can fail on any variant and any arch
        '*': ['buildinstall'],
        'src': ['buildinstall'],
        # Nothing on i386 blocks the compose
        'i386': ['buildinstall', 'iso', 'live'],
    })
]

tree_arches = ["x86_64"]
tree_variants = ["Server"]

repoclosure_strictness = [
    # Make repoclosure failures fatal for compose on all variants …
    ('^.*$', {'*': 'fatal'}),
    # … except for Everything where it should not run at all.
    ('^Everything$', {'*': 'off'})
]

Image Naming

Both the image name and the volume id are generated based on the configuration. Since the volume id is limited to 32 characters, there are more settings available for it. The process for generating a volume id is to get a list of possible formats and try them sequentially until one fits in the length limit. If substitutions are configured, each attempted volume id will be modified by them.

For layered products, the candidate formats are first image_volid_layered_product_formats followed by image_volid_formats. Otherwise, only image_volid_formats are tried.

If no format matches the length limit, an error will be reported and compose aborted.
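The selection process can be sketched as follows (illustrative Python, not Pungi's actual code; all names are made up):

```python
MAX_VOLID_LEN = 32  # volume id length limit

def pick_volume_id(formats, substitutions, **values):
    """Try each candidate format in order and return the first that fits.

    Illustrative only; Pungi's real logic lives in its own codebase.
    """
    for fmt in formats:
        volid = fmt % values
        # Apply configured shortening substitutions to each attempt.
        for old, new in substitutions.items():
            volid = volid.replace(old, new)
        if len(volid) <= MAX_VOLID_LEN:
            return volid
    raise RuntimeError("No volume id format fits within %d characters" % MAX_VOLID_LEN)

volid = pick_volume_id(
    ["%(release_short)s-%(variant)s-%(arch)s-%(version)s"],
    {"Cloud": "C"},
    release_short="Fedora", variant="Cloud", arch="x86_64", version="23",
)
```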


There are a couple of common format specifiers available for both options:
  • compose_id
  • release_short
  • version
  • date
  • respin
  • type
  • type_suffix
  • label
  • label_major_version
  • variant
  • arch
  • disc_type
image_name_format [optional]

(str|dict) – Python’s format string to serve as a template for image names. The value can also be a dict mapping variant UID regexes to format strings. The patterns should not overlap; otherwise it is undefined which one will be used.

This format will be used for all phases generating images. Currently that means createiso, live_images and buildinstall.

Available extra keys are:
  • disc_num
  • suffix
image_volid_formats [optional]

(list) – A list of format strings for generating volume id.

The extra available keys are:
  • base_product_short
  • base_product_version
image_volid_layered_product_formats [optional]

(list) – A list of format strings for generating volume id for layered products. The keys available are the same as for image_volid_formats.

restricted_volid = False

(bool) – New versions of lorax replace all non-alphanumerical characters with dashes (underscores are preserved). This option will mimic similar behaviour in Pungi.

volume_id_substitutions [optional]

(dict) – A mapping of string replacements to shorten the volume id.

disc_types [optional]

(dict) – A mapping for customizing disc_type used in image names.

Available keys are:
  • boot – for boot.iso images created in the buildinstall phase
  • live – for images created by live_images phase
  • dvd – for images created by createiso phase
  • ostree – for ostree installer images

Default values are the same as the keys.


# Image name respecting Fedora's image naming policy
image_name_format = "%(release_short)s-%(variant)s-%(disc_type)s-%(arch)s-%(version)s%(suffix)s"
# Use the same format for volume id
image_volid_formats = [
    "%(release_short)s-%(variant)s-%(disc_type)s-%(arch)s-%(version)s",
]
# No special handling for layered products, use same format as for regular images
image_volid_layered_product_formats = []
# Replace "Cloud" with "C" in volume id etc.
volume_id_substitutions = {
    'Cloud': 'C',
    'Alpha': 'A',
    'Beta': 'B',
    'TC': 'T',
}

disc_types = {
    'boot': 'netinst',
    'live': 'Live',
    'dvd': 'DVD',
}


Signing

If you want to sign deliverables generated during a Pungi run, such as RPM-wrapped images, you must provide a few configuration options:

signing_command [optional]

(str) – Command that will be run with a koji build as a single argument. This command must not require any user interaction. If you need to pass a password for a signing key to the command, do this via command line option of the command and use string formatting syntax %(signing_key_password)s. (See signing_key_password_file).

signing_key_id [optional]

(str) – ID of the key that will be used for the signing. This ID will be used when crafting koji paths to signed files (kojipkgs.fedoraproject.org/packages/NAME/VER/REL/data/signed/KEYID/..).

signing_key_password_file [optional]

(str) – Path to a file with the password that will be formatted into the signing_command string via the %(signing_key_password)s string format syntax (if used). Because the pungi config is usually stored in git and is part of the compose logs, we don’t want the password included directly in the config. Note: if the string - is used instead of a filename, you will be asked for the password interactively right after pungi starts.


signing_command = '~/git/releng/scripts/sigulsign_unsigned.py -vv --password=%(signing_key_password)s fedora-24'
signing_key_id = '81b46521'
signing_key_password_file = '~/password_for_fedora-24_key'

Git URLs

In multiple places the config requires the URL of a Git repository to download some file from. This URL is passed on to Koji. It is possible to specify which commit to use using this syntax:

<git repository URL>#<rev_spec>

The <rev_spec> pattern can be replaced with an actual commit SHA, a tag name, HEAD to indicate that the tip of the default branch should be used, or origin/<branch_name> to use the tip of an arbitrary branch.

If the URL specifies a branch or HEAD, Pungi will replace it with the actual commit SHA. This will later show up in Koji tasks and help with tracing what particular inputs were used.
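For example (the branch name here is hypothetical):

```
# Tip of the default branch; Pungi resolves this to a commit hash.
https://pagure.io/pungi-fedora.git#HEAD

# Tip of the f23 branch; note that the origin/ prefix is required.
https://pagure.io/pungi-fedora.git#origin/f23
```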


The origin must be specified because of the way Koji works with the repository. It will clone the repository and then switch to the requested state with git reset --hard REF. Since no local branches are created, we need to use the full specification including the name of the remote.

Createrepo Settings



createrepo_checksum

(str) – specify the checksum type for createrepo; expected values: sha512, sha256, sha1. Defaults to sha256.

createrepo_c = True

(bool) – use createrepo_c (True) or legacy createrepo (False)

createrepo_deltas = False

(list) – generate delta RPMs against an older compose. This needs to be used together with --old-composes command line argument. The value should be a mapping of variants and architectures that should enable creating delta RPMs. Source and debuginfo repos never have deltas.

createrepo_use_xz = False

(bool) – whether to pass --xz to the createrepo command. This will cause the SQLite databases to be compressed with xz.


createrepo_num_threads

(int) – how many concurrent createrepo processes to run. The default is to use one thread per CPU available on the machine.


createrepo_num_workers

(int) – how many concurrent createrepo workers to run. Defaults to 3.


createrepo_database

(bool) – whether to create an SQLite database as part of the repodata. This is only useful as an optimization for clients using Yum to consume the repo. The default value depends on the gather backend: for DNF it is turned off, for Yum the default is True.


createrepo_extra_args

([str]) – a list of extra arguments passed on to the createrepo or createrepo_c executable. This could be useful for enabling zchunk generation and pointing it to correct dictionaries.


createrepo_extra_modulemd

(dict) – a mapping of variant UID to an scm dict. If specified, it should point to a directory with extra module metadata YAML files that will be added to the repository for this variant. The cloned files should be split into subdirectories for each architecture of the variant.

createrepo_enable_cache = True

(bool) – whether to use the --cachedir option of createrepo. It will cache and reuse checksum values to speed up the createrepo phase. The cache dir is located at /var/cache/pungi/createrepo_c/$release_short-$uid, e.g. /var/cache/pungi/createrepo_c/Fedora-1000

product_id = None

(scm_dict) – If specified, it should point to a directory with certificates *<variant_uid>-<arch>-*.pem. Pungi will copy each certificate file into the relevant Yum repositories as a productid file in the repodata directories. The purpose of these productid files is to expose the product data to subscription-manager. subscription-manager includes a “product-id” Yum plugin that can read these productid certificate files from each Yum repository.

product_id_allow_missing = False

(bool) – When product_id is used and a certificate for some variant and architecture is missing, Pungi will exit with an error by default. When you set this option to True, Pungi will ignore the missing certificate and simply log a warning message.

product_id_allow_name_prefix = True

(bool) – Allow arbitrary prefix for the certificate file name (see leading * in the pattern above). Setting this option to False will make the pattern more strict by requiring the file name to start directly with variant name.


createrepo_checksum = "sha256"
createrepo_deltas = [
    # All arches for Everything should have deltas.
    ('^Everything$', {'*': True}),
    # Also Server.x86_64 should have them (but not on other arches).
    ('^Server$', {'x86_64': True}),
]
createrepo_extra_modulemd = {
    "Server": {
        "scm": "git",
        "repo": "https://example.com/extra-server-modulemd.git",
        "dir": ".",
        # The directory should have this layout. Each architecture for the
        # variant should be included (even if the directory is empty).
        # .
        # ├── aarch64
        # │   ├── some-file.yaml
        # │   └── ...
        # └── x86_64
    }
}

Package Set Settings



sigkeys

([str or None]) – priority list of signing key IDs. These key IDs match the key IDs for the builds in Koji. Pungi will choose signed packages according to the order of the key IDs that you specify here. Use one single key in this list to ensure that all RPMs are signed by that key. If the list includes an empty string or None, Pungi will allow unsigned packages. If the list includes only None, Pungi will use only unsigned packages.

pkgset_source [mandatory]

(str) – “koji” (any koji instance) or “repos” (arbitrary yum repositories)


pkgset_koji_tag

(str|[str]) – tag(s) to read the package set from. This option can be omitted for modular composes.


pkgset_koji_builds

(str|[str]) – extra build(s) to include in the package set, defined as NVRs.


pkgset_koji_scratch_tasks

(str|[str]) – RPM scratch build task(s) to include in the package set, defined as task IDs. This option can be used only when compose_type is set to test. The RPM still needs to have a higher NVR than any other RPM with the same name coming from other sources in order to appear in the resulting compose.


pkgset_koji_module_tag

(str|[str]) – tags to read modules from. This option works similarly to listing tags in the variants XML. If tags are specified and the variants XML specifies some modules via NSVC (or a part of it), only modules matching that list will be used (and taken from the tag). Inheritance is used automatically.


pkgset_koji_module_builds

(dict) – A mapping of variants to extra module builds to include in a package set: {variant: [N:S:V:C]}.

pkgset_koji_inherit = True

(bool) – inherit builds from parent tags; we can turn it off only if we have all builds tagged in a single tag

pkgset_koji_inherit_modules = False

(bool) – the same as above, but this only applies to modular tags. This option applies to the content tags that contain the RPMs.


pkgset_repos

(dict) – A mapping of architectures to repositories with RPMs: {arch: [repo]}. Only use when pkgset_source = "repos".


pkgset_scratch_modules

(dict) – A mapping of variants to scratch module builds: {variant: [N:S:V:C]}. Requires mbs_api_url.

pkgset_exclusive_arch_considers_noarch = True

(bool) – If a package includes noarch in its ExclusiveArch tag, it will be included in all architectures since noarch is compatible with everything. Set this option to False to ignore noarch in ExclusiveArch and always consider only binary architectures.

pkgset_inherit_exclusive_arch_to_noarch = True

(bool) – When set to True, the value of ExclusiveArch or ExcludeArch will be copied from the source rpm to all its noarch packages. That will then limit which architectures the noarch packages can be included in.

By setting this option to False this step is skipped, and noarch packages will by default land in all architectures. They can still be excluded by listing them in a relevant section of filter_packages.

pkgset_allow_reuse = True

(bool) – When set to True, Pungi will try to reuse pkgset data from the old composes specified by --old-composes. When enabled, this option can speed up new composes because it does not need to calculate the pkgset data from Koji. However, if you block or unblock a package in Koji (for example) between composes, then Pungi may not respect those changes in your new compose.

signed_packages_retries = 0

(int) – In automated workflows, you might start a compose before Koji has completely written all signed packages to disk. In this case you may want Pungi to wait for the package to appear in Koji’s storage. This option controls how many times Pungi will retry looking for the signed copy.

signed_packages_wait = 30

(int) – Interval in seconds for how long to wait between attempts to find signed packages. This option only makes sense when signed_packages_retries is set higher than 0.
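For example, to wait a couple of minutes in total for each signed copy to appear (the values here are illustrative):

```python
# Retry up to 5 times, waiting 30 seconds between attempts
# (at most about 150 seconds of waiting per package).
signed_packages_retries = 5
signed_packages_wait = 30
```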


sigkeys = [None]
pkgset_source = "koji"
pkgset_koji_tag = "f23"

Buildinstall Settings

The script or process that creates bootable images with the Anaconda installer is historically called buildinstall.



buildinstall_method

(str) – "lorax" (f16+, rhel7+) or "buildinstall" (older releases)


lorax_options

(list) – special options passed on to lorax.

Format: [(variant_uid_regex, {arch|*: {option: name}})].

Recognized options are:
  • bugurl – [str] (default None)
  • nomacboot – [bool] (default True)
  • noupgrade – [bool] (default True)
  • add_template – [str] (default empty)
  • add_arch_template – [str] (default empty)
  • add_template_var – [str] (default empty)
  • add_arch_template_var – [str] (default empty)
  • rootfs_size – [int] (default empty)
  • version – [str] (default from treeinfo_version or release_version) – used as the --version and --release argument on the lorax command line
  • dracut_args – [[str]] (default empty) – override arguments for dracut. Please note that if this option is used, lorax will not use any other arguments, so you have to provide a full list and can not just add something.
  • skip_branding – [bool] (default False)
  • squashfs_only – [bool] (default False) – pass the --squashfs-only flag to lorax.
  • configuration_file – (scm_dict) (default empty) – pass the specified configuration file to lorax using the -c option.

lorax_extra_sources

(list) – a variant/arch mapping with URLs for extra source repositories added to the lorax command line. Either one repo or a list can be specified.

lorax_use_koji_plugin = False

(bool) – When set to True, the Koji pungi_buildinstall task will be used to execute Lorax instead of runroot. Use only if the Koji instance has the pungi_buildinstall plugin installed.


buildinstall_kickstart

(scm_dict) – If specified, this kickstart file will be copied into each file and pointed to in boot configuration.


buildinstall_topdir

(str) – Full path to the top directory where the output of runroot buildinstall Koji tasks should be stored. This is useful in situations when the Pungi compose is not generated on the same storage the Koji task runs on. In this case, Pungi can provide the input repository for the runroot task over HTTP and set the output directory for this task to buildinstall_topdir. Once the runroot task finishes, Pungi copies its results to the compose working directory.


buildinstall_skip

(list) – mapping that defines which variants and arches to skip during buildinstall; format: [(variant_uid_regex, {arch|*: True})]. This is only supported for lorax.

buildinstall_allow_reuse = False

(bool) – When set to True, Pungi will try to reuse buildinstall results from old compose specified by --old-composes.


buildinstall_packages

(list) – Additional packages to be installed in the runroot environment where lorax will run to create the installer. Format: [(variant_uid_regex, {arch|*: [package_globs]})].


buildinstall_method = "lorax"

# Enables macboot on x86_64 for all variants and builds upgrade images
# everywhere.
lorax_options = [
    ("^.*$", {
        "x86_64": {
            "nomacboot": False
        },
        "*": {
            "noupgrade": False
        }
    })
]

# Don't run buildinstall phase for Modular variant
buildinstall_skip = [
    ('^Modular', {
        '*': True
    })
]

# Add another repository for lorax to install packages from
lorax_extra_sources = [
    ('^Simple$', {
        '*': 'https://example.com/repo/$basearch/',
    })
]

# Additional packages to be installed in the Koji runroot environment where
# lorax will run.
buildinstall_packages = [
    ('^Simple$', {
        '*': ['dummy-package'],
    })
]

It is advised to run buildinstall (lorax) in Koji, i.e. with runroot enabled, for clean build environments, better logging, etc.


Lorax installs RPMs into a chroot. This involves running %post scriptlets and they frequently run executables in the chroot. If we’re composing for multiple architectures, we must use runroot for this reason.

Gather Settings


gather_method [mandatory]

(str|dict) – Options are deps, nodeps and hybrid. Specifies whether and how package dependencies should be pulled in. The configuration can be one value for all variants, or, if configured per variant, it can be the simple string hybrid or a dictionary mapping source type to a value of deps or nodeps. Make sure only one regex matches each variant, as there is no guarantee which value will be used if there are multiple matching ones. All used sources must have a configured method unless hybrid solving is used.

gather_fulltree = False

(bool) – When set to True all RPMs built from an SRPM will always be included. Only use when gather_method = "deps".

gather_selfhosting = False

(bool) – When set to True, Pungi will build a self-hosting tree by following build dependencies. Only use when gather_method = "deps".

gather_allow_reuse = False

(bool) – When set to True, Pungi will try to reuse gather results from old compose specified by --old-composes.

greedy_method = none

(str) – This option controls how package requirements are satisfied in case a particular Requires has multiple candidates.

  • none – the best package is selected to satisfy the dependency and only that one is pulled into the compose
  • all – packages that provide the symbol are pulled in
  • build – the best package is selected, and then all packages from the same build that provide the symbol are pulled in

As an example let’s work with this situation: a package in the compose has Requires: foo. There are three packages with Provides: foo: pkg-a, pkg-b-provider-1 and pkg-b-provider-2. The pkg-b-* packages are built from the same source package. Best match determines pkg-b-provider-1 as the best matching package.

  • With greedy_method = "none" only pkg-b-provider-1 will be pulled in.
  • With greedy_method = "all" all three packages will be pulled in.
  • With greedy_method = "build" pkg-b-provider-1 and pkg-b-provider-2 will be pulled in.

gather_backend

(str) – This changes the entire codebase doing dependency solving, so it can change the result in unpredictable ways.

On Python 2, the choice is between yum and dnf, defaulting to yum. On Python 3, dnf is the only option and the default.

In particular, the multilib work is performed differently, using the python-multilib library. Refer to the multilib option to see the differences.

See also: the repoclosure_backend setting for Pungi’s repoclosure phase.


multilib

(list) – mapping of variant regexes and arches to list of multilib methods

Available methods are:
  • none – no package matches this method
  • all – all packages match this method
  • runtime – packages that install some shared object file (*.so.*) will match.
  • devel – packages whose name ends with a -devel or -static suffix will be matched. When dnf is used, this method automatically enables the runtime method as well. With the yum backend this method also uses a hardcoded blacklist and whitelist.
  • kernel – packages providing kernel or kernel-devel match this method (only in yum backend)
  • yaboot – only yaboot package on ppc arch matches this (only in yum backend)

additional_packages

(list) – additional packages to be included in a variant and architecture; format: [(variant_uid_regex, {arch|*: [package_globs]})]

In contrast to the comps_file setting, the additional_packages setting merely adds the list of packages to the compose. When a package is in a comps group, it is visible to users via dnf groupinstall and Anaconda’s Groups selection, but additional_packages does not affect DNF groups.

The packages specified here are matched against RPM names, not against other Provides in the package nor the name of the source package. Shell globbing is used, so wildcards are possible. The package can be specified as name only or name.arch.

With dnf gathering backend, you can specify a debuginfo package to be included. This is meant to include a package if autodetection does not get it. If you add a debuginfo package that does not have anything else from the same build included in the compose, the sources will not be pulled in.

If you list a package in additional_packages but Pungi cannot find it (for example, it’s not available in the Koji tag), Pungi will log a warning in the “work” or “logs” directories and continue without aborting.

Example: This configuration will add all packages in a Koji tag to an “Everything” variant:

additional_packages = [
    ('^Everything$', {
        '*': [
            '*',
        ],
    }),
]

filter_packages

(list) – packages to be excluded from a variant and architecture; format: [(variant_uid_regex, {arch|*: [package_globs]})]

See additional_packages for details about package specification.


filter_modules

(list) – modules to be excluded from a variant and architecture; format: [(variant_uid_regex, {arch|*: [name:stream]})]

Both name and stream can use shell-style globs. If stream is omitted, all streams are removed.

This option only applies to modules taken from Koji tags, not modules explicitly listed in variants XML without any tags.
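As a sketch of the format (the module name here is hypothetical):

```python
# Remove all streams of a hypothetical module from the Modular variant.
filter_modules = [
    ('^Modular$', {
        '*': [
            'dummy-module:*',
        ],
    }),
]
```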


filter_system_release_packages

(bool) – for each variant, figure out the best system release package and filter out all others. This will not work if a variant needs more than one system release package. In such a case, set this option to False.

gather_prepopulate = None

(scm_dict) – If specified, you can use this to add additional packages. The format of the file pointed to by this option is a JSON mapping {variant_uid: {arch: {build: [package]}}}. Packages added through this option can not be removed by filter_packages.


multilib_blacklist

(dict) – multilib blacklist; format: {arch|*: [package_globs]}.

See additional_packages for details about package specification.


multilib_whitelist

(dict) – multilib whitelist; format: {arch|*: [package_names]}. The whitelist must contain exact package names; there are no wildcards or pattern matching.

gather_lookaside_repos = []

(list) – lookaside repositories used for package gathering; format: [(variant_uid_regex, {arch|*: [repo_urls]})]

The repo_urls are passed to the depsolver, which can use packages in the repos for satisfying dependencies, but the packages themselves are not pulled into the compose. The repo_urls can contain $basearch variable, which will be substituted with proper value by the depsolver.

The repo_urls are used by repoclosure too, but it currently cannot parse $basearch, which will cause the repoclosure phase to crash. The repoclosure_strictness option can be used to stop running repoclosure.

Please note that * as a wildcard matches all architectures but src.

hashed_directories = False

(bool) – put packages into “hashed” directories, for example Packages/k/kernel-4.0.4-301.fc22.x86_64.rpm

check_deps = True

(bool) – Set to False if you don’t want the compose to abort when some package has broken dependencies.

require_all_comps_packages = False

(bool) – Set to True to abort the compose when a package mentioned in the comps file cannot be found in the package set. When disabled (the default), such cases are still reported as warnings in the log.

With the dnf gather backend, this option will abort the compose on any missing package, regardless of whether it is listed in comps, additional_packages or the prepopulate file.


gather_source_mapping

(str) – JSON mapping with initial packages for the compose. The value should be a path to a JSON file with the following mapping: {variant: {arch: {rpm_name: [rpm_arch|None]}}}. Relative paths are interpreted relative to the location of the main config file.
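For illustration, a minimal mapping file following that format might look like this (the variant and package names are hypothetical):

```json
{
    "Server": {
        "x86_64": {
            "dummy-bash": ["x86_64", null]
        }
    }
}
```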

gather_profiler = False

(bool) – When set to True the gather tool will produce additional performance profiling information at the end of its logs.  Only takes effect when gather_backend = "dnf".


variant_as_lookaside

(list) – a variant/variant mapping that specifies which variants in the compose use other variants in the compose as a lookaside. Only top level variants are supported (not addons/layered products). Format: [(variant_uid, variant_uid)]


gather_method = "deps"
greedy_method = "build"
check_deps = False
hashed_directories = True

gather_method = {
    "^Everything$": {
        "comps": "deps"     # traditional content defined by comps groups
    },
    "^Modular$": {
        "module": "nodeps"  # Modules do not need dependencies
    },
    "^Mixed$": {            # Mixed content in one variant
        "comps": "deps",
        "module": "nodeps"
    },
    "^OtherMixed$": "hybrid",   # Using hybrid depsolver
}

additional_packages = [
    # bz#123456
    ('^(Workstation|Server)$', {
        '*': [
            # ...
        ],
    }),
]

filter_packages = [
    # bz#111222
    ('^.*$', {
        '*': [
            # ...
        ],
    }),
]

multilib = [
    ('^Server$', {
        'x86_64': ['devel', 'runtime']
    })
]

multilib_blacklist = {
    "*": [
        # ...
    ],
}

multilib_whitelist = {
    "*": [
        # ...
    ],
}

# gather_lookaside_repos = [
#     ('^.*$', {
#         '*': [
#             "https://dl.fedoraproject.org/pub/fedora/linux/releases/22/Everything/$basearch/os/",
#         ],
#         'x86_64': [
#             "https://dl.fedoraproject.org/pub/fedora/linux/releases/22/Everything/source/SRPMS/",
#         ]
#     }),
# ]

It is a good practice to attach bug/ticket numbers to additional_packages, filter_packages, multilib_blacklist and multilib_whitelist to track decisions.

Koji Settings



koji_profile

(str) – koji profile name. This tells Pungi how to communicate with your chosen Koji instance. See Koji’s documentation about profiles for more information about how to set up your Koji client profile. In the examples, the profile name is “koji”, which points to Fedora’s koji.fedoraproject.org.


global_runroot_method

(str) – global runroot method to use. If runroot_method is set per Pungi phase using a dictionary, this option defines the default runroot method for phases not mentioned in the runroot_method dictionary.


runroot_method

(str|dict) – Runroot method to use. It can further specify the runroot method in case runroot is set to True.

Available methods are:
  • local – runroot tasks are run locally
  • koji – runroot tasks are run in Koji
  • openssh – runroot tasks are run on remote machine connected using OpenSSH. The runroot_ssh_hostnames for each architecture must be set and the user under which Pungi runs must be configured to login as runroot_ssh_username using the SSH key.

The runroot method can also be set per Pungi phase using the dictionary with phase name as key and runroot method as value. The default runroot method is in this case defined by the global_runroot_method option.


global_runroot_method = "koji"
runroot_method = {
    "createiso": "local"
}

runroot_channel

(str) – name of koji channel


runroot_tag

(str) – name of koji build tag used for runroot


runroot_weights

(dict) – customize task weights for various runroot tasks. The values in the mapping should be integers, the keys can be selected from the following list. By default no weight is assigned and Koji picks the default one according to policy.

  • buildinstall
  • createiso
  • ostree
  • ostree_installer
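A sketch of what this could look like (the weight values here are arbitrary):

```python
# Assign custom Koji task weights to two of the runroot task types.
runroot_weights = {
    'buildinstall': 123,
    'createiso': 5,
}
```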


koji_profile = "koji"
runroot_channel = "runroot"
runroot_tag = "f23-build"

Runroot “openssh” method settings



runroot_ssh_username

(str) – For openssh runroot method, configures the username used to log in to the remote machine to run the runroot task. Defaults to “root”.


runroot_ssh_hostnames

(dict) – For openssh runroot method, defines the hostname for each architecture on which the runroot task should be running. Format: {"x86_64": "runroot-x86-64.localhost.tld", ...}


runroot_ssh_init_template

(str) [optional] – For openssh runroot method, defines the command to initialize the runroot task on the remote machine. This command is executed as the first command for each runroot task.

The command can print a string which is then available as {runroot_key} for other SSH commands. This string might be used to keep the context across different SSH commands executed for single runroot task.

The goal of this command is to set up the environment for the real runroot commands, for example preparing a unique mock environment, mounting the desired file systems, and so on.

The command string can contain following variables which are replaced by the real values before executing the init command:

  • {runroot_tag} - Tag to initialize the runroot environment from.

When not set, no init command is executed.
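For illustration, a hypothetical remote script that prepares a mock environment for the given tag and prints an identifier (which then becomes {runroot_key} for the install and run templates) could be wired in like this:

```python
# Hypothetical init script; it must print a string that later templates
# can reference as {runroot_key}.
runroot_ssh_init_template = "/usr/local/bin/init-runroot {runroot_tag}"
```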


runroot_ssh_install_packages_template

(str) [optional] – For openssh runroot method, defines the template for the command that installs the packages required to run the runroot task.

The template string can contain following variables which are replaced by the real values before executing the install command:

  • {runroot_key} - Replaced with the string returned by runroot_ssh_init_template if used. This can be used to keep the track of context of SSH commands belonging to single runroot task.
  • {packages} - Whitespace-separated list of packages to install.

Example (The {runroot_key} is expected to be set to mock config file using the runroot_ssh_init_template command.): "mock -r {runroot_key} --install {packages}"

When not set, no command to install packages on remote machine is executed.


runroot_ssh_run_template

(str) [optional] – For openssh runroot method, defines the template for the main runroot command.

The template string can contain the following variables which are replaced by the real values before executing the run command:

  • {runroot_key} - Replaced with the string returned by runroot_ssh_init_template if used. This can be used to keep the track of context of SSH commands belonging to single runroot task.
  • {command} - Command to run.

Example (The {runroot_key} is expected to be set to mock config file using the runroot_ssh_init_template command.): "mock -r {runroot_key} chroot -- {command}"

When not set, the runroot command is run directly.

Extra Files Settings



extra_files

(list) – references to external files to be placed in os/ directory and media; format: [(variant_uid_regex, {arch|*: [scm_dict]})]. See Exporting files from SCM for details. If the dict specifies a target key, an additional subdirectory will be used.


extra_files = [
    ('^.*$', {
        '*': [
            # GPG keys
            {
                "scm": "rpm",
                "repo": "fedora-repos",
                "branch": None,
                "file": [
                    # ...
                ],
                "target": "",
            },
            # GPL
            {
                "scm": "git",
                "repo": "https://pagure.io/pungi-fedora",
                "branch": None,
                "file": [
                    "GPL",
                ],
                "target": "",
            },
        ],
    }),
]

Extra Files Metadata

If extra files are specified, a metadata file, extra_files.json, is placed in the os/ directory and on media. The checksums generated are determined by the media_checksums option. This metadata file has the following format:

{
  "header": {"version": "1.0"},
  "data": [
    {
      "file": "GPL",
      "checksums": {
        "sha256": "8177f97513213526df2cf6184d8ff986c675afb514d4e68a404010521b880643"
      },
      "size": 18092
    },
    {
      "file": "release-notes/notes.html",
      "checksums": {
        "sha256": "82b1ba8db522aadf101dca6404235fba179e559b95ea24ff39ee1e5d9a53bdcb"
      },
      "size": 1120
    }
  ]
}

CreateISO Settings


createiso_skip = False

(list) – mapping that defines which variants and arches to skip during createiso; format: [(variant_uid_regex, {arch|*: True})]


createiso_max_size

(list) – mapping that defines maximum expected size for each variant and arch. If the ISO is larger than the limit, a warning will be issued.

Format: [(variant_uid_regex, {arch|*: number})]
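A sketch of this mapping (the size limit here is arbitrary):

```python
# Warn if any Everything ISO grows past roughly a single-layer DVD.
createiso_max_size = [
    ('^Everything$', {
        '*': 4700000000,   # bytes
    }),
]
```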


createiso_max_size_is_strict

(list) – Set the value to True to turn the warning from createiso_max_size into a hard error that will abort the compose. If there are multiple matches in the mapping, the check will be strict if at least one match says so.

Format: [(variant_uid_regex, {arch|*: bool})]

create_jigdo = False

(bool) – controls the creation of jigdo from ISO

create_optional_isos = False

(bool) – when set to True, ISOs will be created even for optional variants. By default only variants with type variant or layered-product will get ISOs.

createiso_break_hardlinks = False

(bool) – when set to True, all files that should go on the ISO and have a hardlink will first be copied into a staging directory. This works around a bug in genisoimage that causes incorrect link counts in the image, at the cost of copying a potentially significant amount of data.

The staging directory is deleted when ISO is successfully created. In that case the same task to create the ISO will not be re-runnable.

createiso_use_xorrisofs = False

(bool) – when set to True, use xorrisofs for creating ISOs instead of genisoimage.

iso_size = 4700000000

(int|str) – size of ISO image. The value should either be an integer meaning size in bytes, or it can be a string with k, M, G suffix (using multiples of 1024).


iso_level

(int|list) [optional] – Set the ISO9660 conformance level. This is either a global single value (a number from 1 to 4), or a variant/arch mapping.

split_iso_reserve = 10MiB

(int|str) – how much free space should be left on each disk. The format is the same as for iso_size option.

iso_hfs_ppc64le_compatible = True

(bool) – when set to False, the Apple/HFS compatibility is turned off for ppc64le ISOs. This option only makes sense for bootable products, and affects images produced in createiso and extra_isos phases.


Source architecture needs to be listed explicitly; excluding via ‘*’ applies only to binary arches. Jigdo causes a significant increase in ISO creation time.


createiso_skip = [
    ('^Workstation$', {
        '*': True,
        'src': True
    })
]

Automatic generation of version and release

Version and release values for certain artifacts can be generated automatically based on release version, compose label, date, type and respin. This can be used to shorten the config and keep it the same for multiple uses.

Compose ID | Label | Version | Date | Respin | Release

All non-RC milestones from the label get appended to the version. For the release, either the label is used, or the date, type and respin.

Common options for Live Images, Live Media and Image Build

All images can have ksurl, version, release and target specified. Since this can create a lot of duplication, there are global options that can be used instead.

For each of the phases, if the option is not specified for a particular deliverable, an option named <PHASE_NAME>_<OPTION> is checked. If that is not specified either, the last fallback is global_<OPTION>. If even that is unset, the value is considered to not be specified.
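The lookup order described above can be sketched as a small helper; resolve_option is hypothetical and not part of Pungi's actual API:

```python
# Sketch of the fallback lookup: deliverable config first, then the
# phase-specific <PHASE>_<OPTION> key, then global_<OPTION>.
def resolve_option(config, phase, deliverable_config, option):
    if option in deliverable_config:
        return deliverable_config[option]
    for key in ("%s_%s" % (phase, option), "global_%s" % option):
        if key in config:
            return config[key]
    return None  # the value is considered not specified

config = {
    "global_release": "!RELEASE_FROM_LABEL_DATE_TYPE_RESPIN",
    "image_build_target": "f25-candidate",
}
print(resolve_option(config, "image_build", {}, "target"))   # phase-specific key wins
print(resolve_option(config, "live_media", {}, "release"))   # falls back to global_release
```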

The kickstart URL is configured by these options.

  • global_ksurl – global fallback setting
  • live_media_ksurl
  • image_build_ksurl
  • live_images_ksurl

Target is specified by these settings.

  • global_target – global fallback setting
  • live_media_target
  • image_build_target
  • live_images_target
  • osbuild_target

Version is specified by these options. If no version is set, a default value will be provided according to automatic versioning.

  • global_version – global fallback setting
  • live_media_version
  • image_build_version
  • live_images_version
  • osbuild_version

Release is specified by these options. If set to the magic value !RELEASE_FROM_LABEL_DATE_TYPE_RESPIN, a value will be generated according to automatic versioning.

  • global_release – global fallback setting
  • live_media_release
  • image_build_release
  • live_images_release
  • osbuild_release

Each configuration block can also optionally specify a failable key. For live images it should have a boolean value. For live media and image build it should be a list of strings containing architectures that are optional. If any deliverable fails on an optional architecture, it will not abort the whole compose. If the list contains only "*", all arches will be substituted.

Live Images Settings


live_images

(list) – Configuration for the particular image. The elements of the list should be tuples (variant_uid_regex, {arch|*: config}). The config should be a dict with these keys:

  • kickstart (str)
  • ksurl (str) [optional] – where to get the kickstart from
  • name (str)
  • version (str)
  • target (str)
  • repo (str|[str]) – repos specified by URL or variant UID
  • specfile (str) – for images wrapped in RPM
  • scratch (bool) – only RPM-wrapped images can use scratch builds, but by default this is turned off
  • type (str) – what kind of task to start in Koji. Defaults to live meaning koji spin-livecd will be used. Alternative option is appliance corresponding to koji spin-appliance.
  • sign (bool) – only RPM-wrapped images can be signed

live_images_no_rename

(bool) – When set to True, filenames generated by Koji will be used. When False, filenames will be generated based on the image_name_format configuration option.

Live Media Settings


live_media

(dict) – configuration for koji spin-livemedia; format: {variant_uid_regex: [{opt: value}]}

Required options:

  • name (str)
  • version (str)
  • arches ([str]) – what architectures to build the media for; by default uses all arches for the variant.
  • kickstart (str) – name of the kickstart file

Available options:

  • ksurl (str)
  • ksversion (str)
  • scratch (bool)
  • target (str)
  • release (str) – a string with the release, or !RELEASE_FROM_LABEL_DATE_TYPE_RESPIN to automatically generate a suitable value. See automatic versioning for details.
  • skip_tag (bool)
  • repo (str|[str]) – repos specified by URL or variant UID
  • title (str)
  • install_tree_from (str) – variant to take install tree from
  • nomacboot (bool)
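Pulling the required and optional keys together, a configuration could look like this (all names, URLs and targets here are hypothetical):

```python
live_media = {
    '^Workstation$': [
        {
            'name': 'Fedora-Workstation-Live',          # hypothetical
            'version': '25',
            'kickstart': 'fedora-live-workstation.ks',  # hypothetical
            'ksurl': 'https://example.com/kickstarts.git#HEAD',
            'target': 'f25-candidate',
            # Generate the release via automatic versioning.
            'release': '!RELEASE_FROM_LABEL_DATE_TYPE_RESPIN',
            # This arch may fail without aborting the compose.
            'failable': ['ppc64le'],
        }
    ]
}
```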

Image Build Settings


image_build

(dict) – config for koji image-build; format: {variant_uid_regex: [{opt: value}]}

By default, images will be built for each binary arch valid for the variant. The config can specify a list of arches to narrow this down.


The config can contain anything that is accepted by koji image-build --config configfile.ini.

Repo can be specified either as a string or a list of strings. It will be automatically transformed into a format suitable for koji. A repo for the currently built variant will be added as well.

If you explicitly set release to !RELEASE_FROM_LABEL_DATE_TYPE_RESPIN, it will be replaced with a value generated as described in automatic versioning.

If you explicitly set release to !RELEASE_FROM_DATE_RESPIN, it will be replaced with a value generated as described in automatic versioning.

If you explicitly set version to !VERSION_FROM_VERSION, it will be replaced with a value generated as described in automatic versioning.

Please don’t set install_tree. This gets automatically set by pungi based on current variant. You can use install_tree_from key to use install tree from another variant.

Both the install tree and repos can use one of following formats:

  • URL to the location
  • name of variant in the current compose
  • absolute path on local filesystem (which will be translated using configured mappings or used unchanged, in which case you have to ensure the koji builders can access it)

You can set either a single format, or a list of formats. For available values see help output for koji image-build command.

If ksurl ends with #HEAD, Pungi will figure out the SHA1 hash of current HEAD and use that instead.

Setting scratch to True will run the koji tasks as scratch builds.


image_build = {
    '^Server$': [
        {
            'image-build': {
                'format': ['docker', 'qcow2'],
                'name': 'fedora-qcow-and-docker-base',
                'target': 'koji-target-name',
                'ksversion': 'F23',     # value from pykickstart
                'version': '23',
                # correct SHA1 hash will be put into the URL below automatically
                'ksurl': 'https://git.fedorahosted.org/git/spin-kickstarts.git?somedirectoryifany#HEAD',
                'kickstart': "fedora-docker-base.ks",
                'repo': ["http://someextrarepos.org/repo", "ftp://rekcod.oi/repo"],
                'distro': 'Fedora-20',
                'disk_size': 3,

                # this is set automatically by pungi to os_dir for given variant
                # 'install_tree': 'http://somepath',
            },
            'factory-parameters': {
                'docker_cmd':  "[ '/bin/bash' ]",
                'docker_env': "[ 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin' ]",
                'docker_labels': "{'Name': 'fedora-docker-base', 'License': u'GPLv2', 'RUN': 'docker run -it --rm ${OPT1} --privileged -v \`pwd\`:/atomicapp -v /run:/run -v /:/host --net=host --name ${NAME} -e NAME=${NAME} -e IMAGE=${IMAGE} ${IMAGE} -v ${OPT2} run ${OPT3} /atomicapp', 'Vendor': 'Fedora Project', 'Version': '23', 'Architecture': 'x86_64' }",
            },
        },
        {
            'image-build': {
                'format': 'qcow2',
                'name': 'fedora-qcow-base',
                'target': 'koji-target-name',
                'ksversion': 'F23',     # value from pykickstart
                'version': '23',
                'ksurl': 'https://git.fedorahosted.org/git/spin-kickstarts.git?somedirectoryifany#HEAD',
                'kickstart': "fedora-docker-base.ks",
                'distro': 'Fedora-23',

                # only build this type of image on x86_64
                'arches': ['x86_64'],

                # Use install tree and repo from Everything variant.
                'install_tree_from': 'Everything',
                'repo': ['Everything'],

                # Set release automatically.
                'release': '!RELEASE_FROM_LABEL_DATE_TYPE_RESPIN',
            }
        }
    ]
}

OSBuild Composer for building images


(dict) – configuration for building images in the OSBuild Composer service fronted by a Koji plugin. Pungi will trigger a Koji task delegating to OSBuild Composer, which will build the image and import it to Koji via content generators.

Format: {variant_uid_regex: [{...}]}.

Required keys in the configuration dict:

  • name – name of the Koji package
  • distro – distribution for which the image should be built
  • image_types – a list with a single image type string or just a string representing the image type to build (e.g. qcow2). In any case, only a single image type can be provided as an argument.

Optional keys:

  • target – which build target to use for the task. Either this option or the global osbuild_target is required.
  • version – version for the final build (as a string). This option is required if the global osbuild_version is not specified.
  • release – release part of the final NVR. If neither this option nor the global osbuild_release is set, Koji will automatically generate a value.
  • repo – a list of repositories from which to consume packages for building the image. By default only the variant repository is used. The list items may use one of the following formats:

    • String with just the repository URL.
    • Dictionary with the following keys:

      • baseurl – URL of the repository.
      • package_sets – a list of package set names to use for this repository (optional). Package sets are an internal concept of Image Builder and are used in image definitions. If specified, the repository is used by Image Builder only for the pipeline with the same name. For example, specifying the build package set name will make the repository be used only for the build environment in which the image will be built.

  • arches – list of architectures for which to build the image. By default, the variant arches are used. This option can only restrict it, not add a new one.
  • manifest_type – the image type that is put into the manifest by pungi. If not supplied then it is autodetected from the Koji output.
  • ostree_url – URL of the repository that’s used to fetch the parent commit from.
  • ostree_ref – name of the ostree branch
  • ostree_parent – commit hash or a branch-like reference to the parent commit.
  • upload_options – a dictionary with upload options specific to the target cloud environment. If provided, the image will be uploaded to the cloud environment, in addition to the Koji server. One can’t combine arbitrary image types with arbitrary upload options. The dictionary keys differ based on the target cloud environment. The following keys are supported:

    • AWS EC2 upload options – upload to Amazon Web Services.

      • region – AWS region to upload the image to
      • share_with_accounts – list of AWS account IDs to share the image with
      • snapshot_name – Snapshot name of the uploaded EC2 image (optional)
    • AWS S3 upload options – upload to Amazon Web Services S3.

      • region – AWS region to upload the image to
    • Azure upload options – upload to Microsoft Azure.

      • tenant_id – Azure tenant ID to upload the image to
      • subscription_id – Azure subscription ID to upload the image to
      • resource_group – Azure resource group to upload the image to
      • location – Azure location of the resource group (optional)
      • image_name – Image name of the uploaded Azure image (optional)
    • GCP upload options – upload to Google Cloud Platform.

      • region – GCP region to upload the image to
      • bucket – GCP bucket to upload the image to (optional)
      • share_with_accounts – list of GCP accounts to share the image with
      • image_name – Image name of the uploaded GCP image (optional)
    • Container upload options – upload to a container registry.

      • name – name of the container image (optional)
      • tag – container tag to upload the image to (optional)

There is initial support for having this task as failable without aborting the whole compose. This can be enabled by setting "failable": ["*"] in the config for the image. It is an on/off switch without granularity per arch.
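As a sketch, a minimal osbuild configuration (the variant, name, distro and target values here are hypothetical) might look like:

```python
osbuild = {
    '^Server$': [
        {
            'name': 'server-guest-image',        # hypothetical Koji package name
            'distro': 'dummy-distro-1',          # hypothetical distro identifier
            'image_types': ['qcow2'],            # a single image type
            # 'target' can be omitted if the global osbuild_target is set.
            'target': 'image-server-candidate',
            # Allow this task to fail without aborting the whole compose.
            'failable': ['*'],
        }
    ]
}
```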

Image container

This phase supports building containers in Osbs that embed an image created in the same compose. This can be useful for delivering the image to users running in containerized environments.

Pungi will start a buildContainer task in Koji with the configured source repository. The Dockerfile can expect that a repo file will be injected into the container that defines a repo named image-to-include, and its baseurl will point to the image to include. It is possible to extract the URL with a command like dnf config-manager --dump image-to-include | awk '/baseurl =/{print $3}'.


(dict) – configuration for building containers embedding an image.

Format: {variant_uid_regex: [{...}]}.

The inner object will define a single container. These keys are required:

  • url, target, git_branch. See Osbs section for definition of these.
  • image_spec – (object) A string mapping of filters used to select the image to embed. All images listed in metadata for the variant will be processed. The keys of this filter select metadata fields for the image, and the values are regular expressions that need to match the metadata value.

    The filter should match exactly one image.

Example config

image_container = {
    "^Server$": [{
        "url": "git://example.com/dockerfiles.git?#HEAD",
        "target": "f24-container-candidate",
        "git_branch": "f24",
        "image_spec": {
            "format": "qcow2",
            "arch": "x86_64",
            "path": ".*/guest-image-.*$",
        }
    }]
}

OSTree Settings

The ostree phase of Pungi can create and update ostree repositories. This is done by running rpm-ostree compose in a Koji runroot environment. The ostree repository itself is not part of the compose and should be located in another directory. Any new packages in the compose will be added to the repository with a new commit.


ostree

(dict) – a mapping of configuration for each variant. The format should be {variant_uid_regex: config_dict}. It is possible to use a list of configuration dicts as well.

The configuration dict for each variant arch pair must have these keys:

  • treefile – (str) Filename of configuration for rpm-ostree.
  • config_url – (str) URL for Git repository with the treefile.
  • repo – (str|dict|[str|dict]) repos specified by URL or variant UID or a dict of repo options, baseurl is required in the dict.
  • ostree_repo – (str) Where to put the ostree repository

These keys are optional:

  • keep_original_sources – (bool) Keep the existing source repos in the tree config file. If not enabled, all the original source repos will be removed from the tree config file.
  • config_branch – (str) Git branch of the repo to use. Defaults to master.
  • arches – ([str]) List of architectures for which to update ostree. There will be one task per architecture. By default all architectures in the variant are used.
  • failable – ([str]) List of architectures for which this deliverable is not release blocking.
  • update_summary – (bool) Update summary metadata after tree composing. Defaults to False.
  • force_new_commit – (bool) Do not use rpm-ostree’s built-in change detection. Defaults to False.
  • unified_core – (bool) Use rpm-ostree in unified core mode for composes. Defaults to False.
  • version – (str) Version string to be added as versioning metadata. If this option is set to !OSTREE_VERSION_FROM_LABEL_DATE_TYPE_RESPIN, a value will be generated automatically as $VERSION.$RELEASE. If this option is set to !VERSION_FROM_VERSION_DATE_RESPIN, a value will be generated automatically as $VERSION.$DATE.$RESPIN. See how those values are created.
  • tag_ref – (bool, default True) If set to False, a git reference will not be created.
  • ostree_ref – (str) To override value ref from treefile.
  • runroot_packages – (list) A list of additional package names to be installed in the runroot environment in Koji.

Example config

ostree = {
    "^Atomic$": {
        "treefile": "fedora-atomic-docker-host.json",
        "config_url": "https://git.fedorahosted.org/git/fedora-atomic.git",
        "repo": [
            {"baseurl": "Everything"},
            {"baseurl": "http://example.com/linux/repo", "exclude": "systemd-container"},
        ],
        "keep_original_sources": True,
        "ostree_repo": "/mnt/koji/compose/atomic/Rawhide/",
        "update_summary": True,
        # Automatically generate a reasonable version
        "version": "!OSTREE_VERSION_FROM_LABEL_DATE_TYPE_RESPIN",
        # Only run this for x86_64 even if Atomic has more arches
        "arches": ["x86_64"],
    }
}

ostree_use_koji_plugin = False

(bool) – When set to True, the Koji pungi_ostree task will be used to execute rpm-ostree instead of runroot. Use only if the Koji instance has the pungi_ostree plugin installed.

Ostree Installer Settings

The ostree_installer phase of Pungi can produce an installer image bundling an OSTree repository. This always runs in Koji as a runroot task.


(dict) – a variant/arch mapping of configuration. The format should be [(variant_uid_regex, {arch|*: config_dict})].

The configuration dict for each variant arch pair must have this key:

These keys are optional:

  • repo – (str|[str]) repos specified by URL or variant UID
  • release – (str) Release value to set for the installer image. Set to !RELEASE_FROM_LABEL_DATE_TYPE_RESPIN to generate the value automatically.
  • failable – ([str]) List of architectures for which this deliverable is not release blocking.

These optional keys are passed to lorax to customize the build.

  • installpkgs – ([str])
  • add_template – ([str])
  • add_arch_template – ([str])
  • add_template_var – ([str])
  • add_arch_template_var – ([str])
  • rootfs_size – ([str])
  • template_repo – (str) Git repository with extra templates.
  • template_branch – (str) Branch to use from template_repo.

The templates can either be absolute paths, in which case they will be used as configured; or they can be relative paths, in which case template_repo needs to point to a Git repository from which to take the templates.

If the templates need to run with additional dependencies, that can be configured with the optional key:

  • extra_runroot_pkgs – ([str])
  • skip_branding – (bool) Stops lorax from installing packages with branding. Defaults to False.
ostree_installer_overwrite = False

(bool) – by default if a variant including OSTree installer also creates regular installer images in buildinstall phase, there will be conflicts (as the files are put in the same place) and Pungi will report an error and fail the compose.

With this option it is possible to opt in to the overwriting. The traditional boot.iso will be in the iso/ subdirectory.

ostree_installer_use_koji_plugin = False

(bool) – When set to True, the Koji pungi_buildinstall task will be used to execute Lorax instead of runroot. Use only if the Koji instance has the pungi_buildinstall plugin installed.

Example config

ostree_installer = [
    ("^Atomic$", {
        "x86_64": {
            "repo": [
            ],
            "release": "!RELEASE_FROM_LABEL_DATE_TYPE_RESPIN",
            "installpkgs": ["fedora-productimg-atomic"],
            "add_template": ["atomic-installer/lorax-configure-repo.tmpl"],
            "add_template_var": [
            ],
            "add_arch_template": ["atomic-installer/lorax-embed-repo.tmpl"],
            "add_arch_template_var": [
            ],
            'template_repo': 'https://git.fedorahosted.org/git/spin-kickstarts.git',
            'template_branch': 'f24',
        }
    })
]

OSBS Settings

Pungi can build container images in OSBS. The build is initiated through the Koji container-build plugin. The base image will use RPMs from the current compose and a Dockerfile from the specified Git repository.

Please note that the image is uploaded to a registry and not exported into compose directory. There will be a metadata file in compose/metadata/osbs.json with details about the built images (assuming they are not scratch builds).


(dict) – a mapping from variant regexes to configuration blocks. The format should be {variant_uid_regex: [config_dict]}.

The configuration for each image must have at least these keys:

  • url – (str) URL pointing to a Git repository with Dockerfile. Please see Git URLs section for more details.
  • target – (str) A Koji target to build the image for.
  • git_branch – (str) A branch in SCM for the Dockerfile. This is required by OSBS to avoid race conditions when multiple builds from the same repo are submitted at the same time. Please note that url should contain the branch or tag name as well, so that it can be resolved to a particular commit hash.

Optionally you can specify failable. If it has a truthy value, failure to create the image will not abort the whole compose.

The configuration will pass other attributes directly to the Koji task. This includes scratch and priority. See koji list-api buildContainer for more details about these options.

A value for yum_repourls will be created automatically and point at a repository in the current compose. You can add extra repositories with the repo key, whose value is a list of URLs pointing to .repo files, or just a variant UID, in which case Pungi will create the .repo file for that variant. If a specific URL is used in repo, the $COMPOSE_ID variable in the repo string will be replaced with the real compose ID. gpgkey can be specified to enable gpgcheck in repo files for variants.
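The $COMPOSE_ID substitution described above can be sketched in a few lines; expand_repo_url is a hypothetical helper for illustration, not part of Pungi's API, and the URL and compose ID are made-up examples:

```python
# Hypothetical helper illustrating the $COMPOSE_ID substitution; not a
# Pungi function. The placeholder is replaced with the real compose ID.
def expand_repo_url(repo_url: str, compose_id: str) -> str:
    return repo_url.replace("$COMPOSE_ID", compose_id)

print(expand_repo_url("https://example.com/$COMPOSE_ID/extra.repo",
                      "Fedora-Rawhide-20191014.n.0"))
# https://example.com/Fedora-Rawhide-20191014.n.0/extra.repo
```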


(dict) – Use this optional setting to emit osbs-request-push messages for each non-scratch container build. These messages can tell other tools how to push the images to other registries. For example, an external tool might trigger on these messages and copy the images from OSBS's registry to a staging or production registry.

For each completed container build, Pungi will try to match the NVR against a key in the osbs_registries mapping (using shell-style globbing) and take the corresponding value, collecting these values across all built images. Pungi will save this data into logs/global/osbs-registries.json, mapping each Koji NVR to the registry data. Pungi will also send this data to the message bus on the osbs-request-push topic once the compose finishes successfully.

Pungi simply logs the mapped data and emits the messages. It does not handle the messages or push images. A separate tool must do that.
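The shell-style glob matching described above can be sketched with the standard fnmatch module; the mapping, registry value, and NVR below are made-up examples, and match_registries is not a Pungi function:

```python
import fnmatch

# Illustration of matching a build NVR against osbs_registries keys using
# shell-style globbing; keys and values here are hypothetical.
def match_registries(nvr, registry_mapping):
    matched = []
    for pattern, data in registry_mapping.items():
        if fnmatch.fnmatch(nvr, pattern):
            matched.append(data)
    return matched

osbs_registries = {"nginx-*": {"registry": "registry.example.com"}}
print(match_registries("nginx-1.12.1-1.fc31", osbs_registries))
# [{'registry': 'registry.example.com'}]
```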

Example config

osbs = {
    "^Server$": {
        # required
        "url": "git://example.com/dockerfiles.git?#HEAD",
        "target": "f24-docker-candidate",
        "git_branch": "f24-docker",

        # optional
        "repo": ["Everything", "https://example.com/extra-repo.repo"],
        # This will result in three repo urls being passed to the task.
        # They will be in this order: Server, Everything, example.com/extra-repo.repo.
        "gpgkey": 'file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release',
    }
}

Extra ISOs

Create an ISO image that contains packages from multiple variants. Such an ISO always belongs to one variant and will be stored in the ISO directory of that variant.

The ISO will be bootable if buildinstall phase runs for the parent variant. It will reuse boot configuration from that variant.


(dict) – a mapping from variant UID regex to a list of configuration blocks.

  • include_variants – (list) list of variant UIDs from which content should be added to the ISO; the variant of this image is added automatically.

The rest of the configuration keys are optional.

  • filename – (str) template for naming the image. In addition to the regular placeholders, filename is available with the name generated using the image_name_format option.
  • volid – (str) template for generating the volume ID. Again, the volid placeholder can be used similarly as for the file name. This can also be a list of templates that will be tried sequentially until one generates a volume ID that fits into the 32-character limit.
  • extra_files – (list) a list of scm_dict objects. These files will be put in the top level directory of the image.
  • arches – (list) a list of architectures for which to build this image. By default all arches from the variant will be used. This option can be used to limit them.
  • failable_arches – (list) a list of architectures for which the image can fail to be generated and not fail the entire compose.
  • skip_src – (bool) allows disabling the creation of an image with source packages.
  • inherit_extra_files – (bool) by default extra files in variants are ignored. If you want to include them in the ISO, set this option to True.
  • max_size – (int) expected maximum size in bytes. If the final image is larger, a warning will be issued.
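The sequential volume-ID fallback described for the volid option can be sketched as follows; pick_volid, the templates, and the field values are illustrative, not part of Pungi:

```python
# Hypothetical sketch: try volid templates in order and return the first
# one that fits into the 32-character ISO volume ID limit.
def pick_volid(templates, fields, limit=32):
    for tmpl in templates:
        volid = tmpl.format(**fields)
        if len(volid) <= limit:
            return volid
    raise ValueError("no template fits into %d characters" % limit)

fields = {"release_short": "Fedora", "variant": "Workstation", "arch": "x86_64"}
print(pick_volid(["{release_short}-{variant}-{arch}",
                  "{release_short}-{arch}"], fields))
# Fedora-Workstation-x86_64
```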

Example config

extra_isos = {
    'Server': [{
        # Will generate foo-DP-1.0-20180510.t.43-Server-x86_64-dvd1.iso
        'filename': 'foo-{filename}',
        'volid': 'foo-{arch}',

        'extra_files': [{
            'scm': 'git',
            'repo': 'https://pagure.io/pungi.git',
            'file': 'setup.py'
        }],

        'include_variants': ['Client']
    }]
}
# This should create image with the following layout:
#  .
#  ├── Client
#  │   ├── Packages
#  │   │   ├── a
#  │   │   └── b
#  │   └── repodata
#  ├── Server
#  │   ├── Packages
#  │   │   ├── a
#  │   │   └── b
#  │   └── repodata
#  └── setup.py

Media Checksums Settings


(list) – list of checksum types to compute, allowed values are anything supported by Python’s hashlib module (see documentation for details).


(bool) – when True, only one CHECKSUM file will be created per directory; this option requires media_checksums to only specify one type


(str) – when not set, all checksums will be saved to a file named either CHECKSUM or based on the digest type; this option allows adding any prefix to that name

It is possible to use format strings that will be replaced with actual values. The allowed keys are:

  • arch
  • compose_id
  • date
  • label
  • label_major_version
  • release_short
  • respin
  • type
  • type_suffix
  • version
  • dirname (only if media_checksum_one_file is enabled)

For example, for Fedora the prefix should be %(release_short)s-%(variant)s-%(version)s-%(date)s%(type_suffix)s.%(respin)s.
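Expanding that prefix works via Python's %-style mapping formatting; the field values below are made-up examples, not taken from a real compose:

```python
# Illustrative expansion of the checksum filename prefix; all field
# values here are hypothetical.
fields = {
    "release_short": "Fedora",
    "variant": "Server",
    "version": "Rawhide",
    "date": "20191014",
    "type_suffix": ".n",
    "respin": 0,
}
prefix = ("%(release_short)s-%(variant)s-%(version)s-"
          "%(date)s%(type_suffix)s.%(respin)s" % fields)
print(prefix)  # Fedora-Server-Rawhide-20191014.n.0
```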

Translate Paths Settings


(list) – list of paths to translate; format: [(path, translated_path)]


This feature becomes useful when you need to transform the compose location into e.g. an HTTP repo URL which can be passed to koji image-build. The path part is normalized via os.path.normpath().
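The translation logic can be sketched in a few lines; this is a minimal illustration of the prefix replacement, not Pungi's actual implementation, and translate_path is a hypothetical name:

```python
import os.path

# Minimal sketch of path translation: normalize the path, then replace a
# matching prefix with its translated counterpart. Prefix matching is
# kept naive for brevity.
def translate_path(mapping, path):
    path = os.path.normpath(path)
    for prefix, translated in mapping:
        prefix = os.path.normpath(prefix)
        if path.startswith(prefix):
            return translated + path[len(prefix):]
    return path

print(translate_path([("/mnt/a", "http://b/dir")], "/mnt/a/c/somefile"))
# http://b/dir/c/somefile
```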

Example config

translate_paths = [
    ("/mnt/a", "http://b/dir"),
]

Example usage

>>> from pungi.util import translate_paths
>>> print(translate_paths(compose_object_with_mapping, "/mnt/a/c/somefile"))
http://b/dir/c/somefile

Miscellaneous Settings


(str) – Name of Python module implementing the same interface as pungi.paths. This module can be used to override where things are placed.

link_type = hardlink-or-copy

(str) – Method of putting packages into compose directory.

Available options:

  • hardlink-or-copy
  • hardlink
  • copy
  • symlink
  • abspath-symlink

(list) – List of phase names that should be skipped. The same functionality is available via a command line option.


(str) – Override description in .discinfo files. The value is a format string accepting %(variant_name)s and %(arch)s placeholders.


(str) – If set, the ISO files from buildinstall, createiso and live_images phases will be put into this destination, and a symlink pointing to this location will be created in actual compose directory.


(str) – If set, Pungi will use the configured Dogpile cache backend to cache various data between multiple Pungi calls. This can make Pungi faster when multiple similar composes run regularly within a short time.

For list of available backends, please see the https://dogpilecache.readthedocs.io documentation.

Most typical configuration uses the dogpile.cache.dbm backend.


(dict) – Arguments to be used when creating the Dogpile cache backend. See the particular backend’s configuration for the list of possible key/value pairs.

For the dogpile.cache.dbm backend, the value can be, for example, the following:

    "filename": "/tmp/pungi_cache_file.dbm"

(int) – Defines the default expiration time in seconds of data stored in the Dogpile cache. Defaults to 3600 seconds.

Big Picture Examples

Actual Pungi configuration files can get very large. This page gives two examples of (almost) full configurations for two different composes.

Fedora Rawhide compose

This is a shortened configuration for the Fedora Rawhide compose as of 2019-10-14.

release_name = 'Fedora'
release_short = 'Fedora'
release_version = 'Rawhide'
release_is_layered = False

bootable = True
comps_file = {
    'scm': 'git',
    'repo': 'https://pagure.io/fedora-comps.git',
    'branch': 'master',
    'file': 'comps-rawhide.xml',
    # Merge translations by running make. This command will generate the file.
    'command': 'make comps-rawhide.xml'
}

module_defaults_dir = {
    'scm': 'git',
    'repo': 'https://pagure.io/releng/fedora-module-defaults.git',
    'branch': 'main',
    'dir': '.'
}

# Optional module obsoletes configuration which is merged
# into the module index and gets resolved
module_obsoletes_dir = {
    'scm': 'git',
    'repo': 'https://pagure.io/releng/fedora-module-defaults.git',
    'branch': 'main',
    'dir': 'obsoletes'
}

sigkeys = ['12C944D0']

# Put packages into subdirectories hashed by their initial letter.
hashed_directories = True

# There is a special profile for use with compose. It makes Pungi
# authenticate automatically as rel-eng user.
koji_profile = 'compose_koji'

# RUNROOT settings
runroot = True
runroot_channel = 'compose'
runroot_tag = 'f32-build'

pkgset_source = 'koji'
pkgset_koji_tag = 'f32'
pkgset_koji_inherit = False

filter_system_release_packages = False

gather_method = {
    '^.*': {                # For all variants
        'comps': 'deps',    # resolve dependencies for packages from comps file
        'module': 'nodeps', # but not for packages from modules
    },
}

gather_backend = 'dnf'
gather_profiler = True
check_deps = False
greedy_method = 'build'

repoclosure_backend = 'dnf'

createrepo_deltas = False
createrepo_database = True
createrepo_use_xz = True
createrepo_extra_args = ['--zck', '--zck-dict-dir=/usr/share/fedora-repo-zdicts/rawhide']

media_checksums = ['sha256']
media_checksum_one_file = True
media_checksum_base_filename = '%(release_short)s-%(variant)s-%(version)s-%(arch)s-%(date)s%(type_suffix)s.%(respin)s'

iso_hfs_ppc64le_compatible = False

buildinstall_method = 'lorax'
buildinstall_skip = [
    # No installer for Modular variant
    ('^Modular$', {'*': True}),
    # No 32 bit installer for Everything.
    ('^Everything$', {'i386': True}),
]

# Enables macboot on x86_64 for all variants and disables upgrade image building
# everywhere.
lorax_options = [
  ('^.*$', {
     'x86_64': {
         'nomacboot': False
     },
     'ppc64le': {
         # Use 3GB image size for ppc64le.
         'rootfs_size': 3
     },
     '*': {
         'noupgrade': True
     }
  })
]

additional_packages = [
    ('^(Server|Everything)$', {
        '*': [
            # Add all architectures of dracut package.
            # Add all packages matching this pattern
        ],
    }),

    ('^Everything$', {
        # Everything should include all packages from the tag. This only
        # applies to the native arch. Multilib will still be pulled in
        # according to multilib rules.
        '*': ['*'],
    }),
]

filter_packages = [
    ("^.*$", {"*": ["glibc32", "libgcc32"]}),
    ('(Server)$', {
        '*': [
        ],
    }),
]

multilib = [
    ('^Everything$', {
        'x86_64': ['devel', 'runtime'],
    }),
]

# These packages should never be multilib on any arch.
multilib_blacklist = {
    '*': [
        'kernel', 'kernel-PAE*', 'kernel*debug*', 'java-*', 'php*', 'mod_*', 'ghc-*'
    ],
}

# These should be multilib even if they don't match the rules defined above.
multilib_whitelist = {
    '*': ['wine', '*-static'],
}

createiso_skip = [
    # Keep binary ISOs for Server, but not source ones.
    ('^Server$', {'src': True}),

    # Remove all other ISOs.
    ('^Everything$', {'*': True, 'src': True}),
    ('^Modular$', {'*': True, 'src': True}),
]

# Image name respecting Fedora's image naming policy
image_name_format = '%(release_short)s-%(variant)s-%(disc_type)s-%(arch)s-%(version)s-%(date)s%(type_suffix)s.%(respin)s.iso'
# Use the same format for volume id
image_volid_formats = [
]
# Used by Pungi to replace 'Cloud' with 'C' (etc.) in ISO volume IDs.
# There is a hard 32-character limit on ISO volume IDs, so we use
# these to try and produce short enough but legible IDs. Note this is
# duplicated in Koji for live images, as livemedia-creator does not
# allow Pungi to tell it what volume ID to use. Note:
# https://fedoraproject.org/wiki/User:Adamwill/Draft_fedora_image_naming_policy
volume_id_substitutions = {
                 'Beta': 'B',
              'Rawhide': 'rawh',
           'Silverblue': 'SB',
             'Cinnamon': 'Cinn',
                'Cloud': 'C',
         'Design_suite': 'Dsgn',
       'Electronic_Lab': 'Elec',
           'Everything': 'E',
       'Scientific_KDE': 'SciK',
             'Security': 'Sec',
               'Server': 'S',
          'Workstation': 'WS',
}

disc_types = {
    'boot': 'netinst',
    'live': 'Live',
}

translate_paths = [
   ('/mnt/koji/compose/', 'https://kojipkgs.fedoraproject.org/compose/'),
]

# These will be inherited by live_media, live_images and image_build
global_ksurl = 'git+https://pagure.io/fedora-kickstarts.git?#HEAD'
global_version = 'Rawhide'
# live_images ignores this in favor of live_target
global_target = 'f32'

image_build = {
    '^Container$': [
        {
            'image-build': {
                'format': [('docker', 'tar.xz')],
                'name': 'Fedora-Container-Base',
                'kickstart': 'fedora-container-base.ks',
                'distro': 'Fedora-22',
                'disk_size': 5,
                'arches': ['armhfp', 'aarch64', 'ppc64le', 's390x', 'x86_64'],
                'repo': 'Everything',
                'install_tree_from': 'Everything',
                'subvariant': 'Container_Base',
                'failable': ['*'],
            },
            'factory-parameters': {
                'dockerversion': "1.10.1",
                'docker_cmd':  '[ "/bin/bash" ]',
                'docker_env': '[ "DISTTAG=f32container", "FGC=f32", "container=oci" ]',
                'docker_label': '{ "name": "fedora", "license": "MIT", "vendor": "Fedora Project", "version": "32"}',
            },
        }
    ],
}

live_media = {
    '^Workstation$': [
        {
            'name': 'Fedora-Workstation-Live',
            'kickstart': 'fedora-live-workstation.ks',
            # Variants.xml also contains aarch64 and armhfp, but there
            # should be no live media for those arches.
            'arches': ['x86_64', 'ppc64le'],
            'failable': ['ppc64le'],
            # Take packages and install tree from Everything repo.
            'repo': 'Everything',
            'install_tree_from': 'Everything',
        }
    ],
    '^Spins': [
        # There are multiple media for Spins variant. They use subvariant
        # field so that they can be identified in the metadata.
        {
            'name': 'Fedora-KDE-Live',
            'kickstart': 'fedora-live-kde.ks',
            'arches': ['x86_64'],
            'repo': 'Everything',
            'install_tree_from': 'Everything',
            'subvariant': 'KDE'
        },
        {
            'name': 'Fedora-Xfce-Live',
            'kickstart': 'fedora-live-xfce.ks',
            'arches': ['x86_64'],
            'failable': ['*'],
            'repo': 'Everything',
            'install_tree_from': 'Everything',
            'subvariant': 'Xfce'
        },
    ],
}

failable_deliverables = [
    # Installer and ISOs for server failing do not abort the compose.
    ('^Server$', {
        '*': ['buildinstall', 'iso'],
    }),
    ('^.*$', {
        # Buildinstall is not blocking
        'src': ['buildinstall'],
        # Nothing on i386, ppc64le blocks the compose
        'i386': ['buildinstall', 'iso'],
        'ppc64le': ['buildinstall', 'iso'],
        's390x': ['buildinstall', 'iso'],
    }),
]

live_target = 'f32'
live_images_no_rename = True
live_images = [
    ('^Workstation$', {
        'armhfp': {
            'kickstart': 'fedora-arm-workstation.ks',
            'name': 'Fedora-Workstation-armhfp',
            # Again workstation takes packages from Everything.
            'repo': 'Everything',
            'type': 'appliance',
            'failable': True,
        }
    }),
    ('^Server$', {
        # But Server has its own repo.
        'armhfp': {
            'kickstart': 'fedora-arm-server.ks',
            'name': 'Fedora-Server-armhfp',
            'type': 'appliance',
            'failable': True,
        }
    }),
]

ostree = {
    "^Silverblue$": {
        # To get config, clone master branch from this repo and take
        # treefile from there.
        "treefile": "fedora-silverblue.yaml",
        "config_url": "https://pagure.io/workstation-ostree-config.git",
        "config_branch": "master",
        # Consume packages from Everything
        "repo": "Everything",
        # Don't create a reference in the ostree repo (signing automation does that).
        "tag_ref": False,
        # Don't use change detection in ostree.
        "force_new_commit": True,
        # Use unified core mode for rpm-ostree composes
        "unified_core": True,
        # This is the location for the repo where new commit will be
        # created. Note that this is outside of the compose dir.
        "ostree_repo": "/mnt/koji/compose/ostree/repo/",
        "ostree_ref": "fedora/rawhide/${basearch}/silverblue",
        "arches": ["x86_64", "ppc64le", "aarch64"],
        "failable": ['*'],
    }
}

ostree_installer = [
    ("^Silverblue$", {
        "x86_64": {
            "repo": "Everything",
            "release": None,
            "rootfs_size": "8",
            # Take templates from this repository.
            'template_repo': 'https://pagure.io/fedora-lorax-templates.git',
            'template_branch': 'master',
            # Use following templates.
            "add_template": [
                "ostree-based-installer/lorax-configure-repo.tmpl",
            ],
            # And add these variables for the templates.
            "add_template_var": [
                "flatpak_remote_refs=runtime/org.fedoraproject.Platform/x86_64/f30 app/org.gnome.Baobab/x86_64/stable",
            ],
            'failable': ['*'],
        }
    })
]

RCM Tools compose

This is a small compose used to deliver packages to Red Hat internal users. The configuration is split into two files.

# rcmtools-common.conf

release_name = "RCM Tools"
release_short = "RCMTOOLS"
release_version = "2.0"
release_type = "updates"
release_is_layered = True
createrepo_c = True
createrepo_checksum = "sha256"

pkgset_source = "koji"
koji_profile = "brew"
pkgset_koji_inherit = True

bootable = False
comps_file = "rcmtools-comps.xml"
variants_file = "rcmtools-variants.xml"
sigkeys = ["3A3A33A3"]

# RUNROOT settings
runroot = False

gather_method = "deps"
check_deps = True

additional_packages = [
    ('.*', {
        '*': ['puddle', 'rcm-nexus'],
    }),
]

# Set repoclosure_strictness to fatal to avoid installation dependency
# issues in production composes
repoclosure_strictness = [
    ("^.*$", {
        "*": "fatal"
    }),
]

Configuration specific to different base products is split into separate files.

# Base-product specific configuration file (RHEL 7)
from rcmtools-common import *

base_product_name = "Red Hat Enterprise Linux"
base_product_short = "RHEL"
base_product_version = "7"

pkgset_koji_tag = "rcmtools-rhel-7-compose"

# remove i386 arch on rhel7
tree_arches = ["aarch64", "ppc64le", "s390x", "x86_64"]

check_deps = False

# Packages in these repos are available to satisfy dependencies inside the
# compose, but will not be pulled in.
gather_lookaside_repos = [
    ("^Client|Client-optional$", {
        "x86_64": [
        ],
    }),
    ("^Workstation|Workstation-optional$", {
        "x86_64": [
        ],
    }),
    ("^Server|Server-optional$", {
        "aarch64": [
        ],
        "ppc64": [
        ],
        "ppc64le": [
        ],
        "s390x": [
        ],
        "x86_64": [
        ],
    }),
]

Exporting Files from SCM

Multiple places in Pungi can use files from external storage. The configuration is similar regardless of which backend is used, although some features may differ.

The so-called scm_dict is always put into configuration as a dictionary, which can contain the following keys.

Koji examples

There are two different ways to configure the Koji backend.

    {
        # Download all *.tar files from build my-image-1.0-1.
        "scm": "koji",
        "repo": "my-image-1.0-1",
        "file": "*.tar",
    }

    {
        # Find latest build of my-image in tag my-tag and take files from
        # there.
        "scm": "koji",
        "repo": "my-image",
        "branch": "my-tag",
        "file": "*.tar",
    }

Using both a tag name and an exact NVR will result in an error: the NVR would be interpreted as a package name, and would not match anything.

file vs. dir

Exactly one of these two options has to be specified. Documentation for each configuration option should specify whether it expects a file or a directory.

For the extra_files phase either key is valid and should be chosen depending on the actual use case.


The rpm backend can only be used in phases that would extract the files after pkgset phase finished. You can’t get comps file from a package.

Depending on the Git repository URL configuration, Pungi may be able to export only the requested content using git archive. When a command should run, this is not possible and a full clone is always needed.

When using koji backend, it is required to provide configuration for Koji profile to be used (koji_profile). It is not possible to contact multiple different Koji instances.

Progress Notification

Pungi has the ability to emit notification messages about progress and general status of the compose. These can be used to e.g. send messages to fedmsg. This is implemented by actually calling a separate script.

The script will be called with one argument describing action that just happened. A JSON-encoded object will be passed to standard input to provide more information about the event. At the very least, the object will contain a compose_id key.
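A notification script following this interface could be sketched as follows; the exact message format printed here is illustrative, not prescribed by Pungi:

```python
# Minimal sketch of a notification script: the action name arrives as the
# first command line argument, and a JSON object describing the event is
# read from standard input.
import json
import sys

def handle_notification(action, payload):
    # The payload always contains at least a compose_id key.
    data = json.loads(payload)
    return "got event %r for compose %s" % (action, data["compose_id"])

if __name__ == "__main__":
    print(handle_notification(sys.argv[1], sys.stdin.read()))
```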

The notification script inherits the working directory from the parent process; it is the same directory pungi-koji was called from. The working directory is listed at the start of the main log.

Currently these messages are sent:

For phase related messages phase_name key is provided as well.

A pungi-fedmsg-notification script is provided and understands this interface.

Setting it up

The script should be provided as a command line argument --notification-script.


Gathering Packages

A compose created by Pungi consists of one or more variants. A variant contains a subset of the content targeted at a particular use case.

There are different types of variants. The type affects how packages are gathered into the variant.

The inputs for gathering are defined by various gather sources. Packages from all sources are collected to create a big list of package names, comps group names and a list of packages that should be filtered out.


The inputs for both the explicit package list and the comps file are interpreted as RPM names, not arbitrary provides nor source package names.

Next, gather_method defines how the list is processed. For nodeps, the results from source are used pretty much as is [1]. For deps method, a process will be launched to figure out what dependencies are needed and those will be pulled in.
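The difference between nodeps and deps can be illustrated conceptually; the package metadata table below is made up, and real depsolving is delegated by Pungi to a separate process rather than done like this:

```python
# Conceptual illustration of nodeps vs. deps gathering over a made-up
# dependency table; not Pungi's actual depsolver.
packages = {
    "httpd": ["httpd-tools", "apr"],
    "httpd-tools": [],
    "apr": [],
}

def gather(names, method):
    if method == "nodeps":
        # Use the input list pretty much as is.
        return set(names)
    # For "deps", pull in the transitive dependency closure.
    result = set()
    def visit(name):
        if name in result:
            return
        result.add(name)
        for dep in packages.get(name, []):
            visit(dep)
    for n in names:
        visit(n)
    return result

print(sorted(gather(["httpd"], "nodeps")))  # ['httpd']
print(sorted(gather(["httpd"], "deps")))    # ['apr', 'httpd', 'httpd-tools']
```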


The lists are filtered based on what packages are available in the package set, but nothing else will be pulled in.

Variant types


is a base type that has no special behaviour.


is built on top of a regular variant. Any packages that should go to both the addon and its parent will be removed from the addon. Packages that are only in the addon but pulled in because of the gather_fulltree option will be moved to the parent.

Integrated Layered Product

works similarly to an addon. Additionally, all packages from addons on the same parent variant are removed from integrated layered products.

The main difference between an addon and integrated layered product is that integrated layered product has its own identity in the metadata (defined with product name and version).


There’s also Layered Product as a term, but this is not related to variants. It’s used to describe a product that is not a standalone operating system and is instead meant to be used on some other base system.


contains packages that complete the base variants’ package set. It always has fulltree and selfhosting enabled, so it contains build dependencies and packages which were not specifically requested for base variant.

Some configuration options are overridden for particular variant types.

Depsolving configuration



Profiling data on the pungi-gather tool can be enabled by setting the gather_profiler configuration option to True.

Modular compose

A compose with gather_source set to module is called modular. The package list is determined by a list of modules.

The list of modules that will be put into a variant is defined in the variants.xml file. The file can contain either Name:Stream or Name:Stream:Version references. See Module Naming Policy for details. When Version is missing from the specification, Pungi will ask PDC for the latest one.
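Splitting such a module reference into its parts can be sketched as follows; parse_module and the module names are illustrative, not Pungi's parser:

```python
# Hypothetical sketch of splitting a Name:Stream or Name:Stream:Version
# module reference; when Version is missing, it is left as None (and
# Pungi would look up the latest one in PDC).
def parse_module(spec):
    parts = spec.split(":")
    name, stream = parts[0], parts[1]
    version = parts[2] if len(parts) > 2 else None
    return name, stream, version

print(parse_module("nodejs:8"))           # ('nodejs', '8', None)
print(parse_module("nodejs:8:20180816"))  # ('nodejs', '8', '20180816')
```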

The module metadata in PDC contains a list of RPMs in the module as well as Koji tag from which the packages can be retrieved.


  • A modular compose must always use Koji as a package set source.

Getting Data from Koji

When Pungi is configured to get packages from a Koji tag, it needs a way to access the actual RPM files.

Historically, this required the storage used by Koji to be directly available on the host where Pungi was running. This was usually achieved by using NFS for the Koji volume, and mounting it on the compose host.

The compose could be created directly on the same volume. In such a case the packages would be hardlinked, significantly reducing space consumption.

The compose could also be created on a different storage, in which case the packages would either need to be copied over or symlinked. Using symlinks requires that anything that accesses the compose (e.g. a download server) would also need to mount the Koji volume in the same location.

There is also a risk with symlinks that the package in Koji can change (due to being resigned for example), which would invalidate composes linking to it.

Using Koji without direct mount

It is possible now to run a compose from a Koji tag without direct access to Koji storage.

Pungi can download the packages over HTTP protocol, store them in a local cache, and consume them from there.

The local cache has similar structure to what is on the Koji volume.

When Pungi needs some package, it has a path on the Koji volume. It will replace the topdir with the cache location. If such a file exists, it will be used. If it doesn't exist, it will be downloaded from Koji (by replacing the topdir with topurl).

Koji path                            /mnt/koji/packages/foo/1/1.fc38/data/signed/abcdef/noarch/foo-1-1.fc38.noarch.rpm
Koji URL    https://kojipkgs.fedoraproject.org/packages/foo/1/1.fc38/data/signed/abcdef/noarch/foo-1-1.fc38.noarch.rpm
Local path                  /mnt/compose/cache/packages/foo/1/1.fc38/data/signed/abcdef/noarch/foo-1-1.fc38.noarch.rpm
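The substitution from the table above amounts to replacing the topdir prefix; the topdir, cache, and topurl values below are the same example values as in the table:

```python
# Illustration of the topdir substitution shown in the table above.
topdir = "/mnt/koji"
cache_dir = "/mnt/compose/cache"
topurl = "https://kojipkgs.fedoraproject.org"

koji_path = topdir + "/packages/foo/1/1.fc38/data/signed/abcdef/noarch/foo-1-1.fc38.noarch.rpm"
local_path = koji_path.replace(topdir, cache_dir, 1)
download_url = koji_path.replace(topdir, topurl, 1)
print(local_path)
print(download_url)
```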

The packages can be hardlinked from this cache directory.
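The topdir-to-cache translation described above can be sketched as follows (the function name and arguments are illustrative, not Pungi's actual API):

```python
import os

def resolve_package(koji_path, topdir, topurl, cache_dir):
    """Map a path on the Koji volume to the local cache, falling back
    to a download URL when the cached copy does not exist yet.

    Illustrative sketch only; Pungi's real helpers differ.
    """
    # Relative location under the Koji topdir, e.g.
    # packages/foo/1/1.fc38/.../noarch/foo-1-1.fc38.noarch.rpm
    relpath = os.path.relpath(koji_path, topdir)
    local_path = os.path.join(cache_dir, relpath)
    if os.path.exists(local_path):
        # Already in the cache: use (and possibly hardlink) this copy.
        return ("cached", local_path)
    # Not cached yet: fetch over HTTP by swapping topdir for topurl.
    return ("download", topurl.rstrip("/") + "/" + relpath)
```

For the example paths above, a cache miss yields the kojipkgs URL, and a subsequent lookup for the same package finds the file under the cache directory.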


While the approach above allows each RPM to be downloaded only once, it will eventually result in the Koji volume being mirrored locally, even though most of the packages will no longer be needed.

There is a script pungi-cache-cleanup that can help with that. It can find and remove files from the cache that are no longer needed.

A file is no longer needed if it has a single link (meaning it is only in the cache, not in any compose), and it has mtime older than a given threshold.

It doesn’t make sense to delete files that are hardlinked in an existing compose as it would not save any space anyway.

The mtime check is meant to preserve files that are downloaded but not actually used in a compose, like a subpackage that is not included in any variant. Every time its existence in the local cache is checked, the mtime is updated.

Race conditions?

It should be safe to have multiple compose hosts share the same storage volume for generated composes and local cache.

If a cache file is accessed and it exists, there’s no risk of race condition.

If two composes need the same file at the same time and it is not present yet, one of them will take a lock on it and start downloading. The other will wait until the download is finished.

The lock is only valid for a set amount of time (5 minutes) to avoid issues where the downloading process is killed in a way that prevents it from releasing the lock.

If the file is large and the network slow, the limit may not be enough to finish the download. In that case the second process will steal the lock while the first process is still downloading, and the same file will be downloaded twice.

When the first process finishes the download, it will put the file into the local cache location. When the second process finishes, it will atomically replace it, but since the content is identical nothing changes.

If the first compose already managed to hardlink the file before it gets replaced, there will be two copies of the file present locally.

Integrity checking

There is minimal integrity checking. RPM packages belonging to real builds will be checked against the checksum provided by Koji hub.

There is no checking for scratch builds or any images.

Processing Comps Files

The comps file that Pungi takes as input is not really pure comps as used by tools like DNF. There are extensions used to customize how the file is processed.

The first step of Pungi processing is to retrieve the actual file. This can use anything that Exporting files from SCM supports.

The Pungi extensions are an arch attribute on the packagereq, group and environment tags. The value of this attribute is a comma-separated list of architectures.

The second step Pungi performs is creating a file for each architecture. This is done by removing all elements with an incompatible arch attribute. No additional clean-up is performed on this file. The resulting file is only used internally for the rest of the compose process.

The third and final step is to create a comps file for each Variant.Arch combination. This is the actual file that will be included in the compose. The starting file is the original input file, from which all elements with an incompatible architecture are removed. Then clean-up is performed by removing all empty groups, removing non-existent groups from environments and categories, and finally removing empty environments and categories. As a last step, groups not listed in the variants file are removed.
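The per-architecture filtering from the second step can be sketched against a toy comps fragment (the package names and arch values below are made up; Pungi's real comps wrapper is more involved):

```python
import xml.etree.ElementTree as ET

def filter_arch(comps_xml, arch):
    """Drop every element whose arch attribute does not include the
    target architecture. Elements without an arch attribute are kept."""
    root = ET.fromstring(comps_xml)
    # Walk a snapshot of the tree so removals do not disturb iteration.
    for parent in list(root.iter()):
        for child in list(parent):
            arches = child.get("arch")
            if arches and arch not in [a.strip() for a in arches.split(",")]:
                parent.remove(child)
    return root

comps = """<comps>
  <group>
    <packagelist>
      <packagereq arch="x86_64,aarch64">foo</packagereq>
      <packagereq arch="s390x">bar</packagereq>
      <packagereq>baz</packagereq>
    </packagelist>
  </group>
</comps>"""
```

Filtering this fragment for x86_64 keeps foo (listed) and baz (no arch restriction) while dropping bar, mirroring how the internal per-arch file is derived.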

Contributing to Pungi

Set up development environment

In order to work on Pungi, you should install a recent version of Fedora.


Fedora 29 is recommended because some packages are not available in newer Fedora releases, e.g. python2-libcomps.

Install required packages (for Python 2)

$ sudo dnf install -y krb5-devel gcc make libcurl-devel python2-devel python2-createrepo_c kobo-rpmlib yum python2-libcomps python2-libselinux


Install required packages (for Python 3)

$ sudo dnf install -y krb5-devel gcc make libcurl-devel python3-devel python3-createrepo_c python3-libcomps


Currently the development workflow for Pungi is on the master branch:

  • Make your own fork at https://pagure.io/pungi
  • Clone your fork locally (replacing $USERNAME with your own):

    git clone git@pagure.io:forks/$USERNAME/pungi.git
  • cd into your local clone and add the remote upstream for rebasing:

    cd pungi
    git remote add upstream git@pagure.io:pungi.git

    This workflow assumes that you never git commit directly to the master branch of your fork. This will make more sense when we cover rebasing below.

  • Create a topic branch based on master:

    git branch my_topic_branch master
    git checkout my_topic_branch
  • Make edits, changes, add new features, etc. and then make sure to pull from upstream master and rebase before submitting a pull request:

    # let's just say you edited setup.py for the sake of argument
    git checkout my_topic_branch
    # make changes to setup.py
    black setup.py
    git add setup.py
    git commit -s -m "added awesome feature to setup.py"
    # now we rebase
    git checkout master
    git pull --rebase upstream master
    git push origin master
    git push origin --tags
    git checkout my_topic_branch
    git rebase master
    # resolve merge conflicts if any as a result of your development in
    # your topic branch
    git push origin my_topic_branch

    In order for your commit to be merged:

    • you must sign-off on it. Use -s option when running git commit.
    • The code must be well formatted via black and pass flake8 checking. Run tox -e black,flake8 to do the check.
  • Create pull request in the pagure.io web UI
  • For convenience, here is a bash shell function that can be placed in your ~/.bashrc and called such as pullupstream pungi-4-devel that will automate a large portion of the rebase steps from above:

    pullupstream () {
      if [[ -z "$1" ]]; then
        printf "Error: must specify a branch name (e.g. - master, devel)\n"
      else
        pullup_startbranch=$(git describe --contains --all HEAD)
        git checkout $1
        git pull --rebase upstream master
        git push origin $1
        git push origin --tags
        git checkout ${pullup_startbranch}
      fi
    }


You must write unit tests for any new code (except for trivial changes). Any code without sufficient test coverage may not be merged.

To run all existing tests, the suggested method is to use tox.

$ sudo dnf install python3-tox -y

$ tox -e py3
$ tox -e py27

Alternatively you could create a virtualenv, install the dependencies and run the tests manually if you don't want to use tox.

$ sudo dnf install python3-virtualenvwrapper -y
$ mkvirtualenv --system-site-packages py3
$ workon py3
$ pip install -r requirements.txt -r test-requirements.txt
$ make test

# or with coverage
$ make test-coverage

If you need to run specific tests, pytest is recommended.

# Activate virtualenv first

# Run tests
$ pytest tests/test_config.py
$ pytest tests/test_config.py -k test_pkgset_mismatch_repos

In the tests/ directory there is a shell script test_compose.sh that you can use to try creating a miniature compose on dummy data. The actual data will be created by running make test-data in the project root.

$ sudo dnf -y install rpm-build createrepo_c isomd5sum genisoimage syslinux

# Activate virtualenv (the one created by tox could be used)
$ source .tox/py3/bin/activate

$ python setup.py develop
$ make test-data
$ make test-compose

This testing compose does not actually use all phases that are available, and there is no checking that the result is correct. It only tells you whether it crashed or not.


Even when it finishes successfully, it may print errors about repoclosure on Server-Gluster.x86_64 in test phase. This is not a bug.


You must write documentation for any new features and functional changes. Any code without sufficient documentation may not be merged.

To generate the documentation, run make doc in project root.

Testing Pungi

Test Data

Tests require test data and not all of it is available in git. You must create test repositories before running the tests:

make test-data

Requirements: createrepo_c, rpmbuild

Unit Tests

Unit tests cover functionality of Pungi python modules. You can run all of them at once:

make test

which is a shortcut for:

python2 setup.py test
python3 setup.py test

You can alternatively run individual tests:

cd tests
./<test>.py [<class>[.<test>]]

Functional Tests

Because a compose is quite a complex process and not everything is covered with unit tests yet, the easiest way to check that your changes did not break anything badly is to start a compose on a relatively small and well-defined package set:

cd tests


Daniel Mach


Sep 25, 2023 4.5 Pungi