
borg - Man Page

BorgBackup is a deduplicating backup program with optional compression and authenticated encryption.

Borg consists of a number of commands. Each command accepts a number of arguments and options and interprets various environment variables. The following sections will describe each command in detail.

Commands, options, parameters, paths and such are set in fixed-width. Option values are underlined. Borg has a few options that accept a fixed set of values (e.g. --encryption of borg init). Experimental features are marked with red stripes on the sides, like this paragraph.

Experimental features are not stable, which means that they may be changed in incompatible ways or even removed entirely without prior notice in following releases.

Positional Arguments and Options: Order matters

Borg only supports taking options (-s and --progress in the example) to the left or right of all positional arguments (repo::archive and path in the example), but not in between them:

borg create -s --progress repo::archive path  # good and preferred
borg create repo::archive path -s --progress  # also works
borg create -s repo::archive path --progress  # works, but ugly
borg create repo::archive -s --progress path  # BAD

This is due to a problem in the argparse module: https://bugs.python.org/issue15112

Repository URLs

Local filesystem (or locally mounted network filesystem):

/path/to/repo - filesystem path to repo directory, absolute path

path/to/repo - filesystem path to repo directory, relative path

Also, stuff like ~/path/to/repo or ~other/path/to/repo works (this is expanded by your shell).

Note: you may also prepend a file:// to a filesystem path to get URL style.

Remote repositories accessed via ssh user@host:

user@host:/path/to/repo - remote repo, absolute path

ssh://user@host:port/path/to/repo - same, alternative syntax, port can be given

Remote repositories with relative paths can be given using this syntax:

user@host:path/to/repo - path relative to current directory

user@host:~/path/to/repo - path relative to user's home directory

user@host:~other/path/to/repo - path relative to other's home directory

Note: giving user@host:/./path/to/repo or user@host:/~/path/to/repo or user@host:/~other/path/to/repo is also supported, but not required here.

Remote repositories with relative paths, alternative syntax with port:

ssh://user@host:port/./path/to/repo - path relative to current directory

ssh://user@host:port/~/path/to/repo - path relative to user's home directory

ssh://user@host:port/~other/path/to/repo - path relative to other's home directory

If you frequently need the same repo URL, it is a good idea to set the BORG_REPO environment variable to set a default for the repo URL:

export BORG_REPO='ssh://user@host:port/path/to/repo'

Then you can simply omit the repo URL wherever only a repo URL is needed and you want to use the default - it will be read from BORG_REPO.

Use :: syntax to give the repo URL when syntax requires giving a positional argument for the repo (e.g. borg mount :: /mnt).

Repository / Archive Locations

Many commands want either a repository (just give the repo URL, see above) or an archive location, which is a repo URL followed by ::archive_name.

Archive names must not contain the / (slash) character. For simplicity, maybe also avoid blanks or other characters that have special meaning on the shell or in a filesystem (borg mount will use the archive name as directory name).

If you have set BORG_REPO (see above) and an archive location is needed, use ::archive_name - the repo URL part is then read from BORG_REPO.
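As a sketch, this is how the abbreviations fit together (the URL below is a placeholder - substitute your own):

```shell
# Default repo URL in BORG_REPO lets you abbreviate locations:
export BORG_REPO='ssh://user@host:22/./backup'
# borg create ::archive-1 ~/files   # same as "$BORG_REPO"::archive-1
# borg mount :: /mnt                # bare repo argument becomes ::
```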


Logging

Borg writes all log output to stderr by default. Note that something showing up on stderr does not by itself indicate an error condition. Check the log levels of the messages and the return code of borg to determine error, warning or success conditions.

If you want to capture the log output to a file, just redirect it:

borg create repo::archive myfiles 2>> logfile

Custom logging configurations can be implemented via BORG_LOGGING_CONF.

The log level of the builtin logging configuration defaults to WARNING. This is because we want Borg to be mostly silent and only output warnings, errors and critical messages, unless output has been requested by supplying an option that implies output (e.g. --list or --progress).


Use --debug to set DEBUG log level - to get debug, info, warning, error and critical level output.

Use --info (or -v or --verbose) to set INFO log level - to get info, warning, error and critical level output.

Use --warning (default) to set WARNING log level - to get warning, error and critical level output.

Use --error to set ERROR log level - to get error and critical level output.

Use --critical to set CRITICAL log level - to get critical level output.

While you can set miscellaneous log levels, do not expect that every command will give different output at different log levels - it is just a possibility.


Options --critical and --error are provided for completeness; their usage is not recommended, as you might miss important information.

Return codes

Borg can exit with the following return codes (rc):

Return code   Meaning
0             success (logged as INFO)
1             generic warning (operation reached its normal end, but there were warnings -- you should check the log; logged as WARNING)
2             generic error (like a fatal error, a local or remote exception; the operation did not reach its normal end; logged as ERROR)
3..99         specific error (enabled by BORG_EXIT_CODES=modern)
100..127      specific warning (enabled by BORG_EXIT_CODES=modern)
128+N         killed by signal N (e.g. 137 == kill -9)

If you use --show-rc, the return code is also logged at the indicated level as the last log entry.
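A wrapper script can map the return code ranges above to severities. A rough sketch (the interpret_rc helper is illustrative, not part of borg; the 3..127 ranges assume BORG_EXIT_CODES=modern):

```shell
# Illustrative helper mapping borg return codes to severities,
# following the table above.
interpret_rc() {
    rc=$1
    if   [ "$rc" -eq 0 ];  then echo "success"
    elif [ "$rc" -eq 1 ];  then echo "generic warning"
    elif [ "$rc" -eq 2 ];  then echo "generic error"
    elif [ "$rc" -ge 3 ]   && [ "$rc" -le 99 ];  then echo "specific error"
    elif [ "$rc" -ge 100 ] && [ "$rc" -le 127 ]; then echo "specific warning"
    elif [ "$rc" -gt 128 ]; then echo "killed by signal $((rc - 128))"
    fi
}
# usage: borg create ::archive ~/files; interpret_rc $?
```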

Environment Variables

Borg uses some environment variables for automation:


BORG_REPO

When set, use the value to give the default repository location. If a command needs an archive parameter, you can abbreviate it as ::archive. If a command needs a repository parameter, you can either omit it or abbreviate it as ::, if a positional parameter is required.


BORG_PASSPHRASE

When set, use the value to answer the passphrase question for encrypted repositories. It is used both when a passphrase is needed to access an encrypted repo and when a new passphrase should initially be set when initializing an encrypted repo. See also BORG_NEW_PASSPHRASE.


BORG_PASSCOMMAND

When set, use the standard output of the command (trailing newlines are stripped) to answer the passphrase question for encrypted repositories. It is used both when a passphrase is needed to access an encrypted repo and when a new passphrase should initially be set when initializing an encrypted repo. Note that the command is executed without a shell, so variables like $HOME will work, but ~ won't. If BORG_PASSPHRASE is also set, it takes precedence. See also BORG_NEW_PASSPHRASE.


BORG_PASSPHRASE_FD

When set, specifies a file descriptor to read a passphrase from. Programs starting borg may choose to open an anonymous pipe and use it to pass a passphrase. This is safer than passing it via BORG_PASSPHRASE, because on some systems (e.g. Linux) the environment can be examined by other processes. If BORG_PASSPHRASE or BORG_PASSCOMMAND are also set, they take precedence.
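A minimal sketch of the mechanism (the read that borg performs is simulated here with `read`; the real invocation is shown commented out, and the file path is just for the demo):

```shell
# Hand borg a passphrase on file descriptor 3.
umask 077
printf 'example-passphrase\n' > /tmp/borg-fd-demo.txt
export BORG_PASSPHRASE_FD=3
exec 3< /tmp/borg-fd-demo.txt
# borg list ::            # borg would read the passphrase from fd 3
read -r demo_pass <&3     # illustration of what borg sees
exec 3<&-
rm -f /tmp/borg-fd-demo.txt
```

In a real wrapper you would usually use a pipe instead of a file so the passphrase never touches the disk.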


BORG_NEW_PASSPHRASE

When set, use the value to answer the passphrase question when a new passphrase is asked for. This variable is checked first. If it is not set, BORG_PASSPHRASE and BORG_PASSCOMMAND will also be checked. The main use case for this is to fully automate borg change-passphrase.


BORG_DISPLAY_PASSPHRASE

When set, use the value to answer the "display the passphrase for verification" question when defining a new passphrase for encrypted repositories.


BORG_EXIT_CODES

When set to "modern", the borg process will return more specific exit codes (rc). The default is "legacy", which returns rc 2 for all errors, 1 for all warnings, and 0 for success.


BORG_HOST_ID

Borg usually computes a host id from the FQDN plus the result of uuid.getnode(), which usually returns a unique id based on the MAC address of the network interface - except if that MAC happens to be all-zero, in which case it returns a random value, which is not what we want (because it breaks automatic stale lock removal). So, if you have an all-zero MAC address or other reasons to control the host id externally, just set this environment variable to a unique value. If all your FQDNs are unique, you can just use the FQDN. If not, use fqdn@uniqueid.
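One possible way to build such a value, assuming a Linux system with /etc/machine-id (the fallback string is arbitrary and just for illustration):

```shell
# Derive a stable, unique host id: fqdn@uniqueid
machine_id=$(cat /etc/machine-id 2>/dev/null || echo "my-unique-id")
export BORG_HOST_ID="$(uname -n)@${machine_id}"
```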


BORG_LOGGING_CONF

When set, use the given filename as an INI-style logging configuration. A basic example conf can be found at docs/misc/logging.conf.
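A minimal sketch of such a configuration (adapted loosely from the docs/misc/logging.conf idea; the handler logs everything to a file at DEBUG level - file paths are examples):

```shell
# Write a minimal Python-fileConfig-style logging config and point
# BORG_LOGGING_CONF at it.
cat > /tmp/borg-logging-demo.conf <<'EOF'
[loggers]
keys=root

[handlers]
keys=logfile

[formatters]
keys=terse

[logger_root]
level=DEBUG
handlers=logfile

[handler_logfile]
class=FileHandler
formatter=terse
args=('/tmp/borg.log', 'a')

[formatter_terse]
format=%(asctime)s %(levelname)s %(message)s
EOF
export BORG_LOGGING_CONF=/tmp/borg-logging-demo.conf
```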


BORG_RSH

When set, use this command instead of ssh. This can be used to specify ssh options, such as a custom identity file: ssh -i /path/to/private/key. See man ssh for other options. Using the --rsh CMD command line option overrides the environment variable.
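For example, to use a dedicated key and non-interactive ssh for backups (the key path is a placeholder):

```shell
# Custom transport command for remote repositories
export BORG_RSH='ssh -i /path/to/private/key -o BatchMode=yes'
# borg create ::archive ~/files   # transport now uses the custom ssh command
```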


BORG_REMOTE_PATH

When set, use the given path as the borg executable on the remote (defaults to "borg" if unset). Using the --remote-path PATH command line option overrides the environment variable.


BORG_FILES_CACHE_SUFFIX

When set to a value at least one character long, instructs borg to use a specifically named (based on the suffix) alternative files cache. This can be used to avoid loading and saving cache entries for backup sources other than the current sources.


BORG_FILES_CACHE_TTL

When set to a numeric value, this determines the maximum "time to live" for the files cache entries (default: 20). The files cache is used to quickly determine whether a file is unchanged. The FAQ explains this in more detail: It always chunks all my files, even unchanged ones!


BORG_SHOW_SYSINFO

When set to no (default: yes), system information (like OS, Python version, ...) is not shown in exceptions. Please only use this for good reasons, as it makes issues harder to analyze.


BORG_FUSE_IMPL

Choose the low-level FUSE implementation borg shall use for borg mount. This is a comma-separated list of implementation names; they are tried in the given order, e.g.:

  • pyfuse3,llfuse: default, first try to load pyfuse3, then try to load llfuse.
  • llfuse,pyfuse3: first try to load llfuse, then try to load pyfuse3.
  • pyfuse3: only try to load pyfuse3
  • llfuse: only try to load llfuse
  • none: do not try to load an implementation

BORG_SELFTEST

This can be used to influence borg's built-in self-tests. The default is to execute the tests at the beginning of each borg command invocation.

BORG_SELFTEST=disabled can be used to switch off the tests and rather save some time. Disabling is not recommended for normal borg users, but large scale borg storage providers can use this to optimize production servers after at least doing a one-time test borg (with selftests not disabled) when installing or upgrading machines / OS / borg.


BORG_WORKAROUNDS

A list of comma-separated strings that trigger workarounds in borg, e.g. to work around bugs in other software.

Currently known strings are:


basesyncfile

Use the simpler BaseSyncFile code to avoid issues with sync_file_range. You might need this to run borg on WSL (Windows Subsystem for Linux) or in systemd.nspawn containers on some architectures (e.g. ARM). Using this does not affect data safety, but might result in more bursty write-to-disk behaviour (not continuously streaming to disk).


retry_erofs

Retry opening a file without O_NOATIME if opening it with O_NOATIME caused EROFS. You will need this to make archives from volume shadow copies in WSL1 (Windows Subsystem for Linux 1).


authenticated_no_key

Work around a lost passphrase or key for an authenticated-mode repository (these are only authenticated, not encrypted). If the key is missing in the repository config, add key = anything there.

This workaround is only for emergencies and only to extract data from an affected repository (read-only access):

BORG_WORKAROUNDS=authenticated_no_key borg extract repo::archive

After you have extracted all data you need, you MUST delete the repository:

BORG_WORKAROUNDS=authenticated_no_key borg delete repo

Now you can init a fresh repo. Make sure you do not use the workaround any more.


ignore_invalid_archive_tam

Work around invalid archive TAMs created by borg < 1.2.5, see #7791.

This workaround likely needs to get used only once when following the upgrade instructions for CVE-2023-36811, see Pre-1.2.5 archives spoofing vulnerability (CVE-2023-36811).

In normal production operations, this workaround should never be used.

Some automatic "answerers" (if set, they automatically answer confirmation questions):

BORG_UNKNOWN_UNENCRYPTED_REPO_ACCESS_IS_OK

For "Warning: Attempting to access a previously unknown unencrypted repository"


BORG_RELOCATED_REPO_ACCESS_IS_OK

For "Warning: The repository at location ... was previously located at ..."


BORG_CHECK_I_KNOW_WHAT_I_AM_DOING

For "This is a potentially dangerous function..." (check --repair)


BORG_DELETE_I_KNOW_WHAT_I_AM_DOING

For "You requested to completely DELETE the repository including all archives it contains:"

Note: answers are case sensitive. Setting an invalid answer value might either give the default answer or ask you interactively, depending on whether retries are allowed (they are allowed by default). So please test your scripts interactively before running them non-interactively.
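For instance, an unattended script that should accept the "previously located at ..." prompt could export the corresponding answerer (use with care - the prompts exist for your safety):

```shell
# Automatically answer the relocated-repository warning in a script
export BORG_RELOCATED_REPO_ACCESS_IS_OK=yes
# borg create ::archive ~/files
```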

Directories and files:

BORG_BASE_DIR

Defaults to $HOME or ~$USER or ~ (in that order). If you want to move all borg-specific folders to a custom path at once, all you need to do is modify BORG_BASE_DIR: the other paths for cache, config etc. will adapt accordingly (assuming you didn't set them to different custom values).


BORG_CACHE_DIR

Defaults to $BORG_BASE_DIR/.cache/borg. If BORG_BASE_DIR is not explicitly set while the XDG env var XDG_CACHE_HOME is set, then $XDG_CACHE_HOME/borg is used instead. This directory contains the local cache and might need a lot of space for dealing with big repositories. Make sure you're aware of the associated security aspects of the cache location: Do I need to take security precautions regarding the cache?


BORG_CONFIG_DIR

Defaults to $BORG_BASE_DIR/.config/borg. If BORG_BASE_DIR is not explicitly set while the XDG env var XDG_CONFIG_HOME is set, then $XDG_CONFIG_HOME/borg is used instead. This directory contains all borg configuration directories; see the FAQ for a security advisory about the data in this directory: How important is the $HOME/.config/borg directory?


BORG_SECURITY_DIR

Defaults to $BORG_CONFIG_DIR/security. This directory contains information borg uses to track its usage of NONCES ("numbers used once" - usually in an encryption context) and other security-relevant data.


BORG_KEYS_DIR

Defaults to $BORG_CONFIG_DIR/keys. This directory contains keys for encrypted repositories.


BORG_KEY_FILE

When set, use the given path as the repository key file. Please note that this is only for rather special applications that externally fully manage the key files:

  • this setting only applies to the keyfile modes (not to the repokey modes).
  • using a full, absolute path to the key file is recommended.
  • all directories in the given path must exist.
  • this setting forces borg to use the key file at the given location.
  • the key file must either exist (for most commands) or will be created (borg init).
  • you need to give a different path for different repositories.
  • you need to point to the correct key file matching the repository the command will operate on.

TMPDIR

This is where temporary files are stored (which might need a lot of temporary space for some operations); see tempfile for details.


BORG_OPENSSL_PREFIX

Adds the given OpenSSL header file directory to the default locations (setup.py).


BORG_LIBLZ4_PREFIX

Adds the given prefix directory to the default locations. If 'include/lz4.h' is found there, Borg will be linked against the system liblz4 instead of a bundled implementation. (setup.py)


BORG_LIBZSTD_PREFIX

Adds the given prefix directory to the default locations. If 'include/zstd.h' is found there, Borg will be linked against the system libzstd instead of a bundled implementation. (setup.py)

Please note:

  • Be very careful when using the "yes" sayers; the warnings with prompts exist for your / your data's security and safety.
  • Also be very careful when putting your passphrase into a script; make sure it has appropriate file permissions (e.g. mode 600, root:root).
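One way to keep the passphrase out of the script body, sketched here with example paths: store it in a tightly-permissioned file and let BORG_PASSCOMMAND read it.

```shell
# Store the passphrase in a file only the owner can read
umask 077
printf 'example-passphrase\n' > /tmp/borg-pass-demo
chmod 600 /tmp/borg-pass-demo
export BORG_PASSCOMMAND='cat /tmp/borg-pass-demo'
# borg create ::archive ~/files   # borg runs BORG_PASSCOMMAND for the passphrase
```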

File systems

We strongly recommend against using Borg (or any other database-like software) on non-journaling file systems like FAT, since it is not possible to assume any consistency in case of power failures (or a sudden disconnect of an external drive or similar failures).

While Borg uses a data store that is resilient against these failures when used on journaling file systems, it is not possible to guarantee this with some hardware -- independent of the software used. We don't know a list of affected hardware.

If you are suspicious whether your Borg repository is still consistent and readable after one of the failures mentioned above occurred, run borg check --verify-data to make sure it is consistent.

Requirements for Borg repository file systems

  • Long file names
  • At least three directory levels with short names
  • Typically, file sizes up to a few hundred MB. Large repositories may require large files (>2 GB).
  • Up to 1000 files per directory.
  • rename(2) / MoveFile(Ex) should work as specified, i.e. on the same file system it should be a move (not a copy) operation, and in case of a directory it should fail if the destination exists and is not an empty directory, since this is used for locking.
  • Hardlinks are needed for borg upgrade (if the --inplace option is not used). Hardlinks are also used for safer and more secure file updating (e.g. of the repo config file), but the code also tries to work if hardlinks are not supported.


Units

To display quantities, Borg takes care of respecting the usual conventions of scale. Disk sizes are displayed in decimal, using powers of ten (so kB means 1000 bytes). For memory usage, binary prefixes are used, indicated by the IEC binary prefixes using powers of two (so KiB means 1024 bytes).

Date and Time

We format date and time conforming to ISO-8601, that is: YYYY-MM-DD and HH:MM:SS (24h clock).

For more information about that, see: https://xkcd.com/1179/

Unless otherwise noted, we display local date and time. Internally, we store and process date and time as UTC.

Resource Usage

Borg might use a lot of resources depending on the size of the data set it is dealing with.

If one uses Borg in a client/server way (with a ssh: repository), the resource usage occurs in part on the client and in another part on the server.

If one uses Borg as a single process (with a filesystem repo), all the resource usage occurs in that one process, so just add up client + server to get the approximate resource usage.

CPU client:
  • borg create: does chunking, hashing, compression, crypto (high CPU usage)
  • chunks cache sync: quite heavy on CPU, doing lots of hashtable operations.
  • borg extract: crypto, decompression (medium to high CPU usage)
  • borg check: similar to extract, but depends on options given.
  • borg prune / borg delete archive: low to medium CPU usage
  • borg delete repo: done on the server

It won't go beyond 100% of 1 core as the code is currently single-threaded. Especially higher zlib and lzma compression levels use significant amounts of CPU cycles. Crypto might be cheap on the CPU (if hardware accelerated) or expensive (if not).

CPU server:

It usually doesn't need much CPU, it just deals with the key/value store (repository) and uses the repository index for that.

  • borg check: the repository check computes the checksums of all chunks (medium CPU usage)
  • borg delete repo: low CPU usage

CPU (only for client/server operation):

When using borg in a client/server way with a ssh:-type repo, the ssh processes used for the transport layer will need some CPU on the client and on the server due to the crypto they are doing - esp. if you are pumping big amounts of data.

Memory (RAM) client:

The chunks index and the files index are read into memory for performance reasons. Might need big amounts of memory (see below). Compression, esp. lzma compression with high levels might need substantial amounts of memory.

Memory (RAM) server:

The server process will load the repository index into memory. Might need considerable amounts of memory, but less than on the client (see below).

Chunks index (client only):

Proportional to the amount of data chunks in your repo. Lots of chunks in your repo imply a big chunks index. It is possible to tweak the chunker params (see create options).

Files index (client only):

Proportional to the amount of files in your last backups. Can be switched off (see create options), but next backup might be much slower if you do. The speed benefit of using the files cache is proportional to file size.

Repository index (server only):

Proportional to the amount of data chunks in your repo. Lots of chunks in your repo imply a big repository index. It is possible to tweak the chunker params (see create options) to influence the amount of chunks being created.

Temporary files (client):

Reading data and metadata from a FUSE mounted repository will consume up to the size of all deduplicated, small chunks in the repository. Big chunks won't be locally cached.

Temporary files (server):

A non-trivial amount of data will be stored in the remote temp directory for each client that connects to it. For some remotes, this can fill the default temporary directory at /tmp. This can be remediated by ensuring the $TMPDIR, $TEMP, or $TMP environment variable is properly set for the sshd process. For some OSes, this can be done just by setting the correct value in the .bashrc (or equivalent login config file for other shells); however, in other cases it may be necessary to first enable PermitUserEnvironment yes in your sshd_config file, then add environment="TMPDIR=/my/big/tmpdir" at the start of the public key to be used in the authorized_keys file.
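A sketch of the server-side configuration just described (file paths are the usual OpenSSH defaults; adjust to your system):

```shell
# /etc/ssh/sshd_config:
#     PermitUserEnvironment yes
#
# ~/.ssh/authorized_keys (one line, prefix added to the client's key):
#     environment="TMPDIR=/my/big/tmpdir" ssh-ed25519 AAAA... client@example
```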

Cache files (client only):

Contains the chunks index and files index (plus a collection of single-archive chunk indexes, which might need huge amounts of disk space depending on archive count and size - see the FAQ about how to reduce this).

Network (only for client/server operation):

If your repository is remote, all deduplicated (and optionally compressed/encrypted) data of course has to go over the connection (ssh:// repo URL). If you use a locally mounted network filesystem, additionally some copy operations used for transaction support also go over the connection. If you back up multiple sources to one target repository, additional traffic happens for cache resynchronization.

Support for file metadata

Besides regular file and directory structures, Borg can preserve

  • symlinks (stored as symlink, the symlink is not followed)
  • special files:

    • character and block device files (restored via mknod)
    • FIFOs ("named pipes")
    • special file contents can be backed up in --read-special mode. By default the metadata to create them with mknod(2), mkfifo(2) etc. is stored.
  • hardlinked regular files, devices, FIFOs (considering all items in the same archive)
  • timestamps in nanosecond precision: mtime, atime, ctime
  • other timestamps: birthtime (on platforms supporting it)
  • permissions:

    • IDs of owning user and owning group
    • names of owning user and owning group (if the IDs can be resolved)
    • Unix Mode/Permissions (u/g/o permissions, suid, sgid, sticky)

On some platforms additional features are supported:

Platform                  ACLs [5]   xattr [6]   Flags [7]
Linux                     Yes        Yes         Yes [1]
macOS                     Yes        Yes         Yes (all)
FreeBSD                   Yes        Yes         Yes (all)
OpenBSD                   n/a        n/a         Yes (all)
NetBSD                    n/a        No [2]      Yes (all)
Solaris and derivatives   No [3]     No [3]      n/a
Windows (cygwin)          No [4]     No          No

Other Unix-like operating systems may work as well, but have not been tested at all.

Note that most of the platform-dependent features also depend on the file system. For example, ntfs-3g on Linux isn't able to convey NTFS ACLs.


[1] Only "nodump", "immutable", "compressed" and "append" are supported. Feature request #618 for more flags.


[2] Feature request #1332


[3] Feature request #1337


[4] Cygwin tries to map NTFS ACLs to permissions with varying degrees of success.


[5] The native access control list mechanism of the OS. This normally limits access to non-native ACLs. For example, NTFS ACLs aren't completely accessible on Linux with ntfs-3g.


[6] Extended attributes; key-value pairs attached to a file, mainly used by the OS. This includes resource forks on Mac OS X.


[7] aka BSD flags. The Linux set of flags [1] is portable across platforms. The BSDs define additional flags.

In case you are interested in more details (like formulas), please see Internals. For details on the available JSON output, refer to All about JSON: How to develop frontends.

Common options

All Borg commands share these options:

-h,  --help

show this help message and exit


--critical

work on log level CRITICAL


--error

work on log level ERROR


--warning

work on log level WARNING (default)

--info,  -v,  --verbose

work on log level INFO


--debug

enable debug output, work on log level DEBUG

--debug-topic TOPIC

enable TOPIC debugging (can be specified multiple times). The logger path is borg.debug.<TOPIC> if TOPIC is not fully qualified.

-p,  --progress

show progress information


--iec

format using IEC units (1KiB = 1024B)


--log-json

Output one JSON object per log line instead of formatted text.

--lock-wait SECONDS

wait at most SECONDS for acquiring a repository/cache lock (default: 1).


--bypass-lock

Bypass locking mechanism


--show-version

show/log the borg version


--show-rc

show/log the return code (rc)

--umask M

set umask to M (local only, default: 0077)

--remote-path PATH

use PATH as borg executable on the remote (default: "borg")

--remote-ratelimit RATE

deprecated, use --upload-ratelimit instead

--upload-ratelimit RATE

set network upload rate limit in kiByte/s (default: 0=unlimited)

--remote-buffer UPLOAD_BUFFER

deprecated, use --upload-buffer instead

--upload-buffer UPLOAD_BUFFER

set network upload buffer size in MiB. (default: 0=no buffer)


--consider-part-files

treat part files like normal files (e.g. to list/extract them)

--debug-profile FILE

Write execution profile in Borg format into FILE. For local use a Python-compatible file can be generated by suffixing FILE with ".pyprof".

--rsh RSH

Use this command to connect to the 'borg serve' process (default: 'ssh')

Option --bypass-lock allows you to access the repository while bypassing borg's locking mechanism. This is necessary if your repository is on a read-only storage where you don't have write permissions or capabilities and therefore cannot create a lock. Examples are repositories stored on a Bluray disc or a read-only network storage. Avoid this option if you are able to use locks as that is the safer way; see the warning below.


Warning: If you do use --bypass-lock, you are responsible for ensuring that no other borg instances have write access to the repository. Otherwise, you might experience errors and read broken data if changes to the repository are being made at the same time.


# Create an archive and log: borg version, files list, return code
$ borg create --show-version --list --show-rc /path/to/repo::my-files files

Borg Init

borg [common options] init [options] [REPOSITORY]


This command initializes an empty repository. A repository is a filesystem directory containing the deduplicated data from zero or more archives.

Encryption mode TLDR

The encryption mode can only be configured when creating a new repository - you can neither configure it on a per-archive basis nor change the encryption mode of an existing repository.

Use repokey:

borg init --encryption repokey /path/to/repo

Or repokey-blake2 depending on which is faster on your client machines (see below):

borg init --encryption repokey-blake2 /path/to/repo

Borg will:

  1. Ask you to come up with a passphrase.
  2. Create a borg key (which contains 3 random secrets. See Key files).
  3. Encrypt the key with your passphrase.
  4. Store the encrypted borg key inside the repository directory (in the repo config). This is why it is essential to use a secure passphrase.
  5. Encrypt and sign your backups to prevent anyone from reading or forging them unless they have the key and know the passphrase. Make sure to keep a backup of your key outside the repository - do not lock yourself out by "leaving your keys inside your car" (see borg key export). For remote backups the encryption is done locally - the remote machine never sees your passphrase, your unencrypted key or your unencrypted files. Chunking and id generation are also based on your key to improve your privacy.
  6. Use the key when extracting files to decrypt them and to verify that the contents of the backups have not been accidentally or maliciously altered.

Picking a passphrase

Make sure you use a good passphrase. Not too short, not too simple. The real encryption / decryption key is encrypted with / locked by your passphrase. If an attacker gets your key, he can't unlock and use it without knowing the passphrase.

Be careful with special or non-ascii characters in your passphrase:

  • Borg processes the passphrase as unicode (and encodes it as utf-8), so it does not have problems dealing with even the strangest characters.
  • BUT: that does not necessarily apply to your OS / VM / keyboard configuration.

So it is better to use a long passphrase made from simple ASCII characters than one that includes non-ASCII characters or characters that are hard or impossible to enter on a different keyboard layout.

You can change your passphrase for existing repos at any time, it won't affect the encryption/decryption key or other secrets.

More encryption modes

Only use --encryption none if you are OK with anyone who has access to your repository being able to read your backups and tamper with their contents without you noticing.

If you want "passphrase and having-the-key" security, use --encryption keyfile. The key will be stored in your home directory (in ~/.config/borg/keys).

If you do not want to encrypt the contents of your backups, but still want to detect malicious tampering, use --encryption authenticated. To work with authenticated repos normally, you will need the passphrase, but there is an emergency workaround; see the BORG_WORKAROUNDS=authenticated_no_key docs.

If BLAKE2b is faster than SHA-256 on your hardware, use --encryption authenticated-blake2, --encryption repokey-blake2 or --encryption keyfile-blake2. Note: for remote backups the hashing is done on your local machine.

Hash/MAC   Not encrypted, no auth   Not encrypted, but authenticated   Encrypted (AEAD w/ AES) and authenticated
SHA-256    none                     authenticated                      repokey, keyfile
BLAKE2b    n/a                      authenticated-blake2               repokey-blake2, keyfile-blake2

The authenticated and blake2 modes (authenticated, authenticated-blake2, repokey-blake2, keyfile-blake2) are new in Borg 1.1 and are not backwards-compatible with Borg 1.0.x.

On modern Intel/AMD CPUs (except very cheap ones), AES is usually hardware-accelerated. BLAKE2b is faster than SHA256 on Intel/AMD 64-bit CPUs (except AMD Ryzen and future CPUs with SHA extensions), which makes authenticated-blake2 faster than none and authenticated.

On modern ARM CPUs, NEON provides hardware acceleration for SHA256 making it faster than BLAKE2b-256 there. NEON accelerates AES as well.

Hardware acceleration is always used automatically when available.

repokey and keyfile use AES-CTR-256 for encryption and HMAC-SHA256 for authentication in an encrypt-then-MAC (EtM) construction. The chunk ID hash is HMAC-SHA256 as well (with a separate key). These modes are compatible with Borg 1.0.x.

repokey-blake2 and keyfile-blake2 are also authenticated encryption modes, but use BLAKE2b-256 instead of HMAC-SHA256 for authentication. The chunk ID hash is a keyed BLAKE2b-256 hash. These modes are new and not compatible with Borg 1.0.x.

authenticated mode uses no encryption, but authenticates repository contents through the same HMAC-SHA256 hash as the repokey and keyfile modes (it uses it as the chunk ID hash). The key is stored like repokey. This mode is new and not compatible with Borg 1.0.x.

authenticated-blake2 is like authenticated, but uses the keyed BLAKE2b-256 hash from the other blake2 modes. This mode is new and not compatible with Borg 1.0.x.

none mode uses no encryption and no authentication. It uses SHA256 as chunk ID hash. This mode is not recommended, you should rather consider using an authenticated or authenticated/encrypted mode. This mode has possible denial-of-service issues when running borg create on contents controlled by an attacker. Use it only for new repositories where no encryption is wanted and when compatibility with 1.0.x is important. If compatibility with 1.0.x is not important, use authenticated-blake2 or authenticated instead. This mode is compatible with Borg 1.0.x.


# Local repository, repokey encryption, BLAKE2b (often faster, since Borg 1.1)
$ borg init --encryption=repokey-blake2 /path/to/repo

# Local repository (no encryption)
$ borg init --encryption=none /path/to/repo

# Remote repository (accesses a remote borg via ssh)
# repokey: stores the (encrypted) key into <REPO_DIR>/config
$ borg init --encryption=repokey-blake2 user@hostname:backup

# Remote repository (accesses a remote borg via ssh)
# keyfile: stores the (encrypted) key into ~/.config/borg/keys/
$ borg init --encryption=keyfile user@hostname:backup

Borg Create

borg [common options] create [options] ARCHIVE [PATH...]


This command creates a backup archive containing all files found while recursively traversing all paths specified. Paths are added to the archive as they are given; that means if relative paths are desired, the command has to be run from the correct directory.

The slashdot hack in paths (recursion roots) is triggered by using /./: /this/gets/stripped/./this/gets/archived means to process that fs object, but strip the prefix on the left side of ./ from the archived items (in this case, this/gets/archived will be the path in the archived item).
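For example (using a hypothetical /srv/data/projects directory):

```shell
# without the hack, items would be archived as srv/data/projects/...
$ borg create /path/to/repo::projects /srv/data/projects

# with /./, the prefix left of it is stripped; items are archived as projects/...
$ borg create /path/to/repo::projects /srv/data/./projects
```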

When giving '-' as path, borg will read data from standard input and create a file 'stdin' in the created archive from that data. In some cases it's more appropriate to use --content-from-command, however. See section Reading from stdin below for details.

The archive will consume almost no disk space for files or parts of files that have already been stored in other archives.

The archive name needs to be unique. It must not end in '.checkpoint' or '.checkpoint.N' (with N being a number), because these names are used for checkpoints and treated in special ways.

In the archive name, you may use the following placeholders: {now}, {utcnow}, {fqdn}, {hostname}, {user} and some others.

Backup speed is increased by not reprocessing files that are already part of existing archives and weren't modified. The detection of unmodified files is done by comparing multiple file metadata values with previous values kept in the files cache.

This comparison can operate in different modes as given by --files-cache:

  • ctime,size,inode (default)
  • mtime,size,inode (default behaviour of borg versions older than 1.1.0rc4)
  • ctime,size (ignore the inode number)
  • mtime,size (ignore the inode number)
  • rechunk,ctime (all files are considered modified - rechunk, cache ctime)
  • rechunk,mtime (all files are considered modified - rechunk, cache mtime)
  • disabled (disable the files cache, all files considered modified - rechunk)
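For instance, when backing up a source tree on an sshfs mount, a mode without inode avoids spurious rechunking (repository and mount paths below are placeholders):

```shell
# compare ctime and size only, ignoring unstable inode numbers
$ borg create --files-cache ctime,size /path/to/repo::archive /mnt/sshfs/data
```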

inode number: better safety, but often unstable on network filesystems

Normally, detecting file modifications will take inode information into consideration to improve the reliability of file change detection. This is problematic for files located on sshfs and similar network file systems which do not provide stable inode numbers; such files will always be considered modified. You can use modes without inode in this case to improve performance, but reliability of change detection might be reduced.

ctime vs. mtime: safety vs. speed

  • ctime is a rather safe way to detect changes to a file (metadata and contents) as it can not be set from userspace. But a metadata-only change will already update the ctime (e.g. doing a chown or chmod to a file will change its ctime), so there might be some unnecessary chunking/hashing even without content changes. Also, some filesystems do not support ctime (change time).
  • mtime usually works and only updates if file contents were changed. But mtime can be set arbitrarily from userspace, e.g. to set it back to the value it had before a content change happened. This can be done maliciously as well as for legitimate reasons, but in both cases mtime-based cache modes can be problematic.
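The following non-borg sketch (assuming GNU coreutils) illustrates the mtime problem: mtime can be reset from userspace after a content change, while ctime still reflects it:

```shell
$ touch testfile
$ old_mtime=$(stat -c %Y testfile)
$ echo data >> testfile            # content change updates mtime and ctime
$ touch -d "@$old_mtime" testfile  # mtime is reset to its old value...
$ stat -c %Y testfile              # ...so an mtime-based check sees no change
$ stat -c %Z testfile              # but ctime still reflects the modification
```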

The mount points of filesystems or filesystem snapshots should be the same for every creation of a new archive to ensure fast operation. This is because the file cache that is used to determine changed files quickly uses absolute filenames. If this is not possible, consider creating a bind mount to a stable location.

The --progress option shows (from left to right) Original, Compressed and Deduplicated (O, C and D, respectively), then the Number of files (N) processed so far, followed by the currently processed path.

When using --stats, you will get some statistics about how much data was added - the "This Archive" deduplicated size there is most interesting as that is how much your repository will grow. Please note that the "All archives" stats refer to the state after creation. Also, the --stats and --dry-run options are mutually exclusive because the data is not actually compressed and deduplicated during a dry run.

For more help on include/exclude patterns, see the borg help patterns command output.

For more help on placeholders, see the borg help placeholders command output.

The --exclude patterns are not like tar. In tar --exclude .bundler/gems will exclude foo/.bundler/gems. In borg it will not, you need to use --exclude '*/.bundler/gems' to get the same effect.

In addition to using --exclude patterns, it is possible to use --exclude-if-present to specify the name of a filesystem object (e.g. a file or folder name) which, when contained within another folder, will prevent the containing folder from being backed up.  By default, the containing folder and all of its contents will be omitted from the backup.  If, however, you wish to only include the objects specified by --exclude-if-present in your backup, and not include any other contents of the containing folder, this can be enabled through using the --keep-exclude-tags option.
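As a sketch, assuming folders to be skipped are marked with a file named .nobackup (a hypothetical tag name):

```shell
# omit any folder that contains a ".nobackup" file (and all of its contents)
$ borg create --exclude-if-present .nobackup /path/to/repo::archive /home

# same, but keep the ".nobackup" tag objects themselves in the archive
$ borg create --exclude-if-present .nobackup --keep-exclude-tags /path/to/repo::archive /home
```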

The -x or --one-file-system option excludes directories that are mount points (and everything in them). It detects mount points by comparing the device numbers from the stat() output of a directory and its parent directory. Specifically, it excludes directories for which stat() reports a device number different from the device number of their parent. In general, be aware that there are directories whose device number differs from their parent's although the kernel does not consider them mount points, and also the other way around. Linux examples for this are bind mounts (possibly same device number, but always a mount point) and ALL subvolumes of a btrfs (different device number from parent but not necessarily a mount point). macOS examples are the apfs mounts of a typical macOS installation. Therefore, when using --one-file-system, you should double-check that the backup works as intended.

Item flags

--list outputs a list of all files, directories and other file system items it considered (no matter whether they had content changes or not). For each item, it prefixes a single-letter flag that indicates type and/or status of the item.

If you are interested only in a subset of that output, you can give e.g. --filter=AME and it will only show regular files with A, M or E status (see below).

An uppercase character represents the status of a regular file relative to the "files" cache (not relative to the repo -- this is an issue if the files cache is not used). Metadata is stored in any case and for 'A' and 'M' also new data chunks are stored. For 'U' all data chunks refer to already existing chunks.

  • 'A' = regular file, added (see also I am seeing 'A' (added) status for an unchanged file!? in the FAQ)
  • 'M' = regular file, modified
  • 'U' = regular file, unchanged
  • 'C' = regular file, it changed while we backed it up
  • 'E' = regular file, an error happened while accessing/reading this file

A lowercase character represents a file type other than a regular file; for these, borg usually just stores their metadata:

  • 'd' = directory
  • 'b' = block device
  • 'c' = char device
  • 'h' = regular file, hardlink (to already seen inodes)
  • 's' = symlink
  • 'f' = fifo

Other flags used include:

  • 'i' = backup data was read from standard input (stdin)
  • '-' = dry run, item was not backed up
  • 'x' = excluded, item was not backed up
  • '?' = missing status code (if you see this, please file a bug report!)
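For example, to log only files that were added, modified or caused an error during the backup:

```shell
# show only files with A (added), M (modified) or E (error) status
$ borg create --list --filter=AME /path/to/repo::archive ~/Documents
```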

Reading backup data from stdin

There are two methods to read from stdin. Either specify - as path and pipe directly to borg:

backup-vm --id myvm --stdout | borg create REPO::ARCHIVE -

Or use --content-from-command to have Borg manage the execution of the command and piping. If you do so, the first PATH argument is interpreted as the command to execute and any further arguments are treated as arguments to that command:

borg create --content-from-command REPO::ARCHIVE -- backup-vm --id myvm --stdout

-- is used to ensure that --id and --stdout are not interpreted as options of borg itself, but are passed on to backup-vm as its arguments.

The difference between the two approaches is that piping to borg creates an archive even if the command piping to borg exits with a failure. In this case, one can end up with truncated output being backed up. Using --content-from-command, in contrast, borg is guaranteed to fail without creating an archive should the command fail. The command is considered failed when it returned a non-zero exit code.

Reading from stdin yields just a stream of data without file metadata associated with it, and the files cache is not needed at all. So it is safe to disable it via --files-cache disabled and speed up backup creation a bit.

By default, the content read from stdin is stored in a file called 'stdin'. Use --stdin-name to change the name.
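For example, when piping a database dump to borg (mysqldump and the database name mydb are assumptions for illustration):

```shell
# store the dump under the name "dump.sql" instead of "stdin";
# the files cache is useless for stdin input, so disable it
$ mysqldump mydb | borg create --stdin-name dump.sql --files-cache disabled /path/to/repo::db-{now} -
```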

Feeding all file paths from externally

Usually, you give a starting path (recursion root) to borg and then borg automatically recurses, finds and backs up all fs objects contained in there (optionally considering include/exclude rules).

If you need more control and you want to give every single fs object path to borg (maybe implementing your own recursion or your own rules), you can use --paths-from-stdin or --paths-from-command (with the latter, borg will fail to create an archive should the command fail).

Borg supports paths with the slashdot hack to strip path prefixes here also. So, be careful not to unintentionally trigger that.


# Backup ~/Documents into an archive named "my-documents"
$ borg create /path/to/repo::my-documents ~/Documents

# same, but list all files as we process them
$ borg create --list /path/to/repo::my-documents ~/Documents

# Backup /mnt/disk/docs, but strip path prefix using the slashdot hack
$ borg create /path/to/repo::docs /mnt/disk/./docs

# Backup ~/Documents and ~/src but exclude pyc files
$ borg create /path/to/repo::my-files \
    ~/Documents                       \
    ~/src                             \
    --exclude '*.pyc'

# Backup home directories excluding image thumbnails (i.e. only
# /home/<one directory>/.thumbnails is excluded, not /home/*/*/.thumbnails etc.)
$ borg create /path/to/repo::my-files /home \
    --exclude 'sh:home/*/.thumbnails'

# Backup the root filesystem into an archive named "root-YYYY-MM-DD"
# use zlib compression (good, but slow) - default is lz4 (fast, low compression ratio)
$ borg create -C zlib,6 --one-file-system /path/to/repo::root-{now:%Y-%m-%d} /

# Backup onto a remote host ("push" style) via ssh to port 2222,
# logging in as user "borg" and storing into /path/to/repo
$ borg create ssh://borg@backup.example.org:2222/path/to/repo::{fqdn}-root-{now} /

# Backup a remote host locally ("pull" style) using sshfs
$ mkdir sshfs-mount
$ sshfs root@example.com:/ sshfs-mount
$ cd sshfs-mount
$ borg create /path/to/repo::example.com-root-{now:%Y-%m-%d} .
$ cd ..
$ fusermount -u sshfs-mount

# Make a big effort in fine granular deduplication (big chunk management
# overhead, needs a lot of RAM and disk space, see formula in internals
# docs - same parameters as borg < 1.0 or attic):
$ borg create --chunker-params buzhash,10,23,16,4095 /path/to/repo::small /smallstuff

# Backup a raw device (must not be active/in use/mounted at that time)
$ borg create --read-special --chunker-params fixed,4194304 /path/to/repo::my-sdx /dev/sdX

# Backup a sparse disk image (must not be active/in use/mounted at that time)
$ borg create --sparse --chunker-params fixed,4194304 /path/to/repo::my-disk my-disk.raw

# No compression (none)
$ borg create --compression none /path/to/repo::arch ~

# Super fast, low compression (lz4, default)
$ borg create /path/to/repo::arch ~

# Less fast, higher compression (zlib, N = 0..9)
$ borg create --compression zlib,N /path/to/repo::arch ~

# Even slower, even higher compression (lzma, N = 0..9)
$ borg create --compression lzma,N /path/to/repo::arch ~

# Only compress compressible data with lzma,N (N = 0..9)
$ borg create --compression auto,lzma,N /path/to/repo::arch ~

# Use short hostname, user name and current time in archive name
$ borg create /path/to/repo::{hostname}-{user}-{now} ~
# Similar, use the same datetime format that is default as of borg 1.1
$ borg create /path/to/repo::{hostname}-{user}-{now:%Y-%m-%dT%H:%M:%S} ~
# As above, but add nanoseconds
$ borg create /path/to/repo::{hostname}-{user}-{now:%Y-%m-%dT%H:%M:%S.%f} ~

# Backing up relative paths by moving into the correct directory first
$ cd /home/user/Documents
# The root directory of the archive will be "projectA"
$ borg create /path/to/repo::daily-projectA-{now:%Y-%m-%d} projectA

# Use external command to determine files to archive
# Use --paths-from-stdin with find to only backup files less than 1MB in size
$ find ~ -size -1000k | borg create --paths-from-stdin /path/to/repo::small-files-only
# Use --paths-from-command with find to only backup files from a given user
$ borg create --paths-from-command /path/to/repo::joes-files -- find /srv/samba/shared -user joe
# Use --paths-from-stdin with --paths-delimiter (for example, for filenames with newlines in them)
$ find ~ -size -1000k -print0 | borg create \
    --paths-from-stdin \
    --paths-delimiter "\0" \
    /path/to/repo::small-files-only

Borg Extract

borg [common options] extract [options] ARCHIVE [PATH...]


This command extracts the contents of an archive. By default the entire archive is extracted but a subset of files and directories can be selected by passing a list of PATHs as arguments. The file selection can further be restricted by using the --exclude option.

For more help on include/exclude patterns, see the borg help patterns command output.

By using --dry-run, you can do all extraction steps except actually writing the output data: reading metadata and data chunks from the repo, checking the hash/hmac, decrypting, decompressing.

--progress can be slower than no progress display, since it makes one additional pass over the archive metadata.


Currently, extract always writes into the current working directory ("."), so make sure you cd to the right place before calling borg extract.

When parent directories are not extracted (because of using file/directory selection or any other reason), borg can not restore parent directories' metadata, e.g. owner, group, permission, etc.


# Extract entire archive
$ borg extract /path/to/repo::my-files

# Extract entire archive and list files while processing
$ borg extract --list /path/to/repo::my-files

# Verify whether an archive could be successfully extracted, but do not write files to disk
$ borg extract --dry-run /path/to/repo::my-files

# Extract the "src" directory
$ borg extract /path/to/repo::my-files home/USERNAME/src

# Extract the "src" directory but exclude object files
$ borg extract /path/to/repo::my-files home/USERNAME/src --exclude '*.o'

# Restore a raw device (must not be active/in use/mounted at that time)
$ borg extract --stdout /path/to/repo::my-sdx | dd of=/dev/sdx bs=10M

Borg Check

borg [common options] check [options] [REPOSITORY_OR_ARCHIVE]


The check command verifies the consistency of a repository and its archives. It consists of two major steps:

  1. Checking the consistency of the repository itself. This includes checking the segment magic headers, and both the metadata and data of all objects in the segments. The read data is checked by size and CRC. Bit rot and other types of accidental damage can be detected this way. Running the repository check can be split into multiple partial checks using --max-duration. When checking a remote repository, please note that the checks run on the server and do not cause significant network traffic.
  2. Checking consistency and correctness of the archive metadata and optionally archive data (requires --verify-data). This includes ensuring that the repository manifest exists, the archive metadata chunk is present, and that all chunks referencing files (items) in the archive exist. This requires reading archive and file metadata, but not data. To cryptographically verify the file (content) data integrity pass --verify-data, but keep in mind that this requires reading all data and is hence very time consuming. When checking archives of a remote repository, archive checks run on the client machine because they require decrypting data and therefore the encryption key.

Both steps can also be run independently. Pass --repository-only to run the repository checks only, or pass --archives-only to run the archive checks only.

The --max-duration option can be used to split a long-running repository check into multiple partial checks. After the given number of seconds the check is interrupted. The next partial check will continue where the previous one stopped, until the full repository has been checked. Assuming a complete check would take 7 hours, then running a daily check with --max-duration=3600 (1 hour) would result in one full repository check per week. Doing a full repository check aborts any previous partial check; the next partial check will restart from the beginning. With partial repository checks you can run neither archive checks, nor enable repair mode. Consequently, if you want to use --max-duration you must also pass --repository-only, and must not pass --archives-only, nor --repair.
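A daily partial check as described above could be scheduled like this (sketch):

```shell
# run for at most one hour per invocation; over a week,
# the whole repository gets checked once
$ borg check --repository-only --max-duration=3600 /path/to/repo
```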

Warning: Please note that partial repository checks (i.e. running with --max-duration) can only perform non-cryptographic checksum checks on the segment files. A full repository check (i.e. without --max-duration) can also do a repository index check. For the same reason, enabling partial repository checks excludes the archive checks. Therefore, partial checks are mainly useful for very large repositories where a full check would take too long.

The --verify-data option will perform a full integrity verification (as opposed to checking the CRC32 of the segment) of data, which means reading the data from the repository, decrypting and decompressing it. It is a complete cryptographic verification and hence very time consuming, but will detect any accidental and malicious corruption. Tamper-resistance is only guaranteed for encrypted repositories against attackers without access to the keys. You can not use --verify-data with --repository-only.
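For example, a full cryptographic verification of all archive data:

```shell
# slow: reads, decrypts and decompresses all data in the repository
$ borg check --verify-data /path/to/repo
```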

About repair mode

The check command is a readonly task by default. If any corruption is found, Borg will report the issue and proceed with checking. To actually repair the issues found, pass --repair.


--repair is a POTENTIALLY DANGEROUS FEATURE and might lead to data loss! This does not just include data that was previously lost anyway, but might include more data for kinds of corruption it is not capable of dealing with. BE VERY CAREFUL!

Pursuant to the previous warning it is also highly recommended to test the reliability of the hardware running Borg with stress testing software. This especially includes storage and memory testers. Unreliable hardware might lead to additional data loss.

It is highly recommended to create a backup of your repository before running in repair mode (i.e. running it with --repair).

Repair mode will attempt to fix any corruptions found. Fixing corruptions does not mean recovering lost data: Borg can not magically restore data lost due to e.g. a hardware failure. Repairing a repository means sacrificing some data for the sake of the repository as a whole and the remaining data. Hence it is, by definition, a potentially lossy task.

In practice, repair mode hooks into both the repository and archive checks:

  1. When checking the repository's consistency, repair mode will try to recover as many objects from segments with integrity errors as possible, and ensure that the index is consistent with the data stored in the segments.
  2. When checking the consistency and correctness of archives, repair mode might remove whole archives from the manifest if their archive metadata chunk is corrupt or lost. On a chunk level (i.e. the contents of files), repair mode will replace corrupt or lost chunks with a same-size replacement chunk of zeroes. If a previously zeroed chunk reappears, repair mode will restore this lost chunk using the new chunk. Lastly, repair mode will also delete orphaned chunks (e.g. caused by read errors while creating the archive).

Most steps taken by repair mode have a one-time effect on the repository, like removing a lost archive from the repository. However, replacing a corrupt or lost chunk with an all-zero replacement will have an ongoing effect on the repository: When attempting to extract a file referencing an all-zero chunk, the extract command will distinctly warn about it. The FUSE filesystem created by the mount command will reject reading such a "zero-patched" file unless a special mount option is given.

As mentioned earlier, Borg might be able to "heal" a "zero-patched" file in repair mode, if all its previously lost chunks reappear (e.g. via a later backup). This is achieved by Borg not only keeping track of the all-zero replacement chunks, but also by keeping metadata about the lost chunks. In repair mode Borg will check whether a previously lost chunk reappeared and will replace the all-zero replacement chunk by the reappeared chunk. If all lost chunks of a "zero-patched" file reappear, this effectively "heals" the file. Consequently, if lost chunks were repaired earlier, it is advised to run --repair a second time after creating some new backups.
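A cautious repair workflow following the advice above might look like this (sketch; the cp step assumes a local repository):

```shell
# 1. back up the repository itself first
$ cp -a /path/to/repo /path/to/repo.before-repair
# 2. attempt the repair
$ borg check --repair /path/to/repo
# 3. after later backups possibly brought lost chunks back, repair again to "heal"
$ borg check --repair /path/to/repo
```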

Borg Rename

borg [common options] rename [options] ARCHIVE NEWNAME


This command renames an archive in the repository.

This results in a different archive ID.


$ borg create /path/to/repo::archivename ~
$ borg list /path/to/repo
archivename                          Mon, 2016-02-15 19:50:19

$ borg rename /path/to/repo::archivename newname
$ borg list /path/to/repo
newname                              Mon, 2016-02-15 19:50:19

Borg List

borg [common options] list [options] [REPOSITORY_OR_ARCHIVE] [PATH...]


This command lists the contents of a repository or an archive.

For more help on include/exclude patterns, see the borg help patterns command output.

The FORMAT specifier syntax

The --format option uses python's format string syntax.


$ borg list --format '{archive}{NL}' /path/to/repo

# {VAR:NUMBER} - pad to NUMBER columns.
# Strings are left-aligned, numbers are right-aligned.
# Note: time columns except ``isomtime``, ``isoctime`` and ``isoatime`` cannot be padded.
$ borg list --format '{archive:36} {time} [{id}]{NL}' /path/to/repo
ArchiveFoo                           Thu, 2021-12-09 10:22:28 [0b8e9a312bef3f2f6e2d0fc110c196827786c15eba0188738e81697a7fa3b274]
$ borg list --format '{mode} {user:6} {group:6} {size:8} {mtime} {path}{extra}{NL}' /path/to/repo::ArchiveFoo
-rw-rw-r-- user   user       1024 Thu, 2021-12-09 10:22:17 file-foo

# {VAR:<NUMBER} - pad to NUMBER columns left-aligned.
# {VAR:>NUMBER} - pad to NUMBER columns right-aligned.
$ borg list --format '{mode} {user:>6} {group:>6} {size:<8} {mtime} {path}{extra}{NL}' /path/to/repo::ArchiveFoo
-rw-rw-r--   user   user 1024     Thu, 2021-12-09 10:22:17 file-foo

The following keys are always available:

  • NEWLINE: OS dependent line separator
  • NL: alias of NEWLINE
  • NUL: NUL character for creating print0 / xargs -0 like output, see barchive and bpath keys below
  • TAB
  • CR
  • LF

Keys available only when listing archives in a repository:

  • archive: archive name interpreted as text (might be missing non-text characters, see barchive)
  • name: alias of "archive"
  • barchive: verbatim archive name, can contain any character except NUL
  • comment: archive comment interpreted as text (might be missing non-text characters, see bcomment)
  • bcomment: verbatim archive comment, can contain any character except NUL
  • id: internal ID of the archive
  • tam: TAM authentication state of this archive
  • start: time (start) of creation of the archive
  • time: alias of "start"
  • end: time (end) of creation of the archive
  • command_line: command line which was used to create the archive
  • hostname: hostname of host on which this archive was created
  • username: username of user who created this archive

Keys available only when listing files in an archive:

  • type
  • mode
  • uid
  • gid
  • user
  • group
  • path: path interpreted as text (might be missing non-text characters, see bpath)
  • bpath: verbatim POSIX path, can contain any character except NUL
  • source: link target for links (identical to linktarget)
  • linktarget
  • flags
  • size
  • csize: compressed size
  • dsize: deduplicated size
  • dcsize: deduplicated compressed size
  • num_chunks: number of chunks in this file
  • unique_chunks: number of unique chunks in this file
  • mtime
  • ctime
  • atime
  • isomtime
  • isoctime
  • isoatime
  • blake2b
  • blake2s
  • md5
  • sha1
  • sha224
  • sha256
  • sha384
  • sha3_224
  • sha3_256
  • sha3_384
  • sha3_512
  • sha512
  • xxh64: XXH64 checksum of this file (note: this is NOT a cryptographic hash!)
  • archiveid
  • archivename
  • extra: prepends {source} with " -> " for soft links and " link to " for hard links
  • health: either "healthy" (file ok) or "broken" (if file has all-zero replacement chunks)


$ borg list /path/to/repo
Monday                               Mon, 2016-02-15 19:15:11
repo                                 Mon, 2016-02-15 19:26:54
root-2016-02-15                      Mon, 2016-02-15 19:36:29
newname                              Mon, 2016-02-15 19:50:19

$ borg list /path/to/repo::root-2016-02-15
drwxr-xr-x root   root          0 Mon, 2016-02-15 17:44:27 .
drwxrwxr-x root   root          0 Mon, 2016-02-15 19:04:49 bin
-rwxr-xr-x root   root    1029624 Thu, 2014-11-13 00:08:51 bin/bash
lrwxrwxrwx root   root          0 Fri, 2015-03-27 20:24:26 bin/bzcmp -> bzdiff
-rwxr-xr-x root   root       2140 Fri, 2015-03-27 20:24:22 bin/bzdiff

$ borg list /path/to/repo::root-2016-02-15 --pattern "- bin/ba*"
drwxr-xr-x root   root          0 Mon, 2016-02-15 17:44:27 .
drwxrwxr-x root   root          0 Mon, 2016-02-15 19:04:49 bin
lrwxrwxrwx root   root          0 Fri, 2015-03-27 20:24:26 bin/bzcmp -> bzdiff
-rwxr-xr-x root   root       2140 Fri, 2015-03-27 20:24:22 bin/bzdiff

$ borg list /path/to/repo::archiveA --format="{mode} {user:6} {group:6} {size:8d} {isomtime} {path}{extra}{NEWLINE}"
drwxrwxr-x user   user          0 Sun, 2015-02-01 11:00:00 .
drwxrwxr-x user   user          0 Sun, 2015-02-01 11:00:00 code
drwxrwxr-x user   user          0 Sun, 2015-02-01 11:00:00 code/myproject
-rw-rw-r-- user   user    1416192 Sun, 2015-02-01 11:00:00 code/myproject/file.ext
-rw-rw-r-- user   user    1416192 Sun, 2015-02-01 11:00:00 code/myproject/file.text

$ borg list /path/to/repo/::archiveA --pattern '+ re:\.ext$' --pattern '- re:^.*$'
-rw-rw-r-- user   user    1416192 Sun, 2015-02-01 11:00:00 code/myproject/file.ext

$ borg list /path/to/repo/::archiveA --pattern '+ re:.ext$' --pattern '- re:^.*$'
-rw-rw-r-- user   user    1416192 Sun, 2015-02-01 11:00:00 code/myproject/file.ext
-rw-rw-r-- user   user    1416192 Sun, 2015-02-01 11:00:00 code/myproject/file.text

Borg Diff

borg [common options] diff [options] REPO::ARCHIVE1 ARCHIVE2 [PATH...]


This command finds differences (file contents, user/group/mode) between archives.

A repository location and an archive name must be specified for REPO::ARCHIVE1. ARCHIVE2 is just another archive name in the same repository (no repository location allowed).

For archives created with Borg 1.1 or newer diff automatically detects whether the archives are created with the same chunker params. If so, only chunk IDs are compared, which is very fast.

For archives prior to Borg 1.1 chunk contents are compared by default. If you did not create the archives with different chunker params, pass --same-chunker-params. Note that the chunker params changed from Borg 0.xx to 1.0.

For more help on include/exclude patterns, see the borg help patterns command output.


$ borg init -e=none testrepo
$ mkdir testdir
$ cd testdir
$ echo asdf > file1
$ dd if=/dev/urandom bs=1M count=4 > file2
$ touch file3
$ borg create ../testrepo::archive1 .

$ chmod a+x file1
$ echo "something" >> file2
$ borg create ../testrepo::archive2 .

$ echo "testing 123" >> file1
$ rm file3
$ touch file4
$ borg create ../testrepo::archive3 .

$ cd ..
$ borg diff testrepo::archive1 archive2
[-rw-r--r-- -> -rwxr-xr-x] file1
   +135 B    -252 B file2

$ borg diff testrepo::archive2 archive3
    +17 B      -5 B file1
added           0 B file4
removed         0 B file3

$ borg diff testrepo::archive1 archive3
    +17 B      -5 B [-rw-r--r-- -> -rwxr-xr-x] file1
   +135 B    -252 B file2
added           0 B file4
removed         0 B file3

$ borg diff --json-lines testrepo::archive1 archive3
{"path": "file1", "changes": [{"type": "modified", "added": 17, "removed": 5}, {"type": "mode", "old_mode": "-rw-r--r--", "new_mode": "-rwxr-xr-x"}]}
{"path": "file2", "changes": [{"type": "modified", "added": 135, "removed": 252}]}
{"path": "file4", "changes": [{"type": "added", "size": 0}]}
{"path": "file3", "changes": [{"type": "removed", "size": 0}]}

Borg Delete

borg [common options] delete [options] [REPOSITORY_OR_ARCHIVE] [ARCHIVE...]


This command deletes an archive from the repository or the complete repository.

Important: When deleting archives, repository disk space is not freed until you run borg compact.

When you delete a complete repository, the security info and local cache for it (if any) are also deleted. Alternatively, you can delete just the local cache with the --cache-only option, or keep the security info with the --keep-security-info option.

When in doubt, use --dry-run --list to see what would be deleted.

When using --stats, you will get some statistics about how much data was deleted - the "Deleted data" deduplicated size there is most interesting as that is how much your repository will shrink. Please note that the "All archives" stats refer to the state after deletion.

You can delete multiple archives by specifying a shell pattern to match multiple archives using the --glob-archives GLOB option (for more info on these patterns, see borg help patterns).

To avoid accidentally deleting archives, especially when using glob patterns, it might be helpful to use the --dry-run to test out the command without actually making any changes to the repository.


# delete a single backup archive:
$ borg delete /path/to/repo::Monday
# actually free disk space:
$ borg compact /path/to/repo

# delete all archives whose names begin with the machine's hostname followed by "-"
$ borg delete --glob-archives '{hostname}-*' /path/to/repo

# delete all archives whose names contain "-2012-"
$ borg delete --glob-archives '*-2012-*' /path/to/repo

# see what would be deleted if delete was run without --dry-run
$ borg delete --list --dry-run -a '*-May-*' /path/to/repo

# delete the whole repository and the related local cache:
$ borg delete /path/to/repo
You requested to completely DELETE the repository *including* all archives it contains:
repo                                 Mon, 2016-02-15 19:26:54
root-2016-02-15                      Mon, 2016-02-15 19:36:29
newname                              Mon, 2016-02-15 19:50:19
Type 'YES' if you understand this and want to continue: YES

Borg Prune

borg [common options] prune [options] [REPOSITORY]


The prune command prunes a repository by deleting all archives not matching any of the specified retention options.

Important: Repository disk space is not freed until you run borg compact.

This command is normally used by automated backup scripts that want to keep a certain number of historic backups. This retention policy is commonly referred to as the GFS (grandfather-father-son) backup rotation scheme.

Also, prune automatically removes checkpoint archives (incomplete archives left behind by interrupted backup runs) except if the checkpoint is the latest archive (and thus still needed). Checkpoint archives are not considered when comparing archive counts against the retention limits (--keep-X).

If a prefix is set with -P, then only archives that start with the prefix are considered for deletion and only those archives count towards the totals specified by the rules. Otherwise, all archives in the repository are candidates for deletion! There is no automatic distinction between archives representing different contents. These need to be distinguished by specifying matching prefixes.

If you have multiple sequences of archives with different data sets (e.g. from different machines) in one shared repository, use one prune call per data set that matches only the respective archives using the -P option.
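As a sketch, two data sets sharing one repository could be pruned separately like this (the webserver- and dbserver- prefixes are assumptions for illustration):

```shell
# prune each data set independently, matching only its own archives:
$ borg prune --keep-daily=7 --keep-weekly=4 -P webserver- /path/to/repo
$ borg prune --keep-daily=7 --keep-weekly=4 -P dbserver- /path/to/repo
```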

The --keep-within option takes an argument of the form "<int><char>", where char is "H", "d", "w", "m", "y". For example, --keep-within 2d means to keep all archives that were created within the past 48 hours. "1m" is taken to mean "31d". The archives kept with this option do not count towards the totals specified by any other options.

A good procedure is to thin out more and more the older your backups get. As an example, --keep-daily 7 means to keep the latest backup on each day, up to 7 most recent days with backups (days without backups do not count). The rules are applied from secondly to yearly, and backups selected by previous rules do not count towards those of later rules. The time that each backup starts is used for pruning purposes. Dates and times are interpreted in the local timezone, and weeks go from Monday to Sunday. Specifying a negative number of archives to keep means that there is no limit. As of borg 1.2.0, borg will retain the oldest archive if any of the secondly, minutely, hourly, daily, weekly, monthly, or yearly rules was not otherwise able to meet its retention target. This enables the first chronological archive to continue aging until it is replaced by a newer archive that meets the retention criteria.

The --keep-last N option does the same as --keep-secondly N (it will keep the last N archives under the assumption that you do not create more than one backup archive in the same second).

When using --stats, you will get some statistics about how much data was deleted - the "Deleted data" deduplicated size there is most interesting as that is how much your repository will shrink. Please note that the "All archives" stats refer to the state after pruning.


Be careful, prune is a potentially dangerous command, it will remove backup archives.

The default of prune is to apply to all archives in the repository unless you restrict its operation to a subset of the archives using --glob-archives. When using --glob-archives, be careful to choose a good matching pattern - e.g. do not use "foo*" if you do not also want to match "foobar".

It is strongly recommended to always run prune -v --list --dry-run ... first so you will see what it would do without it actually doing anything.

# Keep 7 end of day and 4 additional end of week archives.
# Do a dry-run without actually deleting anything.
$ borg prune -v --list --dry-run --keep-daily=7 --keep-weekly=4 /path/to/repo

# Same as above but only apply to archive names starting with the hostname
# of the machine followed by a "-" character:
$ borg prune -v --list --keep-daily=7 --keep-weekly=4 --glob-archives='{hostname}-*' /path/to/repo
# actually free disk space:
$ borg compact /path/to/repo

# Keep 7 end of day, 4 additional end of week archives,
# and an end of month archive for every month:
$ borg prune -v --list --keep-daily=7 --keep-weekly=4 --keep-monthly=-1 /path/to/repo

# Keep all backups in the last 10 days, 4 additional end of week archives,
# and an end of month archive for every month:
$ borg prune -v --list --keep-within=10d --keep-weekly=4 --keep-monthly=-1 /path/to/repo

There is also a visualized prune example in docs/misc/prune-example.txt:

borg prune visualized

Assume it is 2016-01-01, today's backup has not yet been made, you have
created at least one backup on each day in 2015 except on 2015-12-19 (no
backup made on that day), and you started backing up with borg on
2015-01-01.

This is what borg prune --keep-daily 14 --keep-monthly 6 --keep-yearly 1
would keep.

Backups kept by the --keep-daily rule are marked by a "d" to the right,
backups kept by the --keep-monthly rule are marked by a "m" to the right,
and backups kept by the --keep-yearly rule are marked by a "y" to the
right.

Calendar view

      January               February               March
Mo Tu We Th Fr Sa Su  Mo Tu We Th Fr Sa Su  Mo Tu We Th Fr Sa Su
          1y 2  3  4                     1                     1
 5  6  7  8  9 10 11   2  3  4  5  6  7  8   2  3  4  5  6  7  8
12 13 14 15 16 17 18   9 10 11 12 13 14 15   9 10 11 12 13 14 15
19 20 21 22 23 24 25  16 17 18 19 20 21 22  16 17 18 19 20 21 22
26 27 28 29 30 31     23 24 25 26 27 28     23 24 25 26 27 28 29
                                            30 31

       April                  May                   June
Mo Tu We Th Fr Sa Su  Mo Tu We Th Fr Sa Su  Mo Tu We Th Fr Sa Su
       1  2  3  4  5               1  2  3   1  2  3  4  5  6  7
 6  7  8  9 10 11 12   4  5  6  7  8  9 10   8  9 10 11 12 13 14
13 14 15 16 17 18 19  11 12 13 14 15 16 17  15 16 17 18 19 20 21
20 21 22 23 24 25 26  18 19 20 21 22 23 24  22 23 24 25 26 27 28
27 28 29 30           25 26 27 28 29 30 31  29 30m

        July                 August              September
Mo Tu We Th Fr Sa Su  Mo Tu We Th Fr Sa Su  Mo Tu We Th Fr Sa Su
       1  2  3  4  5                  1  2      1  2  3  4  5  6
 6  7  8  9 10 11 12   3  4  5  6  7  8  9   7  8  9 10 11 12 13
13 14 15 16 17 18 19  10 11 12 13 14 15 16  14 15 16 17 18 19 20
20 21 22 23 24 25 26  17 18 19 20 21 22 23  21 22 23 24 25 26 27
27 28 29 30 31m       24 25 26 27 28 29 30  28 29 30m

      October               November              December
Mo Tu We Th Fr Sa Su  Mo Tu We Th Fr Sa Su  Mo Tu We Th Fr Sa Su
          1  2  3  4                     1      1  2  3  4  5  6
 5  6  7  8  9 10 11   2  3  4  5  6  7  8   7  8  9 10 11 12 13
12 13 14 15 16 17 18   9 10 11 12 13 14 15  14 15 16 17d18d19 20d
19 20 21 22 23 24 25  16 17 18 19 20 21 22  21d22d23d24d25d26d27d
26 27 28 29 30 31m    23 24 25 26 27 28 29  28d29d30d31d

List view

--keep-daily 14     --keep-monthly 6     --keep-yearly 1
 1. 2015-12-31       (2015-12-31 kept     (2015-12-31 kept
 2. 2015-12-30        by daily rule)       by daily rule)
 3. 2015-12-29       1. 2015-11-30        1. 2015-01-01 (oldest)
 4. 2015-12-28       2. 2015-10-31
 5. 2015-12-27       3. 2015-09-30
 6. 2015-12-26       4. 2015-08-31
 7. 2015-12-25       5. 2015-07-31
 8. 2015-12-24       6. 2015-06-30
 9. 2015-12-23
10. 2015-12-22
11. 2015-12-21
12. 2015-12-20
    (no backup made on 2015-12-19)
13. 2015-12-18
14. 2015-12-17


2015-12-31 is kept due to the --keep-daily 14 rule (because it is applied
first), not due to the --keep-monthly or --keep-yearly rule.

The --keep-yearly 1 rule does not consider the December 31st backup because it
has already been kept due to the daily rule. There are no backups available
from previous years, so the --keep-yearly target of 1 backup is not satisfied.
Because of this, the 2015-01-01 archive (the oldest archive available) is kept.

The --keep-monthly 6 rule keeps Nov, Oct, Sep, Aug, Jul and Jun. December is
not considered for this rule, because that backup was already kept because of
the daily rule.

2015-12-17 is kept to satisfy the --keep-daily 14 rule - because no backup was
made on 2015-12-19. If a backup had been made on that day, the rule would not
have kept the one from 2015-12-17.

We did not include weekly, hourly, minutely or secondly rules to keep this
example simple. They all work in basically the same way.

The weekly rule is easy to understand roughly, but hard to understand in all
details. If interested, read "ISO 8601:2000 standard week-based year".

Borg Compact

borg [common options] compact [options] [REPOSITORY]


This command frees repository space by compacting segments.

Use this regularly to avoid running out of space - you do not need to use this after each borg command though. It is especially useful after deleting archives, because only compaction will really free repository space.

borg compact does not need a key, so it is possible to invoke it from either the client or the server.

Depending on the number of segments that need compaction, it may take a while, so consider using the --progress option.

A segment is compacted if the amount of saved space is above the percentage value given by the --threshold option. If omitted, a threshold of 10% is used. When using --verbose, borg will output an estimate of the freed space.
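As a sketch, compaction with a custom threshold and progress output might look like this (the 25 % value is only an illustration, not a recommendation):

```shell
# only compact segments where at least 25% of the space would be saved,
# showing progress while working:
$ borg compact --progress --threshold 25 /path/to/repo
```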

After upgrading borg (server) to 1.2+, you can use borg compact --cleanup-commits to clean up the numerous 17-byte commit-only segments that borg 1.1 did not clean up due to a bug. It is enough to do this once per repository. After cleaning up the commits, borg will also perform a normal compaction.

See Separate compaction in Additional Notes for more details.


# compact segments and free repo disk space
$ borg compact /path/to/repo

# same as above, plus clean up 17-byte commit-only segments
$ borg compact --cleanup-commits /path/to/repo

Borg Info

borg [common options] info [options] [REPOSITORY_OR_ARCHIVE]


This command displays detailed information about the specified archive or repository.

Please note that the deduplicated sizes of the individual archives do not add up to the deduplicated size of the repository ("all archives"), because the two mean different things:

  • This archive / deduplicated size = amount of data stored ONLY for this archive = unique chunks of this archive.
  • All archives / deduplicated size = amount of data stored in the repo = all chunks in the repository.

Borg archives can only contain a limited amount of file metadata. The size of an archive relative to this limit depends on a number of factors, mainly the number of files, the lengths of paths and other metadata stored for files. This is shown as utilization of maximum supported archive size.


$ borg info /path/to/repo::2017-06-29T11:00-srv
Archive name: 2017-06-29T11:00-srv
Archive fingerprint: b2f1beac2bd553b34e06358afa45a3c1689320d39163890c5bbbd49125f00fe5
Hostname: myhostname
Username: root
Time (start): Thu, 2017-06-29 11:03:07
Time (end): Thu, 2017-06-29 11:03:13
Duration: 5.66 seconds
Number of files: 17037
Command line: /usr/sbin/borg create /path/to/repo::2017-06-29T11:00-srv /srv
Utilization of max. archive size: 0%
                       Original size      Compressed size    Deduplicated size
This archive:               12.53 GB             12.49 GB              1.62 kB
All archives:              121.82 TB            112.41 TB            215.42 GB

                       Unique chunks         Total chunks
Chunk index:                 1015213            626934122

$ borg info /path/to/repo --last 1
Archive name: 2017-06-29T11:00-srv
Archive fingerprint: b2f1beac2bd553b34e06358afa45a3c1689320d39163890c5bbbd49125f00fe5
Hostname: myhostname
Username: root
Time (start): Thu, 2017-06-29 11:03:07
Time (end): Thu, 2017-06-29 11:03:13
Duration: 5.66 seconds
Number of files: 17037
Command line: /usr/sbin/borg create /path/to/repo::2017-06-29T11:00-srv /srv
Utilization of max. archive size: 0%
                       Original size      Compressed size    Deduplicated size
This archive:               12.53 GB             12.49 GB              1.62 kB
All archives:              121.82 TB            112.41 TB            215.42 GB

                       Unique chunks         Total chunks
Chunk index:                 1015213            626934122

$ borg info /path/to/repo
Repository ID: d857ce5788c51272c61535062e89eac4e8ef5a884ffbe976e0af9d8765dedfa5
Location: /path/to/repo
Encrypted: Yes (repokey)
Cache: /root/.cache/borg/d857ce5788c51272c61535062e89eac4e8ef5a884ffbe976e0af9d8765dedfa5
Security dir: /root/.config/borg/security/d857ce5788c51272c61535062e89eac4e8ef5a884ffbe976e0af9d8765dedfa5
                       Original size      Compressed size    Deduplicated size
All archives:              121.82 TB            112.41 TB            215.42 GB

                       Unique chunks         Total chunks
Chunk index:                 1015213            626934122

Borg Version

borg [common options] version [options] [REPOSITORY]


This command displays the borg client version / borg server version.

If a local repo is given, the client code directly accesses the repository, so the client version is also shown as the server version.

If a remote repo is given (e.g. ssh:), the remote borg is queried and its version is displayed as the server version.


# local repo (client uses 1.4.0 alpha version)
$ borg version /mnt/backup
1.4.0a / 1.4.0a

# remote repo (client uses 1.4.0 alpha, server uses 1.2.7 release)
$ borg version ssh://borg@borgbackup:repo
1.4.0a / 1.2.7

Due to the version tuple format used in borg client/server negotiation, only a simplified version is displayed (as provided by borg.version.format_version).

There is also borg --version to display a potentially more precise client version.

Borg Mount

borg [common options] mount [options] REPOSITORY_OR_ARCHIVE MOUNTPOINT [PATH...]


This command mounts an archive as a FUSE filesystem. This can be useful for browsing an archive or restoring individual files. Unless the --foreground option is given the command will run in the background until the filesystem is umounted.

The command borgfs provides a wrapper for borg mount. This can also be used in fstab entries: /path/to/repo /mnt/point fuse.borgfs defaults,noauto 0 0

To allow a regular user to use fstab entries, add the user option: /path/to/repo /mnt/point fuse.borgfs defaults,noauto,user 0 0

For FUSE configuration and mount options, see the mount.fuse(8) manual page.

Borg's default behavior is to use the archived user and group names of each file and map them to the system's respective user and group ids. Alternatively, using numeric-ids will instead use the archived user and group ids without any mapping.

The uid and gid mount options (implemented by Borg) can be used to override the user and group ids of all files (i.e., borg mount -o uid=1000,gid=1000).

The mount.fuse(8) man page references the user_id and group_id mount options (implemented by fuse), which specify the user and group id of the mount owner (i.e., the user who does the mounting). These are set automatically by libfuse (or by the filesystem if libfuse is not used), so you should not specify them manually. Unlike the uid and gid mount options, which affect all files, user_id and group_id affect only the user and group id of the mounted (base) directory.

Additional mount options supported by borg:

  • versions: when used with a repository mount, this gives a merged, versioned view of the files in the archives. EXPERIMENTAL, layout may change in future.
  • allow_damaged_files: by default damaged files (where missing chunks were replaced with runs of zeros by borg check --repair) are not readable and return EIO (I/O error). Set this option to read such files.
  • ignore_permissions: for security reasons the default_permissions mount option is internally enforced by borg. ignore_permissions can be given to not enforce default_permissions.

The BORG_MOUNT_DATA_CACHE_ENTRIES environment variable is meant for advanced users to tweak the performance. It sets the number of cached data chunks; additional memory usage can be up to ~8 MiB times this number. The default is the number of CPU cores.
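For example, raising the chunk cache to a hypothetical 32 entries (which could add up to roughly 32 × 8 MiB of memory usage) for a single mount might look like this:

```shell
# cache up to 32 data chunks for this mount (~8 MiB each):
$ BORG_MOUNT_DATA_CACHE_ENTRIES=32 borg mount /path/to/repo /tmp/mymountpoint
```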

When the daemonized process receives a signal or crashes, it does not unmount. Unmounting in these cases could cause an active rsync or similar process to unintentionally delete data.

When running in the foreground ^C/SIGINT unmounts cleanly, but other signals or crashes do not.

Borg Umount

borg [common options] umount [options] MOUNTPOINT


This command un-mounts a FUSE filesystem that was mounted with borg mount.

This is a convenience wrapper that just calls the platform-specific shell command - usually this is either umount or fusermount -u.


# Mounting the repository shows all archives.
# Archives are loaded lazily, expect some delay when navigating to an archive
# for the first time.
$ borg mount /path/to/repo /tmp/mymountpoint
$ ls /tmp/mymountpoint
root-2016-02-14 root-2016-02-15
$ borg umount /tmp/mymountpoint

# Mounting a specific archive is possible as well.
$ borg mount /path/to/repo::root-2016-02-15 /tmp/mymountpoint
$ ls /tmp/mymountpoint
bin  boot  etc      home  lib  lib64  lost+found  media  mnt  opt
root  sbin  srv  tmp  usr  var
$ borg umount /tmp/mymountpoint

# The "versions view" merges all archives in the repository
# and provides a versioned view on files.
$ borg mount -o versions /path/to/repo /tmp/mymountpoint
$ ls -l /tmp/mymountpoint/home/user/doc.txt/
total 24
-rw-rw-r-- 1 user group 12357 Aug 26 21:19 doc.cda00bc9.txt
-rw-rw-r-- 1 user group 12204 Aug 26 21:04 doc.fa760f28.txt
$ borg umount /tmp/mymountpoint

# Archive filters are supported.
# These are especially handy for the "versions view",
# which does not support lazy processing of archives.
$ borg mount -o versions --glob-archives '*-my-home' --last 10 /path/to/repo /tmp/mymountpoint

# Exclusion options are supported.
# These can speed up mounting and lower memory needs significantly.
$ borg mount /path/to/repo /tmp/mymountpoint only/that/path
$ borg mount --exclude '...' /path/to/repo /tmp/mymountpoint

# When using BORG_REPO env var, use :: as positional argument:
export BORG_REPO=/path/to/repo
# Mount the whole repo:
borg mount :: /tmp/mymountpoint
# Mount some specific archive:
borg mount ::root-2016-02-15 /tmp/mymountpoint


$ echo '/mnt/backup /tmp/myrepo fuse.borgfs defaults,noauto 0 0' >> /etc/fstab
$ echo '/mnt/backup::root-2016-02-15 /tmp/myarchive fuse.borgfs defaults,noauto 0 0' >> /etc/fstab
$ mount /tmp/myrepo
$ mount /tmp/myarchive
$ ls /tmp/myrepo
root-2016-02-01 root-2016-02-15
$ ls /tmp/myarchive
bin  boot  etc      home  lib  lib64  lost+found  media  mnt  opt  root  sbin  srv  tmp  usr  var

borgfs will be automatically provided if you used a distribution package or pip to install Borg. Users of the standalone binary will have to manually create a symlink (see Standalone Binary).

Borg Key Change-Passphrase

borg [common options] key change-passphrase [options] [REPOSITORY]


The key files used for repository encryption are optionally passphrase protected. This command can be used to change this passphrase.

Please note that this command only changes the passphrase, but not any secret protected by it (e.g. encryption/MAC keys or the chunker seed). Thus, changing the passphrase after the passphrase and the borg key have been compromised does not protect future (nor past) backups to the same repository.


# Create a key file protected repository
$ borg init --encryption=keyfile -v /path/to/repo
Initializing repository at "/path/to/repo"
Enter new passphrase:
Enter same passphrase again:
Remember your passphrase. Your data will be inaccessible without it.
Key in "/root/.config/borg/keys/mnt_backup" created.
Keep this key safe. Your data will be inaccessible without it.
Synchronizing chunks cache...
Archives: 0, w/ cached Idx: 0, w/ outdated Idx: 0, w/o cached Idx: 0.

# Change key file passphrase
$ borg key change-passphrase -v /path/to/repo
Enter passphrase for key /root/.config/borg/keys/mnt_backup:
Enter new passphrase:
Enter same passphrase again:
Remember your passphrase. Your data will be inaccessible without it.
Key updated

# Import a previously-exported key into the specified
# key file (creating or overwriting the output key)
# (keyfile repositories only)
$ BORG_KEY_FILE=/path/to/output-key borg key import /path/to/repo /path/to/exported

Fully automated using environment variables:

$ BORG_NEW_PASSPHRASE=old borg init -e=repokey repo
# now "old" is the current passphrase.
$ BORG_PASSPHRASE=old BORG_NEW_PASSPHRASE=new borg key change-passphrase repo
# now "new" is the current passphrase.

Borg Key Export

borg [common options] key export [options] [REPOSITORY] [PATH]


If repository encryption is used, the repository is inaccessible without the key. This command allows one to back up this essential key. Note that the backup produced does not include the passphrase itself (i.e. the exported key stays encrypted). In order to regain access to a repository, one needs both the exported key and the original passphrase.

There are three backup formats. The normal backup format is suitable for digital storage as a file. The --paper backup format is optimized for printing and typing in while importing, with per-line checks to reduce problems with manual input. The --qr-html format creates a printable HTML template with a QR code and a copy of the --paper-formatted key.

For repositories using keyfile encryption the key is saved locally on the system that is capable of doing backups. To guard against loss of this key, the key needs to be backed up independently of the main data backup.

For repositories using the repokey encryption the key is saved in the repository in the config file. A backup is thus not strictly needed, but guards against the repository becoming inaccessible if the file is damaged for some reason.


borg key export /path/to/repo > encrypted-key-backup
borg key export --paper /path/to/repo > encrypted-key-backup.txt
borg key export --qr-html /path/to/repo > encrypted-key-backup.html
# Or pass the output file as an argument instead of redirecting stdout:
borg key export /path/to/repo encrypted-key-backup
borg key export --paper /path/to/repo encrypted-key-backup.txt
borg key export --qr-html /path/to/repo encrypted-key-backup.html

Borg Key Import

borg [common options] key import [options] [REPOSITORY] [PATH]


This command restores a key previously backed up with the export command.

If the --paper option is given, the import will be an interactive process in which each line is checked for plausibility before proceeding to the next line. For this format PATH must not be given.
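A paper-key restore could then look like this sketch (no PATH is given; each line of the key is typed in interactively):

```shell
# interactively type in a key that was exported with --paper:
$ borg key import --paper /path/to/repo
```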

For repositories using keyfile encryption, the key file which borg key import writes to depends on several factors. If the BORG_KEY_FILE environment variable is set and non-empty, borg key import creates or overwrites the file named by $BORG_KEY_FILE. Otherwise, borg key import searches the $BORG_KEYS_DIR directory for a key file associated with the repository. If a key file is found in $BORG_KEYS_DIR, borg key import overwrites it; otherwise, borg key import creates a new key file in $BORG_KEYS_DIR.

Borg Upgrade

borg [common options] upgrade [options] [REPOSITORY]


Upgrade an existing, local Borg repository.

When you do not need borg upgrade

Not every change requires that you run borg upgrade.

You do not need to run it when:

  • moving your repository to a different place
  • upgrading to another point release (like 1.0.x to 1.0.y), except when noted otherwise in the changelog
  • upgrading from 1.0.x to 1.1.x, except when noted otherwise in the changelog

Borg 1.x.y upgrades

Archive TAM authentication:

Use borg upgrade --archives-tam REPO to add archive TAMs to all archives that are not TAM authenticated yet. This is a convenient method to just trust all archives present - if an archive does not have TAM authentication yet, a TAM will be added. Archives created by old borg versions < 1.0.9 do not have TAMs. Archives created by newer borg versions should have TAMs already. If you are in a high-risk environment, you should not just run this, but first verify that the archives are authentic and not malicious (i.e. have good content and a good timestamp). Borg 1.2.5+ needs all archives to be TAM authenticated for safety reasons.

This upgrade needs to be done once per repository.
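A minimal sketch of the once-per-repository step, to be run only after verifying the archives are authentic:

```shell
# add TAMs to all not-yet-authenticated archives (verify them first!):
$ borg upgrade --archives-tam /path/to/repo
```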

Manifest TAM authentication:

Use borg upgrade --tam REPO to require manifest authentication introduced with Borg 1.0.9 to address security issues. This means that modifying the repository after doing this with a version prior to 1.0.9 will raise a validation error, so only perform this upgrade after updating all clients using the repository to 1.0.9 or newer.

This upgrade should be done on each client for safety reasons.

If a repository is accidentally modified with a pre-1.0.9 client after this upgrade, use borg upgrade --tam --force REPO to remedy it.

If you routinely do this you might not want to enable this upgrade (which will leave you exposed to the security issue). You can reverse the upgrade by issuing borg upgrade --disable-tam REPO.

See https://borgbackup.readthedocs.io/en/stable/changes.html#pre-1-0-9-manifest-spoofing-vulnerability for details.
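The manifest TAM workflow described above, sketched as commands (to be run on each client):

```shell
# require manifest authentication (after all clients are >= 1.0.9):
$ borg upgrade --tam /path/to/repo

# remedy an accidental modification by a pre-1.0.9 client:
$ borg upgrade --tam --force /path/to/repo

# reverse the upgrade (leaves you exposed to the security issue):
$ borg upgrade --disable-tam /path/to/repo
```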

Attic and Borg 0.xx to Borg 1.x

This currently supports converting an Attic repository to Borg and also helps with converting Borg 0.xx to 1.0.

Currently, only LOCAL repositories can be upgraded (issue #465).

Please note that borg create (since 1.0.0) uses bigger chunks by default than old borg or attic did, so the new chunks won't deduplicate with the old chunks in the upgraded repository. See --chunker-params option of borg create and borg recreate.

borg upgrade will change the magic strings in the repository's segments to match the new Borg magic strings. The keyfiles found in $ATTIC_KEYS_DIR or ~/.attic/keys/ will also be converted and copied to $BORG_KEYS_DIR or ~/.config/borg/keys.

The cache files are converted, from $ATTIC_CACHE_DIR or ~/.cache/attic to $BORG_CACHE_DIR or ~/.cache/borg, but the cache layout between Borg and Attic changed, so it is possible the first backup after the conversion takes longer than expected due to the cache resync.

Upgrade should be able to resume if interrupted, although it will still iterate over all segments. If you want to start from scratch, use borg delete over the copied repository to make sure the cache files are also removed:

borg delete borg

Unless --inplace is specified, the upgrade process first creates a backup copy of the repository, in REPOSITORY.before-upgrade-DATETIME, using hardlinks. This requires that the repository and its parent directory reside on same filesystem so the hardlink copy can work. This takes longer than in place upgrades, but is much safer and gives progress information (as opposed to cp -al). Once you are satisfied with the conversion, you can safely destroy the backup copy.

WARNING: Running the upgrade in place will make the current copy unusable with older versions, with no way of going back to previous versions. This can PERMANENTLY DAMAGE YOUR REPOSITORY!  Attic CAN NOT READ BORG REPOSITORIES, as the magic strings have changed. You have been warned.


# Upgrade the borg repository to the most recent version.
$ borg upgrade -v /path/to/repo
making a hardlink copy in /path/to/repo.before-upgrade-2016-02-15-20:51:55
opening attic repository with borg and converting
no key file found for repository
converting repo index /path/to/repo/index.0
converting 1 segments...
converting borg 0.xx to borg current
no key file found for repository

Upgrading a passphrase encrypted attic repo

attic offered a "passphrase" encryption mode, but this was removed in borg 1.0 and replaced by the "repokey" mode (which stores the passphrase-protected encryption key into the repository config).

Thus, to upgrade a "passphrase" attic repo to a "repokey" borg repo, 2 steps are needed, in this order:

  • borg upgrade repo
  • borg key migrate-to-repokey repo
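The two steps above, sketched as commands:

```shell
# step 1: convert the attic repo to borg format:
$ borg upgrade /path/to/repo
# step 2: convert "passphrase" mode to "repokey" mode:
$ borg key migrate-to-repokey /path/to/repo
```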

Borg Recreate

borg [common options] recreate [options] [REPOSITORY_OR_ARCHIVE] [PATH...]


Recreate the contents of existing archives.

recreate is a potentially dangerous function and might lead to data loss (if used wrongly). BE VERY CAREFUL!

Important: Repository disk space is not freed until you run borg compact.

--exclude, --exclude-from, --exclude-if-present, --keep-exclude-tags and PATH have the exact same semantics as in "borg create", but they only check for files in the archives and not in the local file system. If PATHs are specified, the resulting archives will only contain files from these PATHs.

Note that all paths in an archive are relative, therefore absolute patterns/paths will not match (--exclude, --exclude-from, PATHs).

--recompress allows one to change the compression of existing data in archives. Due to how Borg stores compressed size information this might display incorrect information for archives that were not recreated at the same time. There is no risk of data loss by this.

--chunker-params will re-chunk all files in the archive; this can be used to make upgraded Borg 0.xx or Attic archives deduplicate with Borg 1.x archives.

USE WITH CAUTION. Depending on the PATHs and patterns given, recreate can be used to permanently delete files from archives. When in doubt, use --dry-run --verbose --list to see how patterns/PATHS are interpreted. See Item flags in borg create for details.

The archive being recreated is only removed after the operation completes. The archive that is built during the operation exists at the same time at "<ARCHIVE>.recreate". The new archive will have a different archive ID.

With --target the original archive is not replaced, instead a new archive is created.

When rechunking (or recompressing), space usage can be substantial - expect at least the entire deduplicated size of the archives using the previous chunker (or compression) params.

If you recently ran borg check --repair and it had to fix lost chunks with all-zero replacement chunks, please first run another backup for the same data and re-run borg check --repair afterwards to heal any archives that had lost chunks which are still generated from the input data.

Important: running borg recreate to re-chunk will remove the chunks_healthy metadata of all items with replacement chunks, so healing will not be possible any more after re-chunking (it is also unlikely it would ever work: due to the change of chunking parameters, the missing chunk likely will never be seen again even if you still have the data that produced it).


# Make old (Attic / Borg 0.xx) archives deduplicate with Borg 1.x archives.
# Archives created with Borg 1.1+ and the default chunker params are skipped
# (archive ID stays the same).
$ borg recreate /mnt/backup --chunker-params default --progress

# Create a backup with little but fast compression
$ borg create /mnt/backup::archive /some/files --compression lz4
# Then compress it - this might take longer, but the backup has already completed,
# so no inconsistencies from a long-running backup job.
$ borg recreate /mnt/backup::archive --recompress --compression zlib,9

# Remove unwanted files from all archives in a repository.
# Note the relative path for the --exclude option - archives only contain relative paths.
$ borg recreate /mnt/backup --exclude home/icke/Pictures/drunk_photos

# Change archive comment
$ borg create --comment "This is a comment" /mnt/backup::archivename ~
$ borg info /mnt/backup::archivename
Name: archivename
Fingerprint: ...
Comment: This is a comment
$ borg recreate --comment "This is a better comment" /mnt/backup::archivename
$ borg info /mnt/backup::archivename
Name: archivename
Fingerprint: ...
Comment: This is a better comment

Borg Import-Tar

borg [common options] import-tar [options] ARCHIVE TARFILE


This command creates a backup archive from a tarball.

When giving '-' as TARFILE, Borg will read a tar stream from standard input.

By default (--tar-filter=auto) Borg will detect whether the file is compressed based on its file extension and pipe the file through an appropriate filter:

  • .tar.gz or .tgz: gzip -d
  • .tar.bz2 or .tbz: bzip2 -d
  • .tar.xz or .txz: xz -d
  • .tar.zstd or .tar.zst: zstd -d
  • .tar.lz4: lz4 -d

Alternatively, a --tar-filter program may be explicitly specified. It should read compressed data from stdin and output an uncompressed tar stream on stdout.

Most documentation of borg create applies. Note that this command does not support excluding files.

import-tar is a lossy conversion: BSD flags, ACLs, extended attributes (xattrs), atime and ctime are not imported. Timestamp resolution is limited to whole seconds, not the nanosecond resolution otherwise supported by Borg.

A --sparse option (as found in borg create) is not supported.

import-tar reads POSIX.1-1988 (ustar), POSIX.1-2001 (pax), GNU tar, UNIX V7 tar and SunOS tar with extended attributes.

To import multiple tarballs into a single archive, they can be simply concatenated (e.g. using "cat") into a single file, and imported with an --ignore-zeros option to skip through the stop markers between them.
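The stop markers between concatenated tars can be observed with plain GNU tar as well (a sketch; GNU tar is assumed, and borg's --ignore-zeros option behaves analogously):

```shell
# Build two tiny tarballs and concatenate them (GNU tar assumed).
mkdir -p demo_a demo_b
echo one > demo_a/one.txt
echo two > demo_b/two.txt
tar -cf a.tar demo_a
tar -cf b.tar demo_b
cat a.tar b.tar > combined.tar

# A plain listing stops at the first end-of-archive marker,
# showing only the members of a.tar:
tar -tf combined.tar

# --ignore-zeros skips the stop marker and lists both archives:
tar --ignore-zeros -tf combined.tar
```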

Borg Export-Tar

borg [common options] export-tar [options] ARCHIVE FILE [PATH...]


This command creates a tarball from an archive.

When giving '-' as the output FILE, Borg will write a tar stream to standard output.

By default (--tar-filter=auto) Borg will detect whether the FILE should be compressed based on its file extension and pipe the tarball through an appropriate filter before writing it to FILE:

  • .tar.gz or .tgz: gzip
  • .tar.bz2 or .tbz: bzip2
  • .tar.xz or .txz: xz
  • .tar.zstd or .tar.zst: zstd
  • .tar.lz4: lz4

Alternatively, a --tar-filter program may be explicitly specified. It should read the uncompressed tar stream from stdin and write a compressed/filtered tar stream to stdout.

The generated tarball uses the GNU tar format.

export-tar is a lossy conversion: BSD flags, ACLs, extended attributes (xattrs), atime and ctime are not exported. Timestamp resolution is limited to whole seconds, not the nanosecond resolution otherwise supported by Borg.

A --sparse option (as found in borg extract) is not supported.

By default the entire archive is extracted but a subset of files and directories can be selected by passing a list of PATHs as arguments. The file selection can further be restricted by using the --exclude option.

For more help on include/exclude patterns, see the borg help patterns command output.

--progress can be slower than no progress display, since it makes one additional pass over the archive metadata.


# export as uncompressed tar
$ borg export-tar /path/to/repo::Monday Monday.tar

# exclude some types, compress using gzip
$ borg export-tar /path/to/repo::Monday Monday.tar.gz --exclude '*.so'

# use higher compression level with gzip
$ borg export-tar --tar-filter="gzip -9" testrepo::linux Monday.tar.gz

# export a tar, but instead of storing it on disk,
# upload it to a remote site using curl.
$ borg export-tar /path/to/repo::Monday - | curl --data-binary @- https://somewhere/to/POST

# remote extraction via "tarpipe"
$ borg export-tar /path/to/repo::Monday - | ssh somewhere "cd extracted; tar x"

Borg Serve

borg [common options] serve [options]


This command starts a repository server process. This command is usually not used manually.


borg serve has special support for ssh forced commands (see authorized_keys example below): if the environment variable SSH_ORIGINAL_COMMAND is set it will ignore some options given on the command line and use the values from the variable instead. This only applies to a carefully controlled allowlist of safe options. This list currently contains:

  • Options that control the log level and debug topics printed such as --verbose, --info, --debug, --debug-topic, etc.
  • --lock-wait to allow the client to control how long to wait before giving up and aborting the operation when another process is holding a lock.

Environment variables (such as BORG_XXX) contained in the original command sent by the client are not interpreted, but ignored. If BORG_XXX environment variables should be set on the borg serve side, then these must be set in system-specific locations like /etc/environment or in the forced command itself (example below).

# Allow an SSH keypair to only run borg, and only have access to /path/to/repo.
# Use key options to disable unneeded and potentially dangerous SSH functionality.
# This will help to secure an automated remote backup system.
$ cat ~/.ssh/authorized_keys
command="borg serve --restrict-to-path /path/to/repo",restrict ssh-rsa AAAAB3[...]

# Set a BORG_XXX environment variable on the "borg serve" side
$ cat ~/.ssh/authorized_keys
command="export BORG_XXX=value; borg serve [...]",restrict ssh-rsa [...]

The examples above use the restrict directive. This automatically blocks potentially dangerous SSH features, even ones added in future updates. Thus, this option should be preferred.

If you're using openssh-server < 7.2, however, you have to specify the SSH features to restrict explicitly and cannot simply use the restrict option, as it was only introduced in v7.2. In that case, we recommend no-port-forwarding,no-X11-forwarding,no-pty,no-agent-forwarding,no-user-rc.

Details about sshd usage: sshd(8)

SSH Configuration

borg serve's pipes (stdin/stdout/stderr) are connected to the sshd process on the server side. In the event that the SSH connection between borg serve and the client is disconnected or stuck abnormally (for example, due to a network outage), it can take a long time for sshd to notice the client is disconnected. In the meantime, sshd continues running, and as a result so does the borg serve process holding the lock on the repository. This can cause subsequent borg operations on the remote repository to fail with the error: Failed to create/acquire the lock.

In order to avoid this, it is recommended to perform the following additional SSH configuration:

Either in the client side's ~/.ssh/config file, or in the client's /etc/ssh/ssh_config file:

Host backupserver
        ServerAliveInterval 10
        ServerAliveCountMax 30

Replacing backupserver with the hostname, FQDN or IP address of the borg server.

This will cause the client to send a keepalive to the server every 10 seconds. If 30 consecutive keepalives are sent without a response (a time of 300 seconds), the ssh client process will be terminated, causing the borg process to terminate gracefully.

On the server side's sshd configuration file (typically /etc/ssh/sshd_config):

ClientAliveInterval 10
ClientAliveCountMax 30

This will cause the server to send a keep alive to the client every 10 seconds. If 30 consecutive keepalives are sent without a response (a time of 300 seconds), the server's sshd process will be terminated, causing the borg serve process to terminate gracefully and release the lock on the repository.

If you then run borg commands with --lock-wait 600, the borg serve processes have sufficient time to terminate once the SSH connection is torn down after the 300 seconds of failed keepalives.

You may, of course, modify the timeout values demonstrated above to values that suit your environment and use case.

Borg Config

borg [common options] config [options] [REPOSITORY] [NAME] [VALUE]


This command gets and sets options in a local repository or cache config file. For security reasons, this command only works on local repositories.

To delete a config value entirely, use --delete. To list the values of the configuration file or the default values, use --list. To get an existing key, pass only the key name. To set a key, pass both the key name and the new value. Keys can be specified in the format "section.name" or simply "name"; the section defaults to "repository" for the repo config and "cache" for the cache config.

By default, borg config manipulates the repository config file. Using --cache edits the repository cache's config file instead.


The repository & cache config files are some of the only directly manipulable parts of a repository that aren't versioned or backed up, so be careful when making changes!


# find cache directory
$ cd ~/.cache/borg/$(borg config /path/to/repo id)

# reserve some space
$ borg config /path/to/repo additional_free_space 2G

# make a repo append-only
$ borg config /path/to/repo append_only 1

Borg with-Lock

borg [common options] with-lock [options] REPOSITORY COMMAND [ARGS...]


This command runs a user-specified command while locking the repository. For example:

$ borg with-lock /mnt/borgrepo rsync -av /mnt/borgrepo /somewhere/else/borgrepo

It will first try to acquire the lock (make sure that no other operation is running in the repo), then execute the given command as a subprocess and wait for its termination, release the lock and return the user command's return code as borg's return code.


If you copy a repository with the lock held, the lock will be present in the copy. Thus, before using borg on the copy from a different host, you need to use "borg break-lock" on the copied repository, because Borg is cautious and does not automatically remove stale locks made by a different host.

Borg Break-Lock

borg [common options] break-lock [options] [REPOSITORY]


This command breaks the repository and cache locks. Please use it carefully and only while no borg process (on any machine) is trying to access the cache or the repository.

Borg Benchmark Crud

borg [common options] benchmark crud [options] REPOSITORY PATH


This command benchmarks borg CRUD (create, read, update, delete) operations.

It creates input data below the given PATH and backs up this data into the given REPO. The REPO must already exist (it can be a fresh empty repo or an existing one; the command will create / read / update / delete some archives named borg-benchmark-crud* there).

Make sure you have free space there, you'll need about 1GB each (+ overhead).

If your repository is encrypted and borg needs a passphrase to unlock the key, use:

BORG_PASSPHRASE=mysecret borg benchmark crud REPO PATH

Measurements are done with different input file sizes and counts. The file contents are very artificial (either all zero or all random), thus the measurement results do not necessarily reflect performance with real data. Also, due to the kind of content used, no compression is used in these benchmarks.

C- == borg create (1st archive creation, no compression, do not use files cache)

  • C-Z- == all-zero files. Full dedup, this is primarily measuring reader/chunker/hasher.
  • C-R- == random files. No dedup, measuring throughput through all processing stages.

R- == borg extract (extract archive, dry-run, do everything, but do not write files to disk)

  • R-Z- == all-zero files. Measuring heavily duplicated files.
  • R-R- == random files. No duplication here, measuring throughput through all processing stages, except writing to disk.

U- == borg create (2nd archive creation of unchanged input files, measure files cache speed)

The throughput value is kind of virtual here, it does not actually read the file.

  • U-Z- == needs to check the 2 all-zero chunks' existence in the repo.
  • U-R- == needs to check existence of a lot of different chunks in the repo.

D- == borg delete archive (delete last remaining archive, measure deletion + compaction)

  • D-Z- == few chunks to delete / few segments to compact/remove.
  • D-R- == many chunks to delete / many segments to compact/remove.

Please note that there might be quite some variance in these measurements. Try multiple measurements and use an otherwise idle machine (and network, if you use one).

Miscellaneous Help

borg help patterns

The path/filenames used as input for the pattern matching start from the currently active recursion root. You usually give the recursion root(s) when invoking borg and these can be either relative or absolute paths.

Starting with Borg 1.2, paths that are matched against patterns always appear relative. If you give /absolute/ as root, the paths going into the matcher will start with absolute/. If you give ../../relative as root, the paths will be normalized as relative/.

A directory exclusion pattern can end either with or without a slash ('/'). If it ends with a slash, such as some/path/, the directory will be included but not its content. If it does not end with a slash, such as some/path, both the directory and content will be excluded.

Borg supports different pattern styles. To define a non-default style for a specific pattern, prefix it with two characters followed by a colon ':' (i.e. fm:path/*, sh:path/**).

Fnmatch, selector fm:

This is the default style for --exclude and --exclude-from. These patterns use a variant of shell pattern syntax, with '*' matching any number of characters, '?' matching any single character, '[...]' matching any single character specified, including ranges, and '[!...]' matching any character not specified. For the purpose of these patterns, the path separator (backslash for Windows and '/' on other systems) is not treated specially. Wrap meta-characters in brackets for a literal match (i.e. [?] to match the literal character ?). For a path to match a pattern, the full path must match, or it must match from the start of the full path to just before a path separator. Except for the root path, paths will never end in the path separator when matching is attempted.  Thus, if a given pattern ends in a path separator, a '*' is appended before matching is attempted. A leading path separator is always removed.
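As an illustration (not borg's actual implementation), the fm: rules can be sketched with POSIX shell case globbing, which likewise does not treat the path separator specially. The helper name match_fm is hypothetical:

```shell
# Hypothetical sketch of fm: semantics: strip a leading '/', append '*'
# if the pattern ends in '/', then accept a full-path match or a match
# ending just before a path separator.
match_fm() {
  pattern="${1#/}"
  path="$2"
  case "$pattern" in
    */) pattern="${pattern}*" ;;
  esac
  case "$path" in
    $pattern|$pattern/*) return 0 ;;
    *) return 1 ;;
  esac
}

# 'home/*/junk' matches junk in any user's home (the '*' crosses '/'):
match_fm 'home/*/junk' 'home/user/junk' && echo "excluded"
# trailing slash: contents match, the directory itself does not:
match_fm 'home/user/cache/' 'home/user/cache/important' && echo "excluded"
```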

Shell-style patterns, selector sh:

This is the default style for --pattern and --patterns-from. Like fnmatch patterns these are similar to shell patterns. The difference is that the pattern may include **/ for matching zero or more directory levels, * for matching zero or more arbitrary characters with the exception of any path separator. A leading path separator is always removed.

Regular expressions, selector re:

Regular expressions similar to those found in Perl are supported. Unlike shell patterns regular expressions are not required to match the full path and any substring match is sufficient. It is strongly recommended to anchor patterns to the start ('^'), to the end ('$') or both. Path separators (backslash for Windows and '/' on other systems) in paths are always normalized to a forward slash ('/') before applying a pattern. The regular expression syntax is described in the Python documentation for the re module.

Path prefix, selector pp:

This pattern style is useful to match whole sub-directories. The pattern pp:root/somedir matches root/somedir and everything therein. A leading path separator is always removed.

Path full-match, selector pf:

This pattern style is (only) useful to match full paths. This is kind of a pseudo pattern as it can not have any variable or unspecified parts - the full path must be given. pf:root/file.ext matches root/file.ext only. A leading path separator is always removed.

Implementation note: this is implemented via very time-efficient O(1) hashtable lookups (this means you can have huge amounts of such patterns without impacting performance much). Due to that, this kind of pattern does not respect any context or order. If you use such a pattern to include a file, it will always be included (if the directory recursion encounters it). Other include/exclude patterns that would normally match will be ignored. Same logic applies for exclude.
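Conceptually, pf: patterns form a set of exact paths and membership is an exact full-line lookup; a sketch of the same idea using grep -Fx (fixed-string, whole-line match) on a hypothetical path list:

```shell
# Hypothetical pf:-style path set: exact full paths only.
printf '%s\n' 'root/file.ext' 'home/bobby/specialfile.txt' > pf_set.txt

# Exact path: found.
grep -Fxq 'root/file.ext' pf_set.txt && echo "match"
# No wildcards, no prefix semantics: a similar path does not match.
grep -Fxq 'root/file.ext.bak' pf_set.txt || echo "no match"
```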


re:, sh: and fm: patterns are all implemented on top of the Python SRE engine. It is very easy to formulate patterns for each of these types which requires an inordinate amount of time to match paths. If untrusted users are able to supply patterns, ensure they cannot supply re: patterns. Further, ensure that sh: and fm: patterns only contain a handful of wildcards at most.

Exclusions can be passed via the command line option --exclude. When used from within a shell, the patterns should be quoted to protect them from expansion.

The --exclude-from option permits loading exclusion patterns from a text file with one pattern per line. Empty lines, and lines starting with the number sign ('#') after removing whitespace on both ends, are ignored. The optional style selector prefix is also supported for patterns loaded from a file. Due to whitespace removal, paths with whitespace at the beginning or end can only be excluded using regular expressions.

To test your exclusion patterns without performing an actual backup you can run borg create --list --dry-run ....


# Exclude '/home/user/file.o' but not '/home/user/file.odt':
$ borg create -e '*.o' backup /

# Exclude '/home/user/junk' and '/home/user/subdir/junk' but
# not '/home/user/importantjunk' or '/etc/junk':
$ borg create -e 'home/*/junk' backup /

# Exclude the contents of '/home/user/cache' but not the directory itself:
$ borg create -e home/user/cache/ backup /

# The file '/home/user/cache/important' is *not* backed up:
$ borg create -e home/user/cache/ backup / /home/user/cache/important

# The contents of directories in '/home' are not backed up when their name
# ends in '.tmp'
$ borg create --exclude 're:^home/[^/]+\.tmp/' backup /

# Load exclusions from file
$ cat >exclude.txt <<EOF
# Comment line
# Example with spaces, no need to escape as it is processed by borg
some file with spaces.txt
EOF
$ borg create --exclude-from exclude.txt backup /

A more general and easier to use way to define filename matching patterns exists with the --pattern and --patterns-from options. Using these, you may specify the backup roots, default pattern styles and patterns for inclusion and exclusion.

Root path prefix R

A recursion root path starts with the prefix R, followed by a path (a plain path, not a file pattern). Use this prefix to have the root paths in the patterns file rather than as command line arguments.

Pattern style prefix P

To change the default pattern style, use the P prefix, followed by the pattern style abbreviation (fm, pf, pp, re, sh). All patterns following this line will use this style until another style is specified.

Exclude pattern prefix -

Use the prefix -, followed by a pattern, to define an exclusion. This has the same effect as the --exclude option.

Exclude no-recurse pattern prefix !

Use the prefix !, followed by a pattern, to define an exclusion that does not recurse into subdirectories. This saves time, but prevents include patterns from matching any files in subdirectories.

Include pattern prefix +

Use the prefix +, followed by a pattern, to define inclusions. This is useful to include paths that are covered in an exclude pattern and would otherwise not be backed up.


Via --pattern or --patterns-from you can define BOTH inclusion and exclusion of files using pattern prefixes + and -. With --exclude and --exclude-from ONLY excludes are defined.

The first matching pattern is used, so if an include pattern matches before an exclude pattern, the file is backed up. Note that a no-recurse exclude stops examination of subdirectories so that potential includes will not match - use normal excludes for such use cases.
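A toy sketch of this first-match rule (a hypothetical helper using plain shell globs, not borg's real sh: engine):

```shell
# Hypothetical first-match decision: each rule is "+ pattern" or
# "- pattern"; the first rule whose pattern matches the path (or a
# parent of the path) decides. Unmatched paths default to include.
decide() {
  path="$1"; shift
  for rule in "$@"; do
    prefix=${rule%% *}
    pat=${rule#* }
    case "$path" in
      $pat|$pat/*)
        [ "$prefix" = "+" ] && echo include || echo exclude
        return ;;
    esac
  done
  echo include
}

# Mirrors the pics/2018 example: the '+' rule wins for the good subdir.
decide 'pics/2018/good/a.jpg' '+ pics/2018/good' '- pics/2018'
decide 'pics/2018/other.jpg'  '+ pics/2018/good' '- pics/2018'
```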


# Define the recursion root
R /
# Exclude all iso files in any directory
- **/*.iso
# Explicitly include all inside etc and root
+ etc/**
+ root/**
# Exclude a specific directory under each user's home directories
- home/*/.cache
# Explicitly include everything in /home
+ home/**
# Explicitly exclude some directories without recursing into them
! re:^(dev|proc|run|sys|tmp)
# Exclude all other files and directories
# that are not specifically included earlier.
- **

It's possible that a sub-directory/file is matched while parent directories are not. In that case, parent directories are not backed up, and thus their user, group, permissions, etc. cannot be restored.

Note that the default pattern style for --pattern and --patterns-from is shell style (sh:), so those patterns behave similar to rsync include/exclude patterns. The pattern style can be set via the P prefix.

Patterns (--pattern) and excludes (--exclude) from the command line are considered first (in the order of appearance). Then patterns from --patterns-from are added. Exclusion patterns from --exclude-from files are appended last.


# backup pics, but not the ones from 2018, except the good ones:
# note: using = is essential to avoid cmdline argument parsing issues.
borg create --pattern=+pics/2018/good --pattern=-pics/2018 repo::arch pics

# use a file with patterns:
borg create --patterns-from patterns.lst repo::arch

The patterns.lst file could look like that:

# "sh:" pattern style is the default, so the following line is not needed:
P sh
R /
# can be rebuilt
- home/*/.cache
# they're downloads for a reason
- home/*/Downloads
# susan is a nice person
# include susans home
+ home/susan
# also back up this exact file
+ pf:home/bobby/specialfile.txt
# don't back up the other home directories
- home/*
# don't even look in /proc
! proc

You can specify recursion roots either on the command line or in a patternfile:

# these two commands do the same thing
borg create --exclude home/bobby/junk repo::arch /home/bobby /home/susan
borg create --patterns-from patternfile.lst repo::arch

The patternfile:

# note that excludes use fm: by default and patternfiles use sh: by default.
# therefore, we need to specify fm: to have the same exact behavior.
P fm
R /home/bobby
R /home/susan

- home/bobby/junk

This allows you to share the same patterns between multiple repositories without needing to specify them on the command line.

borg help placeholders

Repository (or Archive) URLs, --prefix, --glob-archives, --comment and --remote-path values support these placeholders:


{hostname}
The (short) hostname of the machine.

{fqdn}
The full name of the machine.

{reverse-fqdn}
The full name of the machine in reverse domain name notation.

{now}
The current local date and time, by default in ISO-8601 format. You can also supply your own format string, e.g. {now:%Y-%m-%d_%H:%M:%S}

{utcnow}
The current UTC date and time, by default in ISO-8601 format. You can also supply your own format string, e.g. {utcnow:%Y-%m-%d_%H:%M:%S}

{user}
The user name (or UID, if no name is available) of the user running borg.

{pid}
The current process ID.

{borgversion}
The version of borg, e.g.: 1.0.8rc1

{borgmajor}
The version of borg, only the major version, e.g.: 1

{borgminor}
The version of borg, only major and minor version, e.g.: 1.0

{borgpatch}
The version of borg, only major, minor and patch version, e.g.: 1.0.8

If literal curly braces need to be used, double them for escaping:

borg create /path/to/repo::{{literal_text}}


borg create /path/to/repo::{hostname}-{user}-{utcnow} ...
borg create /path/to/repo::{hostname}-{now:%Y-%m-%d_%H:%M:%S} ...
borg prune --glob-archives '{hostname}-*' ...

systemd uses a difficult, non-standard syntax for command lines in unit files (refer to the systemd.unit(5) manual page).

When invoking borg from unit files, pay particular attention to escaping, especially when using the now/utcnow placeholders, since systemd performs its own %-based variable replacement even in quoted text. To avoid interference from systemd, double all percent signs ({hostname}-{now:%Y-%m-%d_%H:%M:%S} becomes {hostname}-{now:%%Y-%%m-%%d_%%H:%%M:%%S}).
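The doubling can be done mechanically, for example with sed (a sketch; any way of doubling literal percent signs works):

```shell
# Escape strftime '%' characters for use inside a systemd unit file.
fmt='{hostname}-{now:%Y-%m-%d_%H:%M:%S}'
printf '%s\n' "$fmt" | sed 's/%/%%/g'
# prints {hostname}-{now:%%Y-%%m-%%d_%%H:%%M:%%S}
```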

borg help compression

Mixing different compression methods within one repo is no problem; deduplication is done on the source data chunks (not on the compressed or encrypted data).

If some specific chunk was once compressed and stored into the repo, creating another backup that also uses this chunk will not change the stored chunk. So if you use different compression specs for the backups, whichever stores a chunk first determines its compression. See also borg recreate.

Compression is lz4 by default. If you want something else, you have to specify what you want.

Valid compression specifiers are:


none
Do not compress.

lz4
Use lz4 compression. Very high speed, very low compression. (default)

zstd[,L]
Use zstd ("zstandard") compression, a modern wide-range algorithm. If you do not explicitly give the compression level L (ranging from 1 to 22), it will use level 3. Archives compressed with zstd are not compatible with borg < 1.1.4.

zlib[,L]
Use zlib ("gz") compression. Medium speed, medium compression. If you do not explicitly give the compression level L (ranging from 0 to 9), it will use level 6. Giving level 0 (means "no compression", but still has zlib protocol overhead) is usually pointless, you better use "none" compression.

lzma[,L]
Use lzma ("xz") compression. Low speed, high compression. If you do not explicitly give the compression level L (ranging from 0 to 9), it will use level 6. Giving levels above 6 is pointless and counterproductive because it does not compress better due to the buffer size used by borg - but it wastes lots of CPU cycles and RAM.

auto,C[,L]
Use a built-in heuristic to decide per chunk whether to compress or not. The heuristic tries with lz4 whether the data is compressible. For incompressible data, it will not use compression (uses "none"). For compressible data, it uses the given C[,L] compression - with C[,L] being any valid compression specifier.

obfuscate,SPEC,C[,L]
Use compressed-size obfuscation to make fingerprinting attacks based on the observable stored chunk size more difficult. Note:

  • You must combine this with encryption, or it won't make any sense.
  • Your repo size will be bigger, of course.
  • A chunk is limited by the constant MAX_DATA_SIZE (cur. ~20MiB).

The SPEC value determines how the size obfuscation works:

Relative random reciprocal size variation (multiplicative)

Size will increase by a factor, relative to the compressed data size. Smaller factors are used often, larger factors rarely.

Available factors:

1:     0.01 ..        100
2:     0.1  ..      1,000
3:     1    ..     10,000
4:    10    ..    100,000
5:   100    ..  1,000,000
6: 1,000    .. 10,000,000

Example probabilities for SPEC 1:

90   %  0.01 ..   0.1
 9   %  0.1  ..   1
 0.9 %  1    ..  10
 0.09% 10    .. 100

Randomly sized padding up to the given size (additive)

110: 1kiB (2 ^ (SPEC - 100))
120: 1MiB
123: 8MiB (max.)
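Since an additive SPEC encodes a power of two (maximum padding of 2^(SPEC - 100) bytes), the limits quoted above can be computed directly:

```shell
# Maximum padding per chunk for additive obfuscation SPECs.
for spec in 110 120 123; do
  echo "$spec: $((1 << (spec - 100))) bytes"
done
# 110 -> 1024 (1 kiB), 120 -> 1048576 (1 MiB), 123 -> 8388608 (8 MiB)
```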


borg create --compression lz4 REPO::ARCHIVE data
borg create --compression zstd REPO::ARCHIVE data
borg create --compression zstd,10 REPO::ARCHIVE data
borg create --compression zlib REPO::ARCHIVE data
borg create --compression zlib,1 REPO::ARCHIVE data
borg create --compression auto,lzma,6 REPO::ARCHIVE data
borg create --compression auto,lzma ...
borg create --compression obfuscate,110,none ...
borg create --compression obfuscate,3,auto,zstd,10 ...
borg create --compression obfuscate,2,zstd,6 ...

Debugging Facilities

There is a borg debug command with subcommands that are not intended for normal use and are potentially very dangerous if used incorrectly.

For example, borg debug put-obj and borg debug delete-obj will only do what their name suggests: put objects into repo / delete objects from repo.

Please note:

They exist to improve debugging capabilities without direct system access, e.g. in case you ever run into some severe malfunction. Use them only if you know what you are doing or if a trusted Borg developer tells you what to do.

Borg has a --debug-topic TOPIC option to enable specific debugging messages. Topics are generally not documented.

A --debug-profile FILE option exists which writes a profile of the main program's execution to a file. The format of these files is not directly compatible with the Python profiling tools, since these use the "marshal" format, which is not intended to be secure (quoting the Python docs: "Never unmarshal data received from an untrusted or unauthenticated source.").

The borg debug profile-convert command can be used to take a Borg profile and convert it to a profile file that is compatible with the Python tools.

Additionally, if the filename specified for --debug-profile ends with ".pyprof" a Python compatible profile is generated. This is only intended for local use by developers.

Additional Notes

Here are miscellaneous notes about topics that may not be covered in enough detail in the usage section.


The chunker params influence how input files are cut into pieces (chunks) which are then considered for deduplication. They also have a big impact on resource usage (RAM and disk space) as the amount of resources needed is (also) determined by the total amount of chunks in the repository (see Indexes / Caches memory usage for details).

--chunker-params=buzhash,10,23,16,4095 results in fine-grained deduplication and creates a large number of chunks, and thus uses a lot of resources to manage them. This is good for relatively small data volumes and if the machine has a good amount of free RAM and disk space.

--chunker-params=buzhash,19,23,21,4095 (default) results in coarse-grained deduplication and creates a much smaller number of chunks, and thus uses fewer resources. This is good for relatively big data volumes and if the machine has a relatively low amount of free RAM and disk space.
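The first three buzhash values are exponents of two (CHUNK_MIN_EXP, CHUNK_MAX_EXP, HASH_MASK_BITS; the last value is the hash window size in bytes), so the sizes behind the two examples above can be computed directly:

```shell
# Default buzhash,19,23,21,4095:
echo "min=$((1 << 19)) max=$((1 << 23)) target=$((1 << 21))"
# min=524288 (512 kiB), max=8388608 (8 MiB), target=2097152 (~2 MiB)

# Fine-grained buzhash,10,23,16,4095:
echo "min=$((1 << 10)) target=$((1 << 16))"
# min=1024 (1 kiB), target=65536 (~64 kiB) -> many more chunks
```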

--chunker-params=fixed,4194304 results in fixed 4MiB sized block deduplication and is more efficient than the previous example when used for block devices (like disks, partitions, LVM LVs) or raw disk image files.

--chunker-params=fixed,4096,512 results in fixed 4kiB sized blocks, but the first header block will only be 512B long. This might be useful to dedup files with 1 header + N fixed size data blocks. Be careful not to produce too many chunks (e.g. by using a small block size for huge files).

If you already have made some archives in a repository and you then change chunker params, this of course impacts deduplication as the chunks will be cut differently.

In the worst case (all files are big and were touched in between backups), this will store all content into the repository again.

Usually, it is not that bad though:

  • usually most files are not touched, so it will just re-use the old chunks it already has in the repo
  • files smaller than the (both old and new) minimum chunksize result in only one chunk anyway, so the resulting chunks are the same and deduplication will apply

If you switch chunker params to save resources for an existing repo that already has some backup archives, you will see an increasing effect over time, when more and more files have been touched and stored again using the bigger chunksize and all references to the smaller older chunks have been removed (by deleting / pruning archives).

If you want to see an immediate big effect on resource usage, it is better to start a new repository when changing chunker params.

For more details, see Chunks.

--noatime / --noctime

You can use these borg create options to not store the respective timestamp into the archive, in case you do not really need it.

Besides saving a little space for the not archived timestamp, it might also affect metadata stream deduplication: if only this timestamp changes between backups and is stored into the metadata stream, the metadata stream chunks won't deduplicate just because of that.

--nobsdflags / --noflags

You can use this to not query and store (or not extract and set) flags - in case you don't need them or if they are broken somehow for your fs.

On Linux, dealing with the flags needs some additional syscalls. Especially when dealing with lots of small files, this causes a noticeable overhead, so you can use this option also for speeding up operations.


borg uses a safe default umask of 077 (that means the files borg creates have only permissions for owner, but no permissions for group and others) - so there should rarely be a need to change the default behaviour.

This option only affects the process to which it is given. Thus, when you run borg in client/server mode and you want to change the behaviour on the server side, you need to use borg serve --umask=XXX ... as a ssh forced command in authorized_keys. The --umask value given on the client side is not transferred to the server side.

Also, if you choose to use the --umask option, always be consistent and use the same umask value so you do not create a mixup of permissions in a borg repository or with other files borg creates.
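A sketch of both sides (the key, paths and umask value are placeholders):

```shell
# Client side: pass the umask to the local borg process
borg create --umask 0077 /path/to/repo::archive path

# Server side (~/.ssh/authorized_keys): enforce the umask for borg serve
command="borg serve --umask=077 --restrict-to-path /path/to/repo" ssh-ed25519 AAAA... client@host
```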

--read-special

The --read-special option is special - you do not want to use it for normal full-filesystem backups, but rather after carefully picking some targets for it.

The option --read-special triggers special treatment for block and char device files as well as FIFOs. Instead of storing them as such a device (or FIFO), they will get opened, their content will be read and in the backup archive they will show up like a regular file.

Symlinks will also get special treatment if (and only if) they point to such a special file: instead of storing them as a symlink, the target special file will get processed as described above.

One intended use case of this is backing up the contents of one or multiple block devices, such as LVM snapshots, inactive LVs, or disk partitions.

You need to be careful about what you include when using --read-special, e.g. if you include /dev/zero, your backup will never terminate.

Restoring such files' content is currently only supported one at a time via the --stdout option (and you have to redirect stdout to wherever it shall go, maybe directly into an existing device file of your choice or indirectly via dd).

To some extent, mounting a backup archive with the backups of special files via borg mount and then loop-mounting the image files from inside the mount point will work. If you plan to access a lot of data in there, it likely will scale and perform better if you do not work via the FUSE mount.
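A sketch of the FUSE-mount approach (mount points and the LV name are hypothetical):

```shell
# Mount the archive via FUSE, then loop-mount an image file from inside it
borg mount /path/to/repo::arch /mnt/borg
mount -o ro,loop /mnt/borg/dev/vg0/root-snapshot /mnt/image
# ... inspect files under /mnt/image ...
umount /mnt/image
borg umount /mnt/borg
```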

Example

Imagine you have made some snapshots of logical volumes (LVs) you want to back up.


Then you create a backup archive of all these snapshots. The backup process will see a "frozen" state of the logical volumes, while the processes working in the original volumes continue changing the data stored there.

For some scenarios, this is a good method to get "crash-like" consistency (crash-like because it is the same state you would get if you just hit the reset button or your machine abruptly and completely crashed). This is better than no consistency at all and a good method for some use cases, but likely not good enough if you have databases running.

You also add the output of lvdisplay to your backup, so you can see the LV sizes in case you ever need to recreate and restore them.

After the backup has completed, you remove the snapshots again.

$ # create snapshots here
$ lvdisplay > lvdisplay.txt
$ borg create --read-special /path/to/repo::arch lvdisplay.txt /dev/vg0/*-snapshot
$ # remove snapshots here

Now, let's see how to restore some LVs from such a backup.

$ borg extract /path/to/repo::arch lvdisplay.txt
$ # create empty LVs with correct sizes here (look into lvdisplay.txt).
$ # we assume that you created an empty root and home LV and overwrite it now:
$ borg extract --stdout /path/to/repo::arch dev/vg0/root-snapshot > /dev/vg0/root
$ borg extract --stdout /path/to/repo::arch dev/vg0/home-snapshot > /dev/vg0/home

Separate compaction

Borg does not auto-compact the segment files in the repository at commit time (at the end of each repository-writing command) any more.

This is new since borg 1.2.0 and requires borg >= 1.2.0 on client and server.

This makes the repository behave similarly to append-only mode (see below) most of the time, until borg compact is invoked or an old client triggers auto-compaction.

This has some notable consequences:

  • repository space is not freed immediately when deleting / pruning archives
  • commands finish quicker
  • repository is more robust and might be easier to recover after damages (as it contains data in a more sequential manner, historic manifests, multiple commits - until you run borg compact)
  • user can choose when to run compaction (it should be done regularly, but not necessarily after each single borg command)
  • user can choose from where to invoke borg compact to do the compaction (from client or from server, it does not need a key)
  • less repo sync data traffic in case you create a copy of your repository by using a sync tool (like rsync, rclone, ...)

You can manually run compaction by invoking the borg compact command.
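For example (the threshold value is illustrative; see borg compact --help for details):

```shell
# Reclaim space freed by deleted / pruned archives
borg compact /path/to/repo

# Only rewrite segment files with at least 25% reclaimable space
borg compact --threshold 25 /path/to/repo
```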

Append-only mode (forbid compaction)

A repository can be made "append-only", which means that Borg will never overwrite or delete committed data (append-only refers to the segment files, but borg will also refuse to delete the repository completely).

If the borg compact command is used on a repo in append-only mode, there will be no warning or error, but no compaction will happen.

append-only is useful for scenarios where a backup client machine backs up remotely to a backup server using borg serve, since a hacked client machine cannot permanently delete backups on the server.

To activate append-only mode, set append_only to 1 in the repository config:

borg config /path/to/repo append_only 1

Note that you can go back-and-forth between normal and append-only operation with borg config; it's not a "one way trip."
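For example, to switch back to normal mode before maintenance such as pruning, and re-enable append-only afterwards:

```shell
# temporarily disable append-only mode for maintenance
borg config /path/to/repo append_only 0
# ... prune / compact here ...
borg config /path/to/repo append_only 1
```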

In append-only mode Borg will create a transaction log in the transactions file, where each line is a transaction and a UTC timestamp.

In addition, borg serve can act as if a repository is in append-only mode with its option --append-only. This can be very useful for fine-tuning access control in .ssh/authorized_keys:

command="borg serve --append-only ..." ssh-rsa <key used for not-always-trustable backup clients>
command="borg serve ..." ssh-rsa <key used for backup management>

Running borg init via a borg serve --append-only server will not create an append-only repository. Running borg init --append-only creates an append-only repository regardless of server settings.

Example

Suppose an attacker remotely deleted all backups, but your repository was in append-only mode. A transaction log in this situation might look like this:

transaction 1, UTC time 2016-03-31T15:53:27.383532
transaction 5, UTC time 2016-03-31T15:53:52.588922
transaction 11, UTC time 2016-03-31T15:54:23.887256
transaction 12, UTC time 2016-03-31T15:55:54.022540
transaction 13, UTC time 2016-03-31T15:55:55.472564

From your security logs you conclude the attacker gained access at 15:54:00 and all the backups were deleted or replaced by compromised backups. From the log you know that transactions 11 and later are compromised. Note that the transaction ID is the name of the last file in the transaction. For example, transaction 11 spans files 6 to 11.

In a real attack you'll likely want to keep the compromised repository intact to analyze what the attacker tried to achieve. It's also a good idea to work on a copy in case something goes wrong during the recovery. Since recovery is done by deleting some files, a hard link copy (cp -al) is sufficient.

The first step to reset the repository to transaction 5, the last uncompromised transaction, is to remove the hints.N, index.N and integrity.N files in the repository (these files are always expendable). In this example N is 13.

Then remove or move all segment files from the segment directories in data/ starting with file 6:

rm data/**/{6..13}

That's all there is to do in the repository.

If you access this rolled-back repository from a client that already has a cache for it, the cache will reflect a newer repository state than the repository actually has after the rollback.

Thus, you need to clear the cache:

borg delete --cache-only repo

The cache will get rebuilt automatically. Depending on repo size and archive count, it may take a while.

You also will need to remove ~/.config/borg/security/REPOID/manifest-timestamp.

Drawbacks

As data is only appended, and nothing removed, commands like prune or delete won't free disk space; they merely tag data as deleted in a new transaction.

Be aware that as soon as you write to the repo in non-append-only mode (e.g. prune, delete or create archives from an admin machine), it will remove the deleted objects permanently (including the ones that were already marked as deleted, but not removed, in append-only mode). Automated edits to the repository (such as a cron job running borg prune) will render append-only mode moot if data is deleted.

Even if an archive appears to be available, it is possible an attacker could delete just a few chunks from an archive and silently corrupt its data. While in append-only mode, this is reversible, but borg check should be run before a writing/pruning operation on an append-only repository to catch accidental or malicious corruption:

# run without append-only mode
borg check --verify-data repo && borg compact repo

Aside from checking repository and archive integrity, you may also want to manually check backups to ensure their contents seem correct.

Further considerations

Append-only mode is not respected by tools other than Borg. rm still works on the repository. Make sure that backup client machines only get to access the repository via borg serve.

Ensure that no remote access is possible if the repository is temporarily set to normal mode for e.g. regular pruning.

Further protections can be implemented, but are outside of Borg's scope. For example, file system snapshots or wrapping borg serve to set special permissions or ACLs on new data files.

SSH batch mode

When running Borg using an automated script, ssh might still ask for a password, even if there is an SSH key for the target server. Use this to make scripts more robust:

export BORG_RSH='ssh -oBatchMode=yes'

Authors

The Borg Collective (see AUTHORS file)


2024-07-04 1.4.0 Borg - Deduplicating Archiver