innobackupex man page

innobackupex — innobackupex Documentation

The innobackupex tool is a Perl script that acts as a wrapper for the xtrabackup C program. It is a patched version of the innobackup Perl script that Oracle distributes with the InnoDB Hot Backup tool. It enables more functionality by integrating xtrabackup and other functions such as file copying and streaming, and adds some convenience. It lets you perform point-in-time backups of InnoDB / XtraDB tables together with the schema definitions, MyISAM tables, and other portions of the server.

This manual section explains how to use innobackupex in detail.

Prerequisites

Connection and Privileges Needed

Percona XtraBackup needs to be able to connect to the database server and perform operations on the server and the datadir when creating a backup, in some prepare scenarios, and when restoring a backup. To do so, certain privileges and permissions must be granted for its execution.

Privileges refer to the operations that a user is permitted to perform in the database server. They are set at the database server and apply only to database users.

Permissions are those which permit a user to perform operations on the system, such as reading, writing or executing on a certain directory, or starting and stopping a system service. They are set at the system level and apply only to system users.

Whether xtrabackup or innobackupex is used, there are two actors involved: the user invoking the program - a system user - and the user performing actions in the database server - a database user. Note that these are different users in different places, even though they may have the same username.

All the invocations of innobackupex and xtrabackup in this documentation assume that the system user has the appropriate permissions, that you are providing the relevant options for connecting to the database server - besides the options for the action to be performed - and that the database user has adequate privileges.

Connecting to the server

The database user used to connect to the server and its password are specified with the --user and --password options:

$ innobackupex --user=DBUSER --password=SECRET /path/to/backup/dir/
$ innobackupex --user=LUKE  --password=US3TH3F0RC3 --stream=tar ./ | bzip2 -
$ xtrabackup --user=DVADER --password=14MY0URF4TH3R --backup --target-dir=/data/bkps/

If you don't use the --user option, Percona XtraBackup will assume the database user whose name matches the system user executing it.

Other Connection Options

Depending on your system, you may need to specify one or more of the following options to connect to the server:

Option    Description
--port    The port to use when connecting to the database server with TCP/IP.
--socket  The socket to use when connecting to the local database.
--host    The host to use when connecting to the database server with TCP/IP.

These options are passed to the mysql child process without alteration, see mysql --help for details.

NOTE:

In the case of multiple server instances, the correct connection parameters (port, socket, host) must be specified in order for innobackupex to talk to the correct server.
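For example, a sketch of backing up a second instance listening on a non-default port and socket (the port number and socket path are assumptions, adjust to your instance):

```shell
$ innobackupex --user=DBUSER --password=SECRET \
    --host=127.0.0.1 --port=3307 \
    --socket=/var/run/mysqld/mysqld3307.sock /path/to/backup/dir/
```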

Permissions and Privileges Needed

In order to perform a backup, you will need READ, WRITE and EXECUTE permissions at the filesystem level in the server's datadir.

The database user needs the following privileges on the tables / databases to be backed up:

· RELOAD and LOCK TABLES (unless the --no-lock option is specified) in order to run FLUSH TABLES WITH READ LOCK and FLUSH ENGINE LOGS prior to starting to copy the files; LOCK TABLES FOR BACKUP and LOCK BINLOG FOR BACKUP also require this privilege when Backup Locks are used,
· REPLICATION CLIENT in order to obtain the binary log position,
· CREATE TABLESPACE in order to import tables (see imp_exp_ibk),
· PROCESS in order to see all threads which are running on the server (see improved_ftwrl),
· SUPER in order to start/stop the slave threads in a replication environment, to use XtraDB Changed Page Tracking for xb_incremental, and for improved_ftwrl,
· CREATE in order to create the PERCONA_SCHEMA.xtrabackup_history database and table,
· INSERT in order to add history records to the PERCONA_SCHEMA.xtrabackup_history table,
· SELECT in order to use innobackupex --incremental-history-name or innobackupex --incremental-history-uuid, so that the feature can look up the innodb_to_lsn values in the PERCONA_SCHEMA.xtrabackup_history table.

The explanation of when these are used can be found in how_ibk_works.

An SQL example of creating a database user with the minimum privileges required for full backups would be:

mysql> CREATE USER 'bkpuser'@'localhost' IDENTIFIED BY 's3cret';
mysql> GRANT RELOAD, LOCK TABLES, REPLICATION CLIENT ON *.* TO 'bkpuser'@'localhost';
mysql> FLUSH PRIVILEGES;
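If you also use the features that need the extra privileges listed above (the backup history table, table import, improved FTWRL handling, slave control), the same user needs additional grants. A hedged sketch, reusing the 'bkpuser'@'localhost' account from the minimal example and assuming all of those features are in use:

```shell
$ mysql -u root -p <<'SQL'
-- server-wide privileges for importing tables, seeing all threads
-- and starting/stopping slave threads
GRANT CREATE TABLESPACE, PROCESS, SUPER ON *.* TO 'bkpuser'@'localhost';
-- privileges for the PERCONA_SCHEMA.xtrabackup_history table
GRANT CREATE, INSERT, SELECT ON PERCONA_SCHEMA.* TO 'bkpuser'@'localhost';
FLUSH PRIVILEGES;
SQL
```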

The Backup Cycle - Full Backups

Creating a Backup with innobackupex

innobackupex is the tool which provides functionality to backup a whole MySQL database instance using the xtrabackup in combination with tools like xbstream and xbcrypt.

To create a full backup, invoke the script with the options needed to connect to the server and only one argument: the path to the directory where the backup will be stored

$ innobackupex --user=DBUSER --password=DBUSERPASS /path/to/BACKUP-DIR/

and check the last line of the output for a confirmation message:

innobackupex: Backup created in directory '/path/to/BACKUP-DIR/2013-03-25_00-00-09'
innobackupex: MySQL binlog position: filename 'mysql-bin.000003', position 1946
111225 00:00:53  innobackupex: completed OK!

The backup will be stored within a time stamped directory created in the provided path, /path/to/BACKUP-DIR/2013-03-25_00-00-09 in this particular example.

Under the hood

innobackupex calls the xtrabackup binary to back up all the data of InnoDB tables (see ../xtrabackup_bin/creating_a_backup for details on this process) and copies all the table definitions in the database (.frm files), data and files related to MyISAM, MERGE (references to other tables), CSV and ARCHIVE tables, along with triggers and database configuration information, to a time stamped directory created in the provided path.

It will also create several metadata files for convenience in the created directory.

Other options to consider

The --no-timestamp option

This option tells innobackupex not to create a time stamped directory to store the backup:

$ innobackupex --user=DBUSER --password=DBUSERPASS /path/to/BACKUP-DIR/ --no-timestamp

innobackupex will create the BACKUP-DIR subdirectory (or fail if it already exists) and store the backup inside it.

The --defaults-file option

You can provide another configuration file to innobackupex with this option. The only limitation is that it must be the first option passed:

$ innobackupex --defaults-file=/tmp/other-my.cnf --user=DBUSER --password=DBUSERPASS /path/to/BACKUP-DIR/

Preparing a Full Backup with innobackupex

After creating a backup, the data is not ready to be restored. There might be uncommitted transactions to be undone or transactions in the logs to be replayed. Performing those pending operations makes the data files consistent; this is the purpose of the prepare stage. Once this has been done, the data is ready to be used.

To prepare a backup with innobackupex, use the --apply-log option and pass the path to the backup directory as an argument:

$ innobackupex --apply-log /path/to/BACKUP-DIR

and check the last line of the output for a confirmation on the process:

111225  1:01:57  InnoDB: Shutdown completed; log sequence number 1609228
111225 01:01:57  innobackupex: completed OK!

If it succeeded, innobackupex performed all operations needed, leaving the data ready to use immediately.

Under the hood

innobackupex started the prepare by reading the configuration from the files in the backup directory. After that, it replayed the committed transactions in the log files (some transactions could have been committed while the backup was being taken) and rolled back the uncommitted ones. Once this is done, all the information lies in the tablespace (the InnoDB files), and the log files are re-created.

This implies calling xtrabackup --prepare twice with the right binary (determined through the xtrabackup_binary file or by connecting to the server). More details of this process are shown in the xtrabackup section.

Note that this preparation is not suited for incremental backups. If you perform it on the base backup of an incremental set, you will not be able to "add" the increments. See incremental_backups_innobackupex.

Other options to consider

The --use-memory option

The prepare process can be sped up by giving it more memory. How much depends on the free RAM available on your system; the default is 100MB. In general, the more memory available to the process, the better. The amount of memory used can be specified in multiples of bytes:

$ innobackupex --apply-log --use-memory=4G /path/to/BACKUP-DIR

Restoring a Full Backup with innobackupex

For convenience, innobackupex has a --copy-back option, which performs the restoration of a backup to the server's datadir

$ innobackupex --copy-back /path/to/BACKUP-DIR

It will copy all the data-related files back to the server's datadir, determined by the server's my.cnf configuration file. You should check the last line of the output for a success message:

innobackupex: Finished copying back files.

111225 01:08:13  innobackupex: completed OK!

NOTE:

The datadir must be empty; the innobackupex --copy-back option will not copy over existing files. Also note that the MySQL server must be shut down before the restore is performed. You can't restore to the datadir of a running mysqld instance (except when importing a partial backup).

As file attributes are preserved, in most cases you will need to change the files' ownership to mysql before starting the database server, as they will be owned by the user who created the backup:

$ chown -R mysql:mysql /var/lib/mysql

Also note that all of these operations are performed as the user calling innobackupex, so you will need write permissions on the server's datadir.
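Putting the notes above together, a full restore might be sketched as follows. The paths and the service-management commands are assumptions for a typical Linux install; adapt them to your system:

```shell
# Stop the server first: --copy-back cannot run against a live mysqld
$ sudo service mysql stop

# The datadir must be empty before copying the backup back
# (double-check the path before removing anything!)
$ sudo rm -rf /var/lib/mysql/*

# Copy the prepared backup into the datadir read from my.cnf
$ innobackupex --copy-back /path/to/BACKUP-DIR

# Restore ownership, then start the server
$ sudo chown -R mysql:mysql /var/lib/mysql
$ sudo service mysql start
```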

Other Types of Backups

Incremental Backups with innobackupex

Not all information changes between backups; the incremental backup strategy uses this fact to reduce storage needs and the duration of taking a backup.

This can be done because each InnoDB page has a log sequence number, LSN, which acts as a version number of the entire database. Every time the database is modified, this number gets incremented.

An incremental backup copies only the pages changed since a specific LSN.

Once the pages have been put together in their respective order, applying the logs will replay the changes that affected the database, yielding the data as of the moment of the most recently created backup.

Creating an Incremental Backup with innobackupex

First, you need to make a full backup as the BASE for subsequent incremental backups:

$ innobackupex /data/backups

This will create a timestamped directory in /data/backups. Assuming that the backup is taken on the last day of the month, BASE-DIR would be /data/backups/2013-03-31_23-01-18, for example.

NOTE:

You can use the innobackupex --no-timestamp option to override this behavior and the backup will be created in the given directory.

If you look at the xtrabackup_checkpoints file in BASE-DIR, you should see something like:

backup_type = full-backuped
from_lsn = 0
to_lsn = 1291135

To create an incremental backup the next day, use the --incremental option and provide the path to BASE-DIR:

$ innobackupex --incremental /data/backups --incremental-basedir=BASEDIR

and another timestamped directory will be created in /data/backups, in this example, /data/backups/2013-04-01_23-01-18 containing the incremental backup. We will call this INCREMENTAL-DIR-1.

If you look at the xtrabackup_checkpoints file in INCREMENTAL-DIR-1, you should see something like:

backup_type = incremental
from_lsn = 1291135
to_lsn = 1352113

Creating another incremental backup the next day will be analogous, but this time the previous incremental backup will be the base:

$ innobackupex --incremental /data/backups --incremental-basedir=INCREMENTAL-DIR-1

yielding (in this example) /data/backups/2013-04-02_23-01-18. We will call it INCREMENTAL-DIR-2 for simplicity.

At this point, the xtrabackup_checkpoints file in INCREMENTAL-DIR-2 should contain something like:

backup_type = incremental
from_lsn = 1352113
to_lsn = 1358967

As mentioned before, an incremental backup only copies pages with an LSN greater than a specific value. Providing the LSN directly would have produced directories with the same data inside:

innobackupex --incremental /data/backups --incremental-lsn=1291135
innobackupex --incremental /data/backups --incremental-lsn=1358967

This is a very useful way of doing an incremental backup, since the base or the last incremental backup will not always be available on the system.

WARNING:

This procedure only affects XtraDB or InnoDB-based tables. Other tables with a different storage engine, e.g. MyISAM, will be copied entirely each time an incremental backup is performed.

Preparing an Incremental Backup with innobackupex

Preparing incremental backups is a bit different from preparing full ones. This is, perhaps, the stage where most attention is needed:

· First, only the committed transactions must be replayed on each backup. This will merge the base full backup with the incremental ones.
· Then, the uncommitted transactions must be rolled back in order to have a ready-to-use backup.

If you replay the committed transactions and roll back the uncommitted ones on the base backup, you will not be able to add the incremental ones. If you do this on an incremental one, you won't be able to add data from that moment on, nor the remaining increments.

With this in mind, the procedure is very straightforward using the --redo-only option, starting with the base backup:

innobackupex --apply-log --redo-only BASE-DIR

You should see an output similar to:

120103 22:00:12 InnoDB: Shutdown completed; log sequence number 1291135
120103 22:00:12 innobackupex: completed OK!

Then, the first incremental backup can be applied to the base backup, by issuing:

innobackupex --apply-log --redo-only BASE-DIR --incremental-dir=INCREMENTAL-DIR-1

You should see an output similar to the previous one but with corresponding LSN:

120103 22:08:43 InnoDB: Shutdown completed; log sequence number 1358967
120103 22:08:43 innobackupex: completed OK!

If no --incremental-dir is set, innobackupex will use the most recent subdirectory created in the basedir.

At this moment, BASE-DIR contains the data up to the moment of the first incremental backup. Note that the full data will always be in the directory of the base backup, as we are appending the increments to it.

Repeat the procedure with the second one:

innobackupex --apply-log BASE-DIR --incremental-dir=INCREMENTAL-DIR-2

If the "completed OK!" message was shown, the final data will be in the base backup directory, BASE-DIR.

NOTE:

--redo-only should be used when merging all incrementals except the last one. That's why the previous line doesn't contain the --redo-only option. Even if --redo-only were used on the last step, the backup would still be consistent, but in that case the server would perform the rollback phase at startup.

You can use this procedure to add more increments to the base, as long as you do it in the chronological order in which the backups were taken. If you merge the incrementals in the wrong order, the backup will be useless. If you have doubts about the order in which they must be applied, you can check the xtrabackup_checkpoints file in the directory of each one, as shown at the beginning of this section.
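Checking the order by hand gets tedious with many increments. A small helper sketch that prints the LSN range recorded in each backup directory under a given root, so the from_lsn/to_lsn chain can be verified before merging (the helper name and the /data/backups layout are assumptions):

```shell
# Print "<dirname> from_lsn=N to_lsn=M" for each backup directory,
# reading the values from its xtrabackup_checkpoints file.
list_lsn_ranges() {
    for dir in "$1"/*/; do
        printf '%s ' "$(basename "$dir")"
        grep -E '^(from_lsn|to_lsn)' "$dir/xtrabackup_checkpoints" \
            | tr -d ' ' | tr '\n' ' '
        echo
    done
}

# Example (path is an assumption):
# list_lsn_ranges /data/backups
```

Each increment's from_lsn should match the previous backup's to_lsn.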

Once you merge the base with all the increments, you can prepare it to roll back the uncommitted transactions:

innobackupex --apply-log BASE-DIR

Now your backup is ready to be used immediately after restoring it. This preparation step is optional. However, if you restore without doing the prepare, the database server will begin to roll back uncommitted transactions, the same work it would do after a crash. This delays the database server's startup, and you can avoid the delay by doing the prepare.

Note that the iblog* files will not be created by innobackupex; if you want them to be created, use xtrabackup --prepare on the directory. Otherwise, the files will be created by the server once it is started.
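The full merge sequence described in this section can be sketched as follows, with the directory names following the earlier example:

```shell
# 1. Prepare the base backup, keeping it ready to accept increments
innobackupex --apply-log --redo-only BASE-DIR

# 2. Merge every increment except the last with --redo-only,
#    in chronological order
innobackupex --apply-log --redo-only BASE-DIR --incremental-dir=INCREMENTAL-DIR-1

# 3. Merge the last increment without --redo-only
innobackupex --apply-log BASE-DIR --incremental-dir=INCREMENTAL-DIR-2

# 4. Optionally roll back uncommitted transactions
innobackupex --apply-log BASE-DIR
```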

Restoring Incremental Backups with innobackupex

After preparing the incremental backups, the base directory contains the same data as a full backup. To restore it you can use:

innobackupex --copy-back BASE-DIR

and you may have to change the ownership, as detailed in restoring_a_backup_ibk.

Incremental Streaming Backups using xbstream and tar

Incremental streaming backups can be performed with the xbstream streaming option. Currently, backups are packed in the custom xbstream format. This feature also requires taking a BASE backup first.

Taking a base backup:

innobackupex /data/backups

Taking a local incremental backup and streaming it into a file:

innobackupex --incremental --incremental-lsn=LSN-number --stream=xbstream ./ > incremental.xbstream

Unpacking the backup:

xbstream -x < incremental.xbstream

Taking an incremental backup, streaming it to a remote server and unpacking it there:

innobackupex --incremental --incremental-lsn=LSN-number --stream=xbstream ./ | \
ssh user@hostname "xbstream -x -C /backup-dir/"

Partial Backups

Percona XtraBackup features partial backups, which means that you may backup only some specific tables or databases. The tables you back up must be in separate tablespaces, as a result of being created or altered after you enabled the innodb_file_per_table option on the server.

There is only one caveat about partial backups: do not copy back the prepared backup. Restoring partial backups should be done by importing the tables, not by using the traditional --copy-back option. Although there are some scenarios where restoring can be done by copying back the files, this may lead to database inconsistencies in many cases and it is not the recommended way to do it.

Creating Partial Backups

There are three ways of specifying which part of the whole data will be backed up: regular expressions (--include), enumerating the tables in a file (--tables-file) or providing a list of databases (--databases).

Using the --include option

The regular expression provided to this option will be matched against the fully qualified table name, including the database name, in the form databasename.tablename.

For example,

$ innobackupex --include='^mydatabase[.]mytable' /path/to/backup

The command above will create a timestamped directory with the usual files that innobackupex creates, but only the data files related to the tables matched.

Note that this option is passed to xtrabackup --tables and is matched against each table of each database; the directories of each database will be created even if they are empty.

Using the --tables-file option

The text file provided to this option (as a path) can contain multiple table names, one per line, in the databasename.tablename format.

For example,

$ echo "mydatabase.mytable" > /tmp/tables.txt
$ innobackupex --tables-file=/tmp/tables.txt /path/to/backup

The command above will create a timestamped directory with the usual files that innobackupex creates, but only containing the data-files related to the tables specified in the file.

This option is passed to xtrabackup --tables-file and, unlike the --tables option, only directories of databases of the selected tables will be created.

Using the --databases option

This option accepts either a space-separated list of the databases and tables to back up - in the databasename[.tablename] form - or a file containing the list, one element per line.

For example,

$ innobackupex --databases="mydatabase.mytable mysql" /path/to/backup

The command above will create a timestamped directory with the usual files that innobackupex creates, but only containing the data-files related to mytable in the mydatabase directory and the mysql directory with the entire mysql database.

Preparing Partial Backups

For preparing partial backups, the procedure is analogous to restoring individual tables: apply the log and use the --export option:

$ innobackupex --apply-log --export /path/to/partial/backup

You may see warnings in the output about tables that don't exist. This is because InnoDB-based engines store their data dictionary inside the tablespace files, besides the .frm files. innobackupex will use xtrabackup to remove the missing tables (those that weren't selected in the partial backup) from the data dictionary in order to avoid future warnings or errors:

111225  0:54:06  InnoDB: Error: table 'mydatabase/mytablenotincludedinpartialb'
InnoDB: in InnoDB data dictionary has tablespace id 6,
InnoDB: but tablespace with that id or name does not exist. It will be removed from data dictionary.

You should also see the notification of the creation of a file needed for importing (.exp file) for each table included in the partial backup:

xtrabackup: export option is specified.
xtrabackup: export metadata of table 'employees/departments' to file `.//departments.exp` (2 indexes)
xtrabackup:     name=PRIMARY, id.low=80, page=3
xtrabackup:     name=dept_name, id.low=81, page=4

Note that you can use the --export option with --apply-log to an already-prepared backup in order to create the .exp files.

Finally, check for the confirmation message in the output:

111225 00:54:18  innobackupex: completed OK!

Restoring Partial Backups

Restoring should be done by restoring individual tables in the partial backup to the server.

It can also be done by copying back the prepared backup to a "clean" datadir (in that case, make sure to include the mysql database). The system databases can be created with:

$ sudo mysql_install_db --user=mysql

Compact Backups

When backing up InnoDB tables it is possible to omit the secondary index pages. This makes the backups more compact, so they take less space on disk. The downside is that the prepare process takes longer, as those secondary indexes need to be recreated. The difference in backup size depends on the size of the secondary indexes.

For example, the sizes of a full backup taken without and with the --compact option:

#backup size without --compact
2.0G  2013-02-01_10-18-38

#backup size taken with --compact option
1.4G  2013-02-01_10-29-48

NOTE:

Compact backups are not supported for the system tablespace, so in order for them to work correctly, the innodb_file_per_table option should be enabled.

This feature was introduced in Percona XtraBackup 2.1.

Creating Compact Backups

To make a compact backup innobackupex needs to be started with the --compact option:

$ innobackupex --compact /data/backups

This will create a timestamped directory in /data/backups.

NOTE:

You can use the innobackupex --no-timestamp option to override this behavior and the backup will be created in the given directory.

If you look at the xtrabackup_checkpoints file in BASE-DIR, you should see something like:

backup_type = full-backuped
from_lsn = 0
to_lsn = 2888984349
last_lsn = 2888984349
compact = 1

When --compact wasn't used, the compact value will be 0. This makes it easy to check whether a backup contains the secondary index pages or not.
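That check can be scripted. A hypothetical helper that succeeds only if the given backup directory was taken with --compact (the function name and example path are assumptions):

```shell
# Return success if the backup directory's xtrabackup_checkpoints file
# records compact = 1, i.e. secondary index pages were omitted.
is_compact_backup() {
    grep -q '^compact = 1' "$1/xtrabackup_checkpoints"
}

# Example (path is an assumption):
# is_compact_backup /data/backups/2013-02-01_10-29-48 && echo "compact backup"
```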

Preparing Compact Backups

Preparing a compact backup requires rebuilding the indexes as well. In order to prepare the backup, a new option, --rebuild-indexes, should be used with --apply-log:

$ innobackupex --apply-log --rebuild-indexes /data/backups/2013-02-01_10-29-48

Besides the standard innobackupex output, the output should contain information about the indexes being rebuilt, such as:

130201 10:40:20  InnoDB: Waiting for the background threads to start
Rebuilding indexes for table sbtest/sbtest1 (space id: 10)
  Found index k_1
  Dropping 1 index(es).
  Rebuilding 1 index(es).
Rebuilding indexes for table sbtest/sbtest2 (space id: 11)
  Found index k_1
  Found index c
  Found index k
  Found index c_2
  Dropping 4 index(es).
  Rebuilding 4 index(es).

Percona XtraBackup has no way of knowing, when applying an incremental backup to a compact full one, whether more incremental backups will be applied to it later. Therefore, rebuilding the indexes needs to be explicitly requested by the user whenever a full backup with some incremental backups merged into it is ready to be restored. Rebuilding the indexes unconditionally on every incremental backup merge is not an option, since it is an expensive operation.

NOTE:

To process individual tables in parallel when rebuilding indexes, the innobackupex --rebuild-threads option can be used to specify the number of threads started by Percona XtraBackup when rebuilding secondary indexes with --apply-log --rebuild-indexes. Each thread rebuilds the indexes for a single .ibd tablespace at a time.

Restoring Compact Backups

innobackupex has a --copy-back option, which performs the restoration of a backup to the server's datadir

$ innobackupex --copy-back /path/to/BACKUP-DIR

It will copy all the data-related files back to the server's datadir, determined by the server's my.cnf configuration file. You should check the last line of the output for a success message:

innobackupex: Finished copying back files.
130201 11:08:13  innobackupex: completed OK!

Other Reading

·
Feature preview: Compact backups in Percona XtraBackup

Encrypted Backups

Percona XtraBackup supports encrypted backups. This feature was introduced in Percona XtraBackup 2.1. It can be used to encrypt or decrypt local or streaming backups with the xbstream option (streaming tar backups are not supported) in order to add another layer of protection to the backups. Encryption is done with the libgcrypt library.

NOTE:

Encryption related options are currently ignored by innobackupex when specified in my.cnf.

Creating Encrypted Backups

To make an encrypted backup, the following options need to be specified (the --encrypt-key and --encrypt-key-file options are mutually exclusive, i.e. just one of them needs to be provided):

· --encrypt=ALGORITHM - currently supported algorithms are AES128, AES192 and AES256
· --encrypt-key=ENCRYPTION_KEY - an encryption key of the proper length. It is not recommended to use this option where there is uncontrolled access to the machine, as the command line, and thus the key, can be viewed as part of the process info.
· --encrypt-key-file=KEYFILE - the name of a file the raw key of the appropriate length can be read from. The file must be a simple binary (or text) file that contains exactly the key to be used.

Either the --encrypt-key option or the --encrypt-key-file option can be used to specify the encryption key. An encryption key can be generated with a command like:

$ openssl enc -aes-256-cbc -pass pass:Password -P -md sha1

Output of that command should look like this:

salt=9464A264486EEC69
key=DDD3A1B6BC90B9A9B631913CF30E0336A2571BA854E2D65CF92A6D0BDBCBB251
iv =A1EDC73815467C083B0869508406637E

In this case we can use the iv value as the key.

Using the --encrypt-key option

An example of the innobackupex command using --encrypt-key would look like this:

$ innobackupex --encrypt=AES256 --encrypt-key="A1EDC73815467C083B0869508406637E" /data/backups

Using the --encrypt-key-file option

An example of the innobackupex command using --encrypt-key-file would look like this:

$ innobackupex --encrypt=AES256 --encrypt-key-file=/data/backups/keyfile /data/backups

NOTE:

Depending on the text editor used to create the KEYFILE, the file can in some cases contain a CRLF, which will cause the key size to grow and thus make it invalid. The suggested way to create the file is: echo -n "A1EDC73815467C083B0869508406637E" > /data/backups/keyfile

Both of these examples will create a timestamped directory in /data/backups containing the encrypted backup.

NOTE:

You can use the innobackupex --no-timestamp option to override this behavior and the backup will be created in the given directory.

Optimizing the encryption process

Two new options have been introduced with encrypted backups that can be used to speed up the encryption process: --encrypt-threads and --encrypt-chunk-size. The --encrypt-threads option specifies the number of threads to use for encryption in parallel. The --encrypt-chunk-size option specifies the size (in bytes) of the working encryption buffer for each encryption thread (default is 64K).
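For example, a hedged sketch combining both tuning options (the thread count and buffer size are illustrative values, tune them for your hardware):

```shell
# 4 encryption threads, each with a 1MB working buffer
$ innobackupex --encrypt=AES256 --encrypt-key-file=/data/backups/keyfile \
    --encrypt-threads=4 --encrypt-chunk-size=1M /data/backups
```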

Decrypting Encrypted Backups

Backups can be decrypted with xbcrypt. The following one-liner can be used to decrypt an entire folder:

$ for i in `find . -iname "*\.xbcrypt"`; do xbcrypt -d --encrypt-key-file=/root/secret_key --encrypt-algo=AES256 < $i > $(dirname $i)/$(basename $i .xbcrypt) && rm $i; done

In Percona XtraBackup 2.1.4, a new innobackupex --decrypt option was implemented that can be used to decrypt the backups:

$ innobackupex --decrypt=AES256 --encrypt-key="A1EDC73815467C083B0869508406637E" /data/backups/2013-08-01_08-31-35/

Using innobackupex --decrypt will remove the original encrypted files and leave the results in the same location.

NOTE:

innobackupex --parallel can be used with the innobackupex --decrypt option to decrypt multiple files simultaneously.

When the files have been decrypted, the backup can be prepared.
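Combining the two options from the note above (the thread count is illustrative):

```shell
$ innobackupex --decrypt=AES256 --encrypt-key="A1EDC73815467C083B0869508406637E" \
    --parallel=4 /data/backups/2013-08-01_08-31-35/
```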

Preparing Encrypted Backups

After the backups have been decrypted, they can be prepared the same way as standard full backups, with the --apply-log option:

$ innobackupex --apply-log /data/backups/2013-08-01_08-31-35/

Restoring Encrypted Backups

innobackupex has a --copy-back option, which performs the restoration of a backup to the server's datadir

$ innobackupex --copy-back /path/to/BACKUP-DIR

It will copy all the data-related files back to the server's datadir, determined by the server's my.cnf configuration file. You should check the last line of the output for a success message:

innobackupex: Finished copying back files.
130801 11:08:13  innobackupex: completed OK!

Other Reading

·
The Libgcrypt Reference Manual

Advanced Features

Streaming and Compressing Backups

Streaming mode, supported by Percona XtraBackup, sends the backup to STDOUT in a special tar or xbstream format instead of copying the files to the backup directory.

This allows you to use other programs to filter the output of the backup, providing greater flexibility for storage of the backup. For example, compression is achieved by piping the output to a compression utility. One of the benefits of streaming backups and using Unix pipes is that the backups can be automatically encrypted.

To use the streaming feature, you must use the --stream option, providing the format of the stream (tar or xbstream) and where to store the temporary files:

$ innobackupex --stream=tar /tmp

innobackupex starts xtrabackup in --log-stream mode in a child process, and redirects its log to a temporary file. It then uses xbstream to stream all of the data files to STDOUT, in a special xbstream format. See ../xbstream/xbstream for details. After it finishes streaming all of the data files to STDOUT, it stops xtrabackup and streams the saved log file too.

When compression is enabled, xtrabackup compresses all output data, except the metadata and non-InnoDB files, using the specified compression algorithm. The only currently supported algorithm is quicklz. The resulting files have the qpress archive format, i.e. every *.qp file produced by xtrabackup is essentially a one-file qpress archive, and can be extracted and uncompressed by the qpress file archiver, which is available from the Percona Software repositories.

Using xbstream as the stream option, backups can be copied and compressed in parallel, which can significantly speed up the backup process. If backups were both compressed and encrypted, they will need to be decrypted first in order to be uncompressed.
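After extracting a compressed stream, the *.qp files still need to be uncompressed. A sketch mirroring the xbcrypt one-liner used elsewhere in this manual, assuming qpress is installed:

```shell
# Uncompress every .qp archive in place, then remove the compressed copy
$ for bf in `find . -iname "*\.qp"`; do qpress -d $bf $(dirname $bf) && rm $bf; done
```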

Examples using xbstream

Store the complete backup directly to a single file:

$ innobackupex --stream=xbstream /root/backup/ > /root/backup/backup.xbstream

To stream and compress the backup:

$ innobackupex --stream=xbstream --compress /root/backup/ > /root/backup/backup.xbstream

To unpack the backup to the /root/backup/ directory:

$ xbstream -x < backup.xbstream -C /root/backup/

To send the compressed backup to another host and unpack it:

$ innobackupex --compress --stream=xbstream /root/backup/ | ssh user@otherhost "xbstream -x -C /root/backup/"

Examples using tar

Store the complete backup directly to a tar archive:

$ innobackupex --stream=tar /root/backup/ > /root/backup/out.tar

To send the tar archive to another host:

$ innobackupex --stream=tar ./ | ssh user@destination "cat - > /data/backups/backup.tar"

WARNING:

To extract Percona XtraBackup's archive you must use tar with the -i option:

$ tar -xizf backup.tar.gz

Compress with your preferred compression tool:

$ innobackupex --stream=tar ./ | gzip - > backup.tar.gz
$ innobackupex --stream=tar ./ | bzip2 - > backup.tar.bz2

Note that the streamed backup will need to be prepared before restoration. Streaming mode does not prepare the backup.
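For example, a backup streamed with xbstream and --compress could be unpacked, decompressed and prepared with a sequence like the following sketch. The commands are built as strings for illustration only, the paths are hypothetical, and --decompress requires the qpress utility:

```shell
# Sketch of restoring a streamed, compressed backup (hypothetical paths).
# Step 1: unpack the stream, step 2: decompress the *.qp files,
# step 3: prepare the backup so it can be restored.
unpack='xbstream -x < /root/backup/backup.xbstream -C /root/backup/'
decompress='innobackupex --decompress /root/backup/'
prepare='innobackupex --apply-log /root/backup/'
printf '%s\n' "$unpack" "$decompress" "$prepare"
```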

Taking Backups in Replication Environments

There are options specific to backing up from a replication slave.

The --slave-info option

This option is useful when backing up a replication slave server. It prints the binary log position and name of the master server. It also writes this information to the xtrabackup_slave_info file as a CHANGE MASTER statement.

This is useful for setting up a new slave for this master: it can be done by starting a slave server on this backup and issuing the statement saved in the xtrabackup_slave_info file. More details of this procedure can be found in replication_howto.

The --safe-slave-backup option

In order to assure a consistent replication state, this option stops the slave SQL thread and waits to start the backup until Slave_open_temp_tables in SHOW STATUS is zero. If there are no open temporary tables, the backup will take place; otherwise the SQL thread will be started and stopped repeatedly until there are no open temporary tables. The backup will fail if Slave_open_temp_tables does not become zero after --safe-slave-backup-timeout seconds (300 by default). The slave SQL thread will be restarted when the backup finishes.
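The waiting logic can be illustrated with a small shell sketch; mock_slave_open_temp_tables below is a hypothetical stand-in for querying Slave_open_temp_tables via SHOW STATUS, not the real implementation (which also stops and restarts the slave SQL thread between checks):

```shell
# Illustration of the --safe-slave-backup wait loop (not the real code).
# The mock simulates SHOW STATUS LIKE 'Slave_open_temp_tables':
# it reports 2, then 1, then 0 open temporary tables.
COUNTER=3
mock_slave_open_temp_tables() {
    COUNTER=$((COUNTER - 1))
    OPEN_TMP=$COUNTER
}

timeout=300   # --safe-slave-backup-timeout default, in seconds
waited=0
mock_slave_open_temp_tables
while [ "$OPEN_TMP" -ne 0 ]; do
    waited=$((waited + 1))
    if [ "$waited" -ge "$timeout" ]; then
        echo "backup failed: temporary tables still open" >&2
        exit 1
    fi
    mock_slave_open_temp_tables
done
echo "Slave_open_temp_tables reached zero after $waited waits; backup can start"
```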

Using this option is always recommended when taking backups from a slave server.

WARNING:

Make sure your slave is a true replica of the master before using it as a source for backup. A good tool to validate a slave is pt-table-checksum.

Accelerating the backup process

Accelerating with --parallel copy and --compress-threads

When performing a local backup or a streaming backup with the xbstream option, multiple files can be copied concurrently by using the --parallel option. This option specifies the number of threads created by xtrabackup to copy data files.

To take advantage of this option, either the multiple tablespaces option must be enabled (innodb_file_per_table) or the shared tablespace must be stored in multiple ibdata files via the innodb_data_file_path option. Having multiple files for the database (or splitting one into many) doesn't have a measurable impact on performance.

As this feature is implemented at a file level, concurrent file transfer can sometimes increase I/O throughput when doing a backup on highly fragmented data files, due to the overlap of a greater number of random read requests. You should consider tuning the filesystem also to obtain the maximum performance (e.g. checking fragmentation).

If the data is stored on a single file, this option will have no effect.

To use this feature, simply add the option to a local backup, for example:

$ innobackupex --parallel=4 /path/to/backup

By using xbstream in streaming backups, you can additionally speed up the compression process with the --compress-threads option. This option specifies the number of threads created by xtrabackup for parallel data compression. The default value for this option is 1.

To use this feature, simply add the option to a local backup, for example:

$ innobackupex --stream=xbstream --compress --compress-threads=4 ./ > backup.xbstream

Before applying logs, compressed files will need to be uncompressed.

Accelerating with --rsync option

In order to speed up the backup process and to minimize the time FLUSH TABLES WITH READ LOCK is blocking writes, the innobackupex --rsync option should be used. When this option is specified, innobackupex uses rsync to copy all non-InnoDB files instead of spawning a separate cp for each file, which can be much faster for servers with a large number of databases or tables. innobackupex will call rsync twice, once before the FLUSH TABLES WITH READ LOCK and once during it, to minimize the time the read lock is held. During the second call, rsync will only synchronize the changes to non-transactional data (if any) made since the first call, which was performed before the FLUSH TABLES WITH READ LOCK.

NOTE:

This option cannot be used together with innobackupex --remote-host or innobackupex --stream options.

Throttling backups with innobackupex

Although innobackupex does not block your database's operation, any backup can add load to the system being backed up. On systems that do not have much spare I/O capacity, it might be helpful to throttle the rate at which innobackupex reads and writes InnoDB data. You can do this with the --throttle option.

This option is passed directly to the xtrabackup binary and only limits the operations on the logs and files of InnoDB tables. It has no effect on reading or writing the files of tables using other storage engines.

One way of checking the current I/O activity on a system is with the iostat command. See throttling_backups_xbk for details of how throttling works.

NOTE:

The innobackupex --throttle option works only during the backup phase, i.e., it will not work with the innobackupex --apply-log and innobackupex --copy-back options.

The --throttle option is similar to the --sleep option in mysqlbackup and should be used instead of it, as --sleep will be ignored.

Restoring Individual Tables

In server versions prior to 5.6, it is not possible to copy tables between servers by copying the files, even with innodb_file_per_table. However, with Percona XtraBackup you can export individual tables from any InnoDB database and import them into Percona Server with XtraDB or MySQL 5.6 (the source doesn't have to be XtraDB or MySQL 5.6, but the destination does). This only works on individual .ibd files; a table that is not contained in its own .ibd file cannot be exported.

NOTE:

If you're running a Percona Server version older than 5.5.10-20.1, the variable innodb_expand_import should be used instead of innodb_import_table_from_xtrabackup.

Exporting tables

Exporting is done in the preparation stage, not at the moment of creating the backup. Once a full backup is created, prepare it with the --export option:

$ innobackupex --apply-log --export /path/to/backup

This will create, for each InnoDB table with its own tablespace, a file with the .exp extension. The output of this procedure will contain:

..
xtrabackup: export option is specified.
xtrabackup: export metadata of table 'mydatabase/mytable' to file
`./mydatabase/mytable.exp` (1 indexes)
..

Now you should see a .exp file in the target directory:

$ find /data/backups/mysql/ -name export_test.*
/data/backups/mysql/test/export_test.exp
/data/backups/mysql/test/export_test.ibd
/data/backups/mysql/test/export_test.cfg

These three files are all you need to import the table into a server running Percona Server with XtraDB or MySQL 5.6.

NOTE:

MySQL uses a .cfg file which contains an InnoDB dictionary dump in a special format. This format is different from the .exp format used by XtraDB for the same purpose. Strictly speaking, a .cfg file is not required to import a tablespace into MySQL 5.6 or Percona Server 5.6. A tablespace will be imported successfully even if it is from another server, but InnoDB will do schema validation if the corresponding .cfg file is present in the same directory.

Each .exp (or .cfg) file will be used for importing that table.

NOTE:

InnoDB does a slow shutdown (i.e. full purge + change buffer merge) on --export, otherwise the tablespaces wouldn't be consistent and thus couldn't be imported. All the usual performance considerations apply: sufficient buffer pool (i.e. --use-memory, 100MB by default) and fast enough storage, otherwise it can take a prohibitive amount of time for export to complete.
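Since the export prepare does a slow shutdown, giving it a larger buffer pool via --use-memory can speed it up considerably. A sketch, with the 4G value and the backup path being illustrative (the command is built as a string only):

```shell
# Illustrative only: prepare with --export and a larger buffer pool
# (the 4G value and the path are hypothetical, tune them to your system).
cmd='innobackupex --apply-log --export --use-memory=4G /path/to/backup'
echo "$cmd"
```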

Importing tables

To import a table into another server, first create a new table with the same structure as the one that will be imported:

OTHERSERVER|mysql> CREATE TABLE mytable (...) ENGINE=InnoDB;

then discard its tablespace:

OTHERSERVER|mysql> ALTER TABLE mydatabase.mytable DISCARD TABLESPACE;

After this, copy the mytable.ibd and mytable.exp (or mytable.cfg, if importing into MySQL 5.6) files to the database's home directory, and import the tablespace:

OTHERSERVER|mysql> ALTER TABLE mydatabase.mytable IMPORT TABLESPACE;

Once this is executed, data in the imported table will be available.

Point-In-Time recovery

Recovering up to a particular moment in a database's history can be done with innobackupex and the binary logs of the server.

Note that the binary log contains the operations that modified the database starting from a point in the past. You need a full datadir as a base, and then you can apply a series of operations from the binary log to make the data match what it was at the desired point in time.

For taking the snapshot, we will use innobackupex for a full backup:

$ innobackupex /path/to/backup --no-timestamp

(the --no-timestamp option is for convenience in this example) and we will prepare it to be ready for restoration:

$ innobackupex --apply-log /path/to/backup

For more details on these procedures, see creating_a_backup_ibk and preparing_a_backup_ibk.

Now, suppose that time has passed and you want to restore the database to a certain point in the past, bearing in mind that the point where the snapshot was taken constrains how far back you can go.

To find out the state of binary logging on the server, execute the following queries:

mysql> SHOW BINARY LOGS;
+------------------+-----------+
| Log_name         | File_size |
+------------------+-----------+
| mysql-bin.000001 |       126 |
| mysql-bin.000002 |      1306 |
| mysql-bin.000003 |       126 |
| mysql-bin.000004 |       497 |
+------------------+-----------+

and

mysql> SHOW MASTER STATUS;
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000004 |      497 |              |                  |
+------------------+----------+--------------+------------------+

The first query will tell you which files contain the binary log, and the second one which file is currently being used to record changes and the current position within it. Those files are usually stored in the datadir (unless another location is specified when the server is started with the --log-bin option).

To find out the position of the snapshot, look at the xtrabackup_binlog_info file in the backup directory:

$ cat /path/to/backup/xtrabackup_binlog_info
mysql-bin.000003      57

This tells you which binary log file was in use at the moment of the backup and the position within it. That position will be the effective one when you restore the backup:

$ innobackupex --copy-back /path/to/backup

As the restoration will not affect the binary log files (you may need to adjust file permissions; see restoring_a_backup_ibk), the next step is extracting the queries from the binary log with mysqlbinlog, starting from the position of the snapshot, and redirecting the output to a file:

$ mysqlbinlog /path/to/datadir/mysql-bin.000003 /path/to/datadir/mysql-bin.000004 \
    --start-position=57 > mybinlog.sql

Note that if you have multiple files for the binary log, as in the example, you have to extract the queries with a single mysqlbinlog invocation, as shown above.
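The file name and position can be extracted from xtrabackup_binlog_info with standard tools. The sketch below parses a sample copy of the file and builds the corresponding mysqlbinlog command; the file contents and the datadir path are hypothetical:

```shell
# Create a sample xtrabackup_binlog_info (tab-separated, as in the example above).
printf 'mysql-bin.000003\t57\n' > /tmp/xtrabackup_binlog_info

# Extract the binlog file name and starting position.
binlog_file=$(awk '{print $1}' /tmp/xtrabackup_binlog_info)
binlog_pos=$(awk '{print $2}' /tmp/xtrabackup_binlog_info)

# Build the replay command; append any later binlog files to the same invocation.
cmd="mysqlbinlog /path/to/datadir/$binlog_file --start-position=$binlog_pos"
echo "$cmd"
```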

Inspect the file with the queries to determine which position or date corresponds to the desired point in time. Once determined, pipe it to the server. Assuming the point is 11-12-25 01:00:00:

$ mysqlbinlog /path/to/datadir/mysql-bin.000003 /path/to/datadir/mysql-bin.000004 \
    --start-position=57 --stop-datetime="11-12-25 01:00:00" | mysql -u root -p

and the database will be rolled forward up to that Point-In-Time.

Improved FLUSH TABLES WITH READ LOCK handling

When taking backups, FLUSH TABLES WITH READ LOCK is used before the non-InnoDB files are backed up to ensure that the backup is consistent. FLUSH TABLES WITH READ LOCK can be issued even while a query has been running for hours; in that case everything will be locked up in the Waiting for table flush or Waiting for master to send event state. Killing the FLUSH TABLES WITH READ LOCK does not correct the issue either. The only way to get the server operating normally again is to kill the long-running queries that blocked it to begin with. This means that if there are long-running queries, FLUSH TABLES WITH READ LOCK can get stuck, leaving the server in read-only mode while it waits for these queries to complete.

In order to prevent this from happening two things have been implemented:

·
innobackupex can wait for a good moment to issue the global lock.
·
innobackupex can kill all queries, or only SELECT queries, which are preventing the global lock from being acquired.

Waiting for queries to finish

A good moment to issue a global lock is when there are no long queries running. But waiting for a good moment for an extended period of time isn't always a good approach, as it can lengthen the time needed for the backup. To prevent innobackupex from waiting too long to issue FLUSH TABLES WITH READ LOCK, the innobackupex --lock-wait-timeout option can be used to limit the waiting time. If a good moment to issue the lock does not occur within this time, innobackupex will give up and exit with an error message, and the backup will not be taken. A zero value for this option turns off the feature (this is the default).

Another possibility is to specify the type of query to wait on, using innobackupex --lock-wait-query-type. Possible values are all and update. When all is used, innobackupex will wait for all long-running queries (with execution time longer than allowed by innobackupex --lock-wait-threshold) to finish before issuing FLUSH TABLES WITH READ LOCK. When update is used, innobackupex will wait for UPDATE/ALTER/REPLACE/INSERT queries to finish.

Although the time needed for a specific query to complete is hard to predict, we can assume that queries which have already been running for a long time are not likely to finish soon, while queries which have been running for only a short time are likely to finish shortly. innobackupex uses the value of the innobackupex --lock-wait-threshold option to decide which queries are long-running and likely to block the global lock for a while. In order to use this option, the xtrabackup user should have the PROCESS and SUPER privileges.

Killing the blocking queries

The second option is to kill all the queries which prevent the global lock from being acquired. In this case, all queries which have been running longer than the FLUSH TABLES WITH READ LOCK are potential blockers. Although all queries can be killed, additional time can be granted for short-running queries to complete, specified by the innobackupex --kill-long-queries-timeout option. This option specifies the time queries are given to complete; after it is reached, all the remaining running queries will be killed. The default value is zero, which turns this feature off.

The innobackupex --kill-long-query-type option can be used to specify whether all queries, or only SELECT queries, that are preventing the global lock from being acquired should be killed. In order to use this option, the xtrabackup user should have the PROCESS and SUPER privileges.

Options summary

·
--lock-wait-timeout=N (seconds) - how long to wait for a good moment. Default is 0 (do not wait).
·
--lock-wait-query-type={all|update} - which long queries should be finished before FLUSH TABLES WITH READ LOCK is run. Default is all.
·
--lock-wait-threshold=N (seconds) - how long a query should be running before we consider it long-running and a potential blocker of the global lock.
·
--kill-long-queries-timeout=N (seconds) - how much time to give queries to complete after FLUSH TABLES WITH READ LOCK is issued, before starting to kill them. Default is 0 (do not kill).
·
--kill-long-query-type={all|select} - which queries should be killed once kill-long-queries-timeout has expired.

Example

Running innobackupex with the following options:
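The concrete options are not shown here; reconstructed from the description that follows (3 minutes of waiting, a 40-second threshold, a 20-second kill timeout, killing all query types), the invocation would look roughly like this, with a hypothetical backup path (built as a string for illustration):

```shell
# Reconstructed from the surrounding description; the backup path is hypothetical.
cmd='innobackupex --lock-wait-timeout=180 --lock-wait-threshold=40 --lock-wait-query-type=all --kill-long-queries-timeout=20 --kill-long-query-type=all /data/backup'
echo "$cmd"
```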

will cause innobackupex to spend no longer than 3 minutes waiting for all queries older than 40 seconds to complete. After FLUSH TABLES WITH READ LOCK is issued, innobackupex will wait 20 seconds for the lock to be acquired. If the lock is still not acquired after 20 seconds, it will kill all queries which have been running longer than the FLUSH TABLES WITH READ LOCK.

Version Information

This feature has been implemented in Percona XtraBackup 2.1.4.

Store backup history on the server

Percona XtraBackup supports storing backup history on the server. This feature was implemented in Percona XtraBackup 2.2 to provide users with additional information about the backups that are being taken. The backup history information is stored in the PERCONA_SCHEMA.xtrabackup_history table.

To use this feature three new innobackupex options have been implemented:

·
innobackupex --history=<name>: This option enables the history feature and allows the user to specify a backup series name that will be placed within the history record.
·
innobackupex --incremental-history-name=<name>: This option allows an incremental backup to be made based on a specific history series by name. innobackupex will search the history table for the most recent (highest to_lsn) backup in the series and use its to_lsn value as the starting LSN. It is mutually exclusive with the innobackupex --incremental-history-uuid, innobackupex --incremental-basedir and innobackupex --incremental-lsn options. If no valid LSN can be found (no series by that name), innobackupex will return with an error.
·
innobackupex --incremental-history-uuid=<uuid>: This option allows an incremental backup to be made based on a specific history record identified by UUID. innobackupex will search the history table for the record matching the UUID and use its to_lsn value as the starting LSN. It is mutually exclusive with the innobackupex --incremental-basedir, innobackupex --incremental-lsn and innobackupex --incremental-history-name options. If no valid LSN can be found (no record with that UUID, or a missing to_lsn), innobackupex will return with an error.
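As an illustration, a full backup recording a history series, followed by an incremental that resumes from that series' most recent to_lsn, might look like the sketch below. The series name "nightly" and the path are hypothetical, and the commands are built as strings only:

```shell
# Hypothetical series name and backup path; strings only, nothing is executed.
full='innobackupex --history=nightly /data/backups'
incr='innobackupex --incremental --incremental-history-name=nightly /data/backups'
printf '%s\n%s\n' "$full" "$incr"
```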

NOTE:

The backup that is currently being performed will NOT exist in the xtrabackup_history table within the resulting backup set, as the record is not added to that table until after the backup has been taken.

If you want access to the backup history outside of your backup set in the case of some catastrophic event, you will need to perform a mysqldump, a partial backup or a SELECT * on the history table after innobackupex completes, and store the results with your backup set.

Privileges

The user performing the backup will need the following privileges:

·
CREATE privilege in order to create the PERCONA_SCHEMA.xtrabackup_history database and table.
·
INSERT privilege in order to add history records to the PERCONA_SCHEMA.xtrabackup_history table.
·
SELECT privilege in order to use innobackupex --incremental-history-name or innobackupex --incremental-history-uuid, so that the feature can look up the innodb_to_lsn values in the PERCONA_SCHEMA.xtrabackup_history table.

PERCONA_SCHEMA.xtrabackup_history table

This table contains information about previous server backups. Information about a backup will only be written if the backup was taken with the innobackupex --history option.

uuid
    Unique backup id
name
    User-provided name of the backup series. There may be multiple entries with the same name, used to identify related backups in a series.
tool_name
    Name of the tool used to take the backup
tool_command
    Exact command line given to the tool, with --password and --encryption_key obfuscated
tool_version
    Version of the tool used to take the backup
ibbackup_version
    Version of the xtrabackup binary used to take the backup
server_version
    Server version on which the backup was taken
start_time
    Time at the start of the backup
end_time
    Time at the end of the backup
lock_time
    Amount of time, in seconds, spent calling and holding locks for FLUSH TABLES WITH READ LOCK
binlog_pos
    Binlog file and position at the end of FLUSH TABLES WITH READ LOCK
innodb_from_lsn
    LSN at the beginning of the backup, which can be used to determine prior backups
innodb_to_lsn
    LSN at the end of the backup, which can be used as the starting LSN for the next incremental
partial
    Is this a partial backup; if N, it is a full backup
incremental
    Is this an incremental backup
format
    Description of the result format (file, tar, xbstream)
compact
    Is this a compact backup
compressed
    Is this a compressed backup
encrypted
    Is this an encrypted backup

Limitations

·
innobackupex --history option must be specified only on the innobackupex command line and not within a configuration file in order to be effective.
·
innobackupex --incremental-history-name and innobackupex --incremental-history-uuid options must be specified only on the innobackupex command line and not within a configuration file in order to be effective.

Implementation

How innobackupex Works

innobackupex is a script written in Perl that wraps xtrabackup and performs the tasks where the performance and efficiency of a C program aren't needed. In this way, it provides a convenient and integrated approach to backing up in many common scenarios.

The following describes the rationale behind innobackupex actions.

Making a Backup

If no mode is specified, innobackupex will assume the backup mode.

By default, it starts xtrabackup with the --suspend-at-end option, and lets it copy the InnoDB data files. When xtrabackup finishes that, innobackupex sees it create the xtrabackup_suspended_2 file and executes FLUSH TABLES WITH READ LOCK. Then it begins copying the rest of the files.

If --ibbackup is not supplied, innobackupex will try to detect it: if the xtrabackup_binary file exists in the backup directory, it reads from that file which xtrabackup binary to use. Otherwise, it will try to connect to the database server in order to determine it. If the connection can't be established, xtrabackup will fail and you must specify the binary yourself (see ibk-right-binary).

When the binary is determined, the connection to the database server is checked. This is done by connecting, issuing a query, and closing the connection. If everything goes well, the binary is started as a child process.

If it is not an incremental backup, it connects to the server. In a replication setup, it waits for slaves if the --safe-slave-backup option is set, and it will flush all tables with READ LOCK, preventing writes to all MyISAM tables (unless the --no-lock option is specified).

NOTE:

Locking is done only for MyISAM and other non-InnoDB tables, and only after Percona XtraBackup is finished backing up all InnoDB/XtraDB data and logs.

Once this is done, the backup of the files will begin. It will back up the .frm, .MRG, .MYD, .MYI, .TRG, .TRN, .ARM, .ARZ, .CSM, .CSV, .par, and .opt files.

When all the files are backed up, it resumes xtrabackup and waits until it finishes copying the transactions that occurred while the backup was running. Then, the tables are unlocked, the slave is started (if the --safe-slave-backup option was used) and the connection to the server is closed. Then, it removes the xtrabackup_suspended_2 file and permits xtrabackup to exit.

It will also create the following files in the directory of the backup:

xtrabackup_checkpoints
containing the LSN and the type of backup;
xtrabackup_binlog_info
containing the position of the binary log at the moment of backing up;
xtrabackup_binlog_pos_innodb
containing the position of the binary log at the moment of backing up relative to InnoDB transactions;
xtrabackup_slave_info
containing the MySQL binlog position of the master server in a replication setup via SHOW SLAVE STATUS if the --slave-info option is passed;
backup-my.cnf
containing only the my.cnf options required for the backup. For example, innodb_data_file_path, innodb_log_files_in_group, innodb_log_file_size, innodb_fast_checksum, innodb_page_size, innodb_log_block_size;
xtrabackup_binary
containing the binary used for the backup;
mysql-stderr
containing the STDERR of mysqld during the process and
mysql-stdout
containing the STDOUT of the server.

Finally, the binary log position will be printed to STDERR and innobackupex will exit returning 0 if all went OK.

Note that the STDERR of innobackupex is not written to any file. You will have to redirect it to a file yourself, e.g., innobackupex OPTIONS 2> backupout.log.

Restoring a backup

To restore a backup with innobackupex the --copy-back option must be used.

innobackupex will read the variables datadir, innodb_data_home_dir, innodb_data_file_path and innodb_log_group_home_dir from my.cnf and check that the directories exist.

It will copy the MyISAM tables, indexes, etc. (.frm, .MRG, .MYD, .MYI, .TRG, .TRN, .ARM, .ARZ, .CSM, .CSV, .par and .opt files) first, the InnoDB tables and indexes next, and the log files last. It will preserve the files' attributes when copying them; you may have to change the files' ownership to mysql before starting the database server, as they will be owned by the user who created the backup.
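After the copy completes, ownership typically needs to be reset before starting the server; the datadir below is the common default and may differ on your system, so the command is built as a string for illustration:

```shell
# Hypothetical datadir; adjust to the value in your my.cnf.
datadir=/var/lib/mysql
cmd="chown -R mysql:mysql $datadir"
echo "$cmd"
```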

Alternatively, the --move-back option may be used to restore a backup. This option is similar to --copy-back with the only difference that instead of copying files it moves them to their target locations. As this option removes backup files, it must be used with caution. It is useful in cases when there is not enough free disk space to hold both data files and their backup copies.

References

The innobackupex Option Reference

This page documents all of the command-line options for the innobackupex Perl script.

Options

--apply-log
Prepare a backup in BACKUP-DIR by applying the transaction log file named xtrabackup_logfile located in the same directory. Also, create new transaction logs. The InnoDB configuration is read from the file backup-my.cnf created by innobackupex when the backup was made. innobackupex --apply-log uses InnoDB configuration from backup-my.cnf by default, or from --defaults-file, if specified. InnoDB configuration in this context means server variables that affect data format, i.e. innodb_page_size, innodb_log_block_size, etc. Location-related variables, like innodb_log_group_home_dir or innodb_data_file_path are always ignored by --apply-log, so preparing a backup always works with data files from the backup directory, rather than any external ones.
--backup-locks
This option controls if backup locks should be used instead of FLUSH TABLES WITH READ LOCK on the backup stage. The option has no effect when backup locks are not supported by the server. This option is enabled by default, disable with --no-backup-locks.
--close-files
Do not keep files open. This option is passed directly to xtrabackup. When xtrabackup opens a tablespace, it normally doesn't close its file handle, in order to handle DDL operations correctly. However, if the number of tablespaces is really huge and cannot fit within any limit, there is an option to close the file handles once they are no longer accessed. Percona XtraBackup can produce inconsistent backups with this option enabled. Use at your own risk.
--compact
Create a compact backup with all secondary index pages omitted. This option is passed directly to xtrabackup. See the xtrabackup documentation for details.
--compress
This option instructs xtrabackup to compress backup copies of InnoDB data files. It is passed directly to the xtrabackup child process. See the xtrabackup documentation for details.
--compress-threads=#
This option specifies the number of worker threads that will be used for parallel compression. It is passed directly to the xtrabackup child process. See the xtrabackup documentation for details.
--compress-chunk-size=#
This option specifies the size of the internal working buffer for each compression thread, measured in bytes. It is passed directly to the xtrabackup child process. The default value is 64K. See the xtrabackup documentation for details.
--copy-back
Copy all the files in a previously made backup from the backup directory to their original locations.
--databases=LIST
This option specifies the list of databases that innobackupex should back up. The option accepts a string argument or a path to a file that contains the list of databases to back up. The list is of the form "databasename1[.table_name1] databasename2[.table_name2] . . .". If this option is not specified, all databases containing MyISAM and InnoDB tables will be backed up. Please make sure that --databases contains all of the InnoDB databases and tables, so that all of the InnoDB .frm files are also backed up. In case the list is very long, it can be specified in a file, and the full path of the file can be specified instead of the list. (See option --tables-file.)
--decompress
Decompresses all files with the .qp extension in a backup previously made with the --compress option. The innobackupex --parallel option will allow multiple files to be decrypted and/or decompressed simultaneously. In order to decompress, the qpress utility MUST be installed and accessible within the path. This process will remove the original compressed/encrypted files and leave the results in the same location.
--decrypt=ENCRYPTION-ALGORITHM
Decrypts all files with the .xbcrypt extension in a backup previously made with --encrypt option. The innobackupex --parallel option will allow multiple files to be decrypted and/or decompressed simultaneously.
--defaults-file=[MY.CNF]
This option accepts a string argument that specifies what file to read the default MySQL options from. It is also passed directly to xtrabackup's --defaults-file option. See the xtrabackup documentation for details.
--defaults-extra-file=[MY.CNF]
This option specifies what extra file to read the default MySQL options from before the standard defaults-file. The option accepts a string argument. It is also passed directly to xtrabackup's --defaults-extra-file option. See the xtrabackup documentation for details.
--defaults-group=GROUP-NAME
This option accepts a string argument that specifies the group which should be read from the configuration file. This is needed if you use mysqld_multi. This can also be used to indicate groups other than mysqld and xtrabackup.
--encrypt=ENCRYPTION_ALGORITHM
This option instructs xtrabackup to encrypt backup copies of InnoDB data files using the algorithm specified in the ENCRYPTION_ALGORITHM. It is passed directly to the xtrabackup child process. See the xtrabackup documentation for more details.
--encrypt-key=ENCRYPTION_KEY
This option instructs xtrabackup to use the given ENCRYPTION_KEY when using the --encrypt option. It is passed directly to the xtrabackup child process. See the xtrabackup documentation for more details.
--encrypt-key-file=ENCRYPTION_KEY_FILE
This option instructs xtrabackup to use the encryption key stored in the given ENCRYPTION_KEY_FILE when using the --encrypt option. It is passed directly to the xtrabackup child process. See the xtrabackup documentation for more details.
--encrypt-threads=#
This option specifies the number of worker threads that will be used for parallel encryption. It is passed directly to the xtrabackup child process. See the xtrabackup documentation for more details.
--encrypt-chunk-size=#
This option specifies the size of the internal working buffer for each encryption thread, measured in bytes. It is passed directly to the xtrabackup child process. See the xtrabackup documentation for more details.
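Taken together, the encryption options might be combined as follows (the key file path, thread count and chunk size are illustrative):

```shell
# Create an encrypted backup using a key stored in a file,
# with four encryption worker threads.
innobackupex --encrypt=AES256 --encrypt-key-file=/data/backups/keyfile \
    --encrypt-threads=4 --encrypt-chunk-size=102400 /data/backups
```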
--export
This option is passed directly to xtrabackup's --export option. It enables exporting individual tables for import into another server. See the xtrabackup documentation for details.
--extra-lsndir=DIRECTORY
This option accepts a string argument that specifies the directory in which to save an extra copy of the xtrabackup_checkpoints file. It is passed directly to xtrabackup's --extra-lsndir option. See the xtrabackup documentation for details.
--force-non-empty-directories
When specified, this option makes --copy-back or --move-back transfer files to non-empty directories. No existing files will be overwritten: if --copy-back or --move-back has to copy a file from the backup directory that already exists in the destination directory, it will still fail with an error.
--galera-info
This option creates the xtrabackup_galera_info file, which contains the local node state at the time of the backup. This option should be used when backing up Percona XtraDB Cluster. It has no effect when backup locks are used to create the backup.
--help
This option displays a help screen and exits.
--history=NAME
This option enables the tracking of backup history in the PERCONA_SCHEMA.xtrabackup_history table. An optional history series name may be specified that will be placed with the history record for the current backup being taken.
--host=HOST
This option accepts a string argument that specifies the host to use when connecting to the database server with TCP/IP. It is passed to the mysql child process without alteration. See mysql --help for details.
--ibbackup=IBBACKUP-BINARY
This option accepts a string argument that specifies which xtrabackup binary should be used. The string should be the command used to run Percona XtraBackup. The option can be useful if the xtrabackup binary is not in your search path or working directory and the database server is not accessible at the moment. If this option is not specified, innobackupex attempts to determine the binary to use automatically. By default, xtrabackup is the command used. When the --apply-log option is specified, the binary whose name is stored in the xtrabackup_binary file in the backup directory is used, if that file exists; otherwise innobackupex will attempt to autodetect it. However, if --copy-back or --move-back is used, xtrabackup is used unless another binary is specified.
--include=REGEXP
This option is a regular expression to be matched against table names in databasename.tablename format. It is passed directly to xtrabackup's xtrabackup --tables option. See the xtrabackup documentation for details.
--incremental
This option tells xtrabackup to create an incremental backup, rather than a full one. It is passed to the xtrabackup child process. When this option is specified, either --incremental-lsn or --incremental-basedir can also be given. If neither option is given, option --incremental-basedir is passed to xtrabackup by default, set to the first timestamped backup directory in the backup base directory.
--incremental-basedir=DIRECTORY
This option accepts a string argument that specifies the directory containing the full backup that is the base dataset for the incremental backup. It is used with the --incremental option.
--incremental-dir=DIRECTORY
This option accepts a string argument that specifies the directory where the incremental backup will be combined with the full backup to make a new full backup. It is used with the --incremental option.
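A typical sequence of a full backup followed by an incremental based on it might look like this (directories are illustrative):

```shell
# Full backup as the base dataset.
innobackupex /data/backups
# Incremental backup based on the full backup taken above.
innobackupex --incremental \
    --incremental-basedir=/data/backups/2016-02-04_12-00-00 /data/backups
```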
--incremental-history-name=NAME
This option specifies the name of the backup series stored in the PERCONA_SCHEMA.xtrabackup_history history record on which to base an incremental backup. Percona XtraBackup will search the history table for the most recent (highest innodb_to_lsn) successful backup in the series and use its to_lsn value as the starting LSN for the incremental backup. This option is mutually exclusive with --incremental-history-uuid, --incremental-basedir and --incremental-lsn. If no valid LSN can be found (no series by that name, or no successful backups in that series), xtrabackup will return with an error. It is used with the --incremental option.
--incremental-history-uuid=UUID
This option specifies the UUID of the specific history record stored in PERCONA_SCHEMA.xtrabackup_history on which to base an incremental backup. This option is mutually exclusive with --incremental-history-name, --incremental-basedir and --incremental-lsn. If no valid LSN can be found (no successful record with that UUID), xtrabackup will return with an error. It is used with the --incremental option.
--incremental-lsn=LSN
This option accepts a string argument that specifies the log sequence number (LSN) to use for the incremental backup. It is used with the --incremental option, instead of specifying --incremental-basedir. For databases created by MySQL and Percona Server 5.0-series versions, specify the LSN as two 32-bit integers in high:low format. For databases created in 5.1 and later, specify the LSN as a single 64-bit integer.
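As a sketch of the 5.0-series high:low encoding, a 64-bit LSN can be split into its two 32-bit halves with shell arithmetic (the LSN value is illustrative):

```shell
# Split a 64-bit LSN into the high:low pair of 32-bit integers
# expected by 5.0-series servers. The sample value is illustrative.
lsn=5012345678
high=$(( lsn >> 32 ))
low=$(( lsn & 0xFFFFFFFF ))
printf '%s:%s\n' "$high" "$low"   # 1:717378382
```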
--kill-long-queries-timeout=SECONDS
This option specifies the number of seconds innobackupex waits between starting FLUSH TABLES WITH READ LOCK and killing those queries that block it. The default is 0 seconds, which means innobackupex will not attempt to kill any queries. In order to use this option, the database user used by xtrabackup should have PROCESS and SUPER privileges.
--kill-long-query-type=all|select
This option specifies which types of queries should be killed to unblock the global lock. Default is "all".
--lock-wait-timeout=SECONDS
This option specifies time in seconds that innobackupex should wait for queries that would block FLUSH TABLES WITH READ LOCK before running it. If there are still such queries when the timeout expires, innobackupex terminates with an error. Default is 0, in which case innobackupex does not wait for queries to complete and starts FLUSH TABLES WITH READ LOCK immediately.
--lock-wait-threshold=SECONDS
This option specifies the query run time threshold which is used by innobackupex to detect long-running queries with a non-zero value of --lock-wait-timeout. FLUSH TABLES WITH READ LOCK is not started while such long-running queries are in progress. This option has no effect if --lock-wait-timeout is 0. The default value is 60 seconds.
--lock-wait-query-type=all|update
This option specifies which types of queries are allowed to complete before innobackupex will issue the global lock. Default is all.
--log-copy-interval=#
This option specifies the time interval, in milliseconds, between checks done by the log copying thread.
--move-back
Move all the files in a previously made backup from the backup directory to their original locations. As this option removes backup files, it must be used with caution.
--no-lock
Use this option to disable table locking with FLUSH TABLES WITH READ LOCK. Use it only if ALL your tables are InnoDB and you DO NOT CARE about the binary log position of the backup. This option shouldn't be used if there are any DDL statements being executed or if any updates are happening on non-InnoDB tables (this includes the system MyISAM tables in the mysql database), otherwise it could lead to an inconsistent backup. If you are considering using --no-lock because your backups are failing to acquire the lock, this could be because of incoming replication events preventing the lock from succeeding. Please try using --safe-slave-backup to momentarily stop the replication slave thread; this may allow the backup to succeed without resorting to this option. xtrabackup_binlog_info is not created when the --no-lock option is used (because SHOW MASTER STATUS may be inconsistent), but under certain conditions xtrabackup_binlog_pos_innodb can be used instead to get consistent binlog coordinates, as described in working_with_binlogs.
--no-timestamp
This option prevents creation of a time-stamped subdirectory of the BACKUP-ROOT-DIR given on the command line. When it is specified, the backup is done in BACKUP-ROOT-DIR instead.
--no-version-check
This option disables the version check which is enabled by the --version-check option.
--parallel=NUMBER-OF-THREADS
This option accepts an integer argument that specifies the number of threads the xtrabackup child process should use to back up files concurrently. Note that this option works at the file level: if you have several .ibd files, they will be copied in parallel; if your tables are stored together in a single tablespace file, it will have no effect. When used with --decompress or --decrypt, this option allows multiple files to be decompressed and/or decrypted simultaneously. It is passed directly to xtrabackup's --parallel option. See the xtrabackup documentation for details.
--password=PASSWORD
This option accepts a string argument specifying the password to use when connecting to the database. It is passed to the mysql child process without alteration. See mysql --help for details.
--port=PORT
This option accepts a string argument that specifies the port to use when connecting to the database server with TCP/IP. It is passed to the mysql child process without alteration. See mysql --help for details.
--rebuild-indexes
This option only has effect when used together with the --apply-log option and is passed directly to xtrabackup. When used, makes xtrabackup rebuild all secondary indexes after applying the log. This option is normally used to prepare compact backups. See the xtrabackup documentation for more information.
--rebuild-threads=NUMBER-OF-THREADS
This option only has effect when used together with the --apply-log and --rebuild-indexes option and is passed directly to xtrabackup. When used, xtrabackup processes tablespaces in parallel with the specified number of threads when rebuilding indexes. See the xtrabackup documentation for more information.
--redo-only
This option should be used when preparing the base full backup and when merging all incrementals except the last one. It is passed directly to xtrabackup's xtrabackup --apply-log-only option. This forces xtrabackup to skip the "rollback" phase and do a "redo" only. This is necessary if the backup will have incremental changes applied to it later. See the xtrabackup documentation for details.
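The usual prepare sequence for a full backup with two incrementals would therefore be (directories are illustrative):

```shell
# Prepare the base backup with the rollback phase skipped.
innobackupex --apply-log --redo-only /data/backups/full
# Merge the first incremental, still redo-only.
innobackupex --apply-log --redo-only /data/backups/full \
    --incremental-dir=/data/backups/inc1
# Merge the last incremental without --redo-only.
innobackupex --apply-log /data/backups/full \
    --incremental-dir=/data/backups/inc2
```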
--rsync
Uses the rsync utility to optimize local file transfers. When this option is specified, innobackupex uses rsync to copy all non-InnoDB files instead of spawning a separate cp for each file, which can be much faster for servers with a large number of databases or tables. This option cannot be used together with --stream.
--safe-slave-backup
Stop slave SQL thread and wait to start backup until Slave_open_temp_tables in SHOW STATUS is zero. If there are no open temporary tables, the backup will take place, otherwise the SQL thread will be started and stopped until there are no open temporary tables. The backup will fail if Slave_open_temp_tables does not become zero after --safe-slave-backup-timeout seconds. The slave SQL thread will be restarted when the backup finishes.
--safe-slave-backup-timeout=SECONDS
How many seconds --safe-slave-backup should wait for Slave_open_temp_tables to become zero. Defaults to 300 seconds.
--scpopt = SCP-OPTIONS
This option accepts a string argument that specifies the command line options to pass to scp when the option --remote-host is specified. If the option is not specified, the default options are -Cp -c arcfour.
--slave-info
This option is useful when backing up a replication slave server. It prints the binary log position and name of the master server. It also writes this information to the xtrabackup_slave_info file as a CHANGE MASTER command. A new slave for this master can be set up by starting a slave server on this backup and issuing a CHANGE MASTER command with the binary log position saved in the xtrabackup_slave_info file.
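The resulting xtrabackup_slave_info file contains a single CHANGE MASTER statement along these lines (the log file name and position are illustrative):

```sql
CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000003', MASTER_LOG_POS=64907
```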
--socket
This option accepts a string argument that specifies the socket to use when connecting to the local database server with a UNIX domain socket. It is passed to the mysql child process without alteration. See mysql --help for details.
--sshopt=SSH-OPTIONS
This option accepts a string argument that specifies the command line options to pass to ssh when the option --remote-host is specified.
--stream=STREAMNAME
This option accepts a string argument that specifies the format in which to do the streamed backup. The backup will be done to STDOUT in the specified format. Currently, the supported formats are tar and xbstream; the xbstream utility is available in Percona XtraBackup distributions. If you specify a path after this option, it will be interpreted as the value of tmpdir.
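For example, a backup can be streamed in xbstream format and unpacked on another host over SSH (host name and paths are illustrative):

```shell
# Stream an xbstream-format backup to a remote host,
# using /tmp as the temporary directory.
innobackupex --stream=xbstream /tmp | \
    ssh user@backuphost "xbstream -x -C /data/backups"
```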
--tables-file=FILE
This option accepts a string argument that specifies a file containing a list of table names of the form database.table, one per line. The option is passed directly to xtrabackup's --tables-file option.
--throttle=IOS
This option accepts an integer argument that specifies the number of I/O operations (i.e., pairs of read+write) per second. It is passed directly to xtrabackup's --throttle option. NOTE: This option works only during the backup phase, i.e., it will not work with the --apply-log and --copy-back options.
--tmpdir=DIRECTORY
This option accepts a string argument that specifies the location where a temporary file will be stored. It may be used when --stream is specified; in that case, the transaction log will first be stored in a temporary file before being streamed or copied to a remote host. If the option is not specified, the default is to use the value of tmpdir read from the server configuration. innobackupex passes the tmpdir value specified in my.cnf as the --target-dir option to the xtrabackup binary. Both the [mysqld] and [xtrabackup] groups are read from my.cnf. If tmpdir appears in both, the value used depends on the order of those groups in my.cnf.
--use-memory=#
This option accepts a string argument that specifies the amount of memory in bytes for xtrabackup to use for crash recovery while preparing a backup. Multiples are supported providing the unit (e.g. 1MB, 1M, 1GB, 1G). It is used only with the option --apply-log. It is passed directly to xtrabackup's xtrabackup --use-memory option. See the xtrabackup documentation for details.
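As a sketch of how those unit multiples expand, assuming binary multiples (a hypothetical helper, not part of innobackupex):

```shell
# Hypothetical helper: expand a --use-memory style size suffix
# (M/MB, G/GB) into bytes, assuming binary multiples.
to_bytes() {
  case "$1" in
    *G|*GB) echo $(( ${1%%[GM]*} * 1024 * 1024 * 1024 )) ;;
    *M|*MB) echo $(( ${1%%[GM]*} * 1024 * 1024 )) ;;
    *)      echo "$1" ;;
  esac
}
to_bytes 1G     # 1073741824
to_bytes 100MB  # 104857600
```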
--user=USER
This option accepts a string argument that specifies the user (i.e., the MySQL username used when connecting to the server) to login as, if that's not the current user. It is passed to the mysql child process without alteration. See mysql --help for details.
--version
This option displays the innobackupex version and copyright notice and then exits.
--version-check
When this option is specified, innobackupex will perform a version check against the server on the backup stage after creating a server connection.

Author

Percona LLC and/or its affiliates

Info

February 04, 2016 2.2.9 Percona XtraBackup