mfsmaster.cfg man page
mfsmaster.cfg — main configuration file for mfsmaster
The file mfsmaster.cfg contains the configuration of the LizardFS metadata server process.
OPTION = VALUE
Lines starting with the # character are ignored.
Current personality of this instance of the metadata server. Valid values are master, shadow and ha-cluster-managed. If the installation is managed by an HA cluster, the only valid value is ha-cluster-managed; otherwise the valid values are master and shadow, in which case exactly one metadata server in a LizardFS installation shall have the master personality.
PERSONALITY = master means that this instance of the metadata server acts as the main metadata server, governing all file system metadata modifications.
PERSONALITY = shadow means that this instance of the metadata server acts as a backup metadata server, ready for immediate deployment as the new master in case the current master fails.
Metadata server personality can be changed at any moment, as long as the change is from shadow to master; changing personality the other way around is forbidden.
PERSONALITY = ha-cluster-managed means that this instance is managed by an HA cluster; the server runs in shadow mode until it is remotely promoted to master.
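A minimal sketch of the two standing personalities (exactly one server per installation gets master, the rest shadow):

    # on the main metadata server
    PERSONALITY = master

    # on each backup metadata server
    PERSONALITY = shadow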
where to store metadata files and lock file
user to run daemon as
group to run daemon as (optional; if empty, the default user group will be used)
name of process to place in syslog messages (default is mfsmaster)
whether to perform mlockall() to avoid swapping out mfsmaster process (default is 0, i.e. no)
nice level to run daemon with (default is -19 if possible; note: process must be started as root to increase priority)
alternative name of mfsexports.cfg file
alternative name of mfstopology.cfg file
alternative name of mfsgoals.cfg file
If a client mountpoint has a local chunkserver, and a given chunk happens to reside locally, then mfsmaster will list the local chunkserver first. However, when the local client mount issues many reads/writes to many local chunks, these requests can overload the local chunkserver and its disk subsystem. Setting this option to 0 (the default is 1) means that remote chunkservers will be considered equivalent to the local chunkserver.
This is useful when the network is faster than the disk, and when there is high I/O load on the client mountpoints.
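For example, on hosts where the network outpaces the local disks, remote chunkservers can be treated as equivalent to the local one (the option name follows the stock configuration file):

    PREFER_LOCAL_CHUNKSERVER = 0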
number of metadata change log files (default is 50)
number of previous metadata files to be kept (default is 1)
when this option is set (equals 1), the master will try to recover metadata from the changelog when it is started after a crash; otherwise it will refuse to start and mfsmetarestore should be used to recover the metadata (default is 0)
DEPRECATED - see OPERATIONS_DELAY_INIT
DEPRECATED - see OPERATIONS_DELAY_DISCONNECT
initial delay in seconds before starting chunk operations (default is 300)
chunk operations delay in seconds after chunkserver disconnection (default is 3600)
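For example, to let chunk operations resume sooner after short, planned chunkserver restarts, the disconnect delay can be lowered (values illustrative):

    OPERATIONS_DELAY_INIT = 300
    OPERATIONS_DELAY_DISCONNECT = 600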
IP address to listen on for metalogger connections (* means any)
port to listen on for metalogger connections (default is 9419)
how many seconds of change logs have to be preserved in memory (default is 600; note: logs are stored in blocks of 5k lines, so sometimes the real number of seconds may be a little bigger; zero disables extra log storage)
IP address to listen on for chunkserver connections (* means any)
port to listen on for chunkserver connections (default is 9420)
IP address to listen on for client (mount) connections (* means any)
port to listen on for client (mount) connections (default is 9421)
IP address to listen on for tapeserver connections (* means any)
port to listen on for tapeserver connections (default is 9424)
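For example, to bind all listeners to a single interface while keeping the default ports (a sketch; the MATO*_LISTEN_* option names follow the stock configuration file and the address is illustrative):

    MATOML_LISTEN_HOST = 192.168.1.10
    MATOML_LISTEN_PORT = 9419
    MATOCS_LISTEN_HOST = 192.168.1.10
    MATOCS_LISTEN_PORT = 9420
    MATOCL_LISTEN_HOST = 192.168.1.10
    MATOCL_LISTEN_PORT = 9421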
Chunks loop shouldn't check more chunks per second than the given number (default is 100000)
Chunks loop will check all chunks within the specified time in seconds (default is 300) unless CHUNKS_LOOP_MAX_CPS forces slower execution.
Time in milliseconds between chunks loop executions (default is 1000).
Hard limit on CPU usage by chunks loop (percentage value, default is 60).
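For example, a conservative chunks loop setup that halves the default test rate and CPU cap (a sketch; CHUNKS_LOOP_MIN_TIME and CHUNKS_LOOP_MAX_CPS appear in the notes below, while the other two option names follow the stock configuration file):

    CHUNKS_LOOP_MAX_CPS = 50000
    CHUNKS_LOOP_MIN_TIME = 300
    CHUNKS_LOOP_PERIOD = 1000
    CHUNKS_LOOP_MAX_CPU = 30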
Soft maximum number of chunks to delete on one chunkserver (default is 10)
Hard maximum number of chunks to delete on one chunkserver (default is 25)
Maximum number of chunks to replicate to one chunkserver (default is 2)
Maximum number of chunks to replicate from one chunkserver (default is 10)
Percentage of endangered chunks that should be replicated with high priority. Example: when set to 0.2, up to 20% of chunks served in one turn would be extracted from the endangered priority queue. When set to 1 (max), no other chunks would be processed as long as there are any endangered chunks in the queue (not advised). (default is 0, i.e. there is no overhead for prioritizing endangered chunks)
Max capacity of endangered chunks queue. This value can limit memory usage of master server if there are lots of endangered chunks in the system. This value is ignored if ENDANGERED_CHUNKS_PRIORITY is set to 0. (default is 1Mi, i.e. no more than 1Mi chunks will be kept in a queue).
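For example, to spend up to 20% of each turn on endangered chunks while keeping the queue bounded (ENDANGERED_CHUNKS_PRIORITY appears above; the capacity option name follows the stock configuration file):

    ENDANGERED_CHUNKS_PRIORITY = 0.2
    ENDANGERED_CHUNKS_MAX_CAPACITY = 1048576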
A maximum difference in disk usage between chunkservers that doesn't trigger chunk rebalancing (default is 0.1, i.e. 10%).
When balancing disk usage, allow moving chunks between servers with different labels (default is 0, i.e. chunks will be moved only between servers with the same label).
Reject mfsmounts older than 1.6.0 (0 or 1, default is 0). Note that mfsexports access control is NOT used for those old clients.
Configuration of global I/O limits (default is no I/O limiting)
How often mountpoints will request bandwidth allocations under constant, predictable load (default is 0.1)
After inactivity, no waiting is required to transfer the amount of data equivalent to normal data flow over the period of that many milliseconds (default is 250)
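For example, global I/O limiting could be enabled as follows (a sketch; the option names follow the stock configuration file and the path is illustrative):

    GLOBALIOLIMITS_FILENAME = /etc/mfs/globaliolimits.cfg
    GLOBALIOLIMITS_RENEGOTIATION_PERIOD_SECONDS = 0.1
    GLOBALIOLIMITS_ACCUMULATE_MS = 250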
how often the metadata checksum shall be sent to backup servers (default: every 50 metadata updates)
how fast metadata should be recalculated in the background (default: 100 objects per function call)
whether checksum verification should be disabled while applying the changelog
when this option is set to 1, inode access time is not updated on every access; otherwise (when set to 0) it is updated (default is 0)
minimal time in seconds between metadata dumps caused by requests from shadow masters (default is 1800)
Time in seconds for which client session data (e.g. list of open files) should be sustained in the master server after connection with the client was lost. Values between 60 and 604800 (one week) are accepted. (default is 86400)
When this option is set to 1, Berkeley DB is used for storing file/directory names in a file (DATA_PATH/name_storage.db). By default all strings are kept in system memory. (default is 0)
Size of memory cache (in MB) for file/directory names used by Berkeley DB storage. (default is 10)
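For example, to move name storage out of system memory into Berkeley DB with a 64 MB cache (a sketch; the option names follow the stock configuration file):

    USE_BDB_FOR_NAME_STORAGE = 1
    BDB_NAME_STORAGE_CACHE_SIZE = 64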
When this option is set to 1, the process of selecting chunkservers for chunks will try to avoid using those that share the same IP address. (default is 0)
minimum number of redundant chunk parts that can be lost before a chunk becomes endangered (default is 0)
This option can be used to specify the initial number of snapshotted nodes that will be atomically cloned before enqueuing the task for execution in fixed-sized batches. (default is 1000)
This option specifies the maximum initial batch size set for a snapshot request. (default is 10000)
FILE_TEST_LOOP_MIN_TIME: the test files loop will try to check all files within the specified time in seconds (default is 3600). It's possible for the loop to take more time if the master server is busy or the machine doesn't have enough processing power to make all the needed calculations.
The options below are mandatory for all shadow instances:
address of the host running the LizardFS metadata server that currently acts as master
port number where the LizardFS metadata server currently running as master listens for connections from shadows and metaloggers (default is 9420)
delay in seconds before trying to reconnect to metadata server after disconnection (default is 1)
timeout (in seconds) for metadata server connections (default is 60)
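Putting the mandatory options together, a minimal shadow instance could be configured like this (a sketch; the MASTER_* option names follow the stock configuration file and the address is illustrative):

    PERSONALITY = shadow
    MASTER_HOST = 192.168.1.10
    MASTER_PORT = 9420
    MASTER_RECONNECTION_DELAY = 1
    MASTER_TIMEOUT = 60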
When set, a percentage of load will be added to chunkserver disk usage to determine the most fitting chunkserver. Heavily loaded chunkservers will be picked for operations less frequently. (default is 0; correct values are in the range from 0 to 0.5)
Chunks in the master are tested in a loop. Speed (or frequency) is regulated by two options: CHUNKS_LOOP_MIN_TIME and CHUNKS_LOOP_MAX_CPS. The first defines the minimal time of the loop and the second the maximal number of chunk tests per second. Typically at the beginning, when the number of chunks is small, the loop time is constant, regulated by CHUNKS_LOOP_MIN_TIME, but when the number of chunks gets bigger, the loop time can increase according to CHUNKS_LOOP_MAX_CPS.
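For example, with the defaults (CHUNKS_LOOP_MIN_TIME = 300, CHUNKS_LOOP_MAX_CPS = 100000), 10 million chunks need only 10000000 / 100000 = 100 seconds of testing, so the loop still takes the minimal 300 seconds; with 50 million chunks the loop stretches to 50000000 / 100000 = 500 seconds.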
Deletion limits are defined as a soft and a hard limit. When the number of chunks to delete increases from loop to loop, the current limit can be temporarily raised above the soft limit, but never above the hard limit.
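With the default limits, for example, bursts of deletions may temporarily run at up to 25 chunks per chunkserver, while steady-state deletion stays at 10 (a sketch; the option names follow the stock configuration file):

    CHUNKS_SOFT_DEL_LIMIT = 10
    CHUNKS_HARD_DEL_LIMIT = 25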
Copyright 2008-2009 Gemius SA, 2013-2017 Skytechnology sp. z o.o.
LizardFS is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, version 3.
LizardFS is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License along with LizardFS. If not, see http://www.gnu.org/licenses/.
globaliolimits.cfg(5), mfsexports.cfg(5), mfsgoals.cfg(5), mfsmaster(8), mfstopology.cfg(5)