ocf_heartbeat_mdraid - Man Page

Manages Linux software RAID (MD) devices on shared storage

Synopsis

mdraid [start | stop | monitor | meta-data | validate-all]

Description

This resource agent manages Linux software RAID (MD) devices on a shared storage medium. It ensures that non-clustered MD arrays are prohibited from being started as clones, which would cause data corruption (e.g., on raid6 arrays), unless forced (see the force_clones parameter). Clustered MD RAID layouts (see below) are discovered and allowed to be cloned by default; there is no need to set force_clones for them.

It uses mdadm(8) to start, stop, and monitor the MD devices.

Supported clustered (i.e., clonable active-active) arrays are linear, raid0, and clustered raid1/raid10 (i.e., created with mdadm(8) --bitmap=clustered).
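A clustered raid1 array suitable for cloning might be created along these lines (a sketch only; the device paths are illustrative placeholders for shared-storage devices, and a running cluster with DLM support is assumed):

```shell
# Sketch: create a clustered raid1 array for active-active use.
# /dev/sda and /dev/sdb are placeholder shared-storage devices.
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
      --bitmap=clustered /dev/sda /dev/sdb
```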

Supported Parameters

mdadm_conf

The MD RAID configuration file (e.g., /etc/mdadm.conf).

(required, string, no default)
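A minimal configuration-file entry for a managed array might look like the following (the UUID and device names are illustrative placeholders, not values from this agent):

```shell
# Example /etc/mdadm.conf fragment (placeholder values)
DEVICE /dev/sda /dev/sdb
ARRAY /dev/md0 metadata=1.2 UUID=aaaaaaaa:bbbbbbbb:cccccccc:dddddddd
```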

md_dev

MD array block device to use (e.g., /dev/md0 or /dev/md/3). With shared access to the array's storage, this should preferably be a clustered raid1 or raid10 array created with --bitmap=clustered, assuming its resource will be cloned (i.e., active-active access).

Be sure to disable auto-assembly for the resource-managed arrays!

(required, string, no default)
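One common way to disable auto-assembly for resource-managed arrays is via mdadm.conf (a sketch; the exact policy may also depend on how the distribution's initramfs handles MD devices):

```shell
# In /etc/mdadm.conf: disable auto-assembly for all arrays,
# so that only the cluster resource manager starts them.
AUTO -all
```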

force_stop

If processes or kernel threads are using the array, it cannot be stopped. With this parameter enabled, the agent will try to stop such processes, first by sending TERM and then, if that does not help within a few seconds, by sending KILL. The lsof(8) program is required to obtain the list of array users. Kernel threads, of course, cannot be stopped this way. If the processes are critical for data integrity, set this parameter to false; note that in that case the stop operation will fail and the node will be fenced.

(optional, boolean, default false)
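The forced-stop procedure can be illustrated roughly as follows (this is not the agent's actual code; the device path and grace period are illustrative):

```shell
# Rough illustration of force_stop: enumerate array users with lsof,
# TERM them, wait briefly, then KILL any stragglers.
pids=$(lsof -t /dev/md0 2>/dev/null)
[ -n "$pids" ] && kill -TERM $pids
sleep 5
pids=$(lsof -t /dev/md0 2>/dev/null)
[ -n "$pids" ] && kill -KILL $pids
```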

wait_for_udev

Wait until udevd has created the device node during the start operation. On a normally loaded host this should happen quickly, but you may be unlucky. If you are not using udev, set this to "no".

(optional, boolean, default true)

force_clones

Activating the same, non-clustered MD RAID array (i.e. single-host raid1/4/5/6/10) on multiple nodes at the same time will result in data corruption and thus is forbidden by default.

A safe exception would be an (exotic) setup in which arrays merely share the same name across all nodes but are in fact based on distinct (non-shared) storage.

Only set this to "true" if you know what you are doing!

(optional, boolean, default false)

Supported Actions

This resource agent supports the following actions (operations):

start

Starts the resource. Suggested minimum timeout: 20s.

stop

Stops the resource. Suggested minimum timeout: 20s.

monitor

Performs a detailed status check. Suggested minimum timeout: 20s. Suggested interval: 10s.

validate-all

Performs a validation of the resource configuration. Suggested minimum timeout: 5s.

meta-data

Retrieves resource agent metadata (internal use only). Suggested minimum timeout: 5s.

Example CRM Shell

The following is an example configuration for a mdraid resource using the crm(8) shell:

primitive p_mdraid ocf:heartbeat:mdraid \
  params \
    mdadm_conf=string \
    md_dev=string \
  op monitor depth="0" timeout="20s" interval="10s"
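For active-active access to a clustered array, the primitive would typically be cloned; a sketch with illustrative parameter values:

```shell
primitive p_mdraid ocf:heartbeat:mdraid \
  params \
    mdadm_conf="/etc/mdadm.conf" \
    md_dev="/dev/md0" \
  op monitor depth="0" timeout="20s" interval="10s"
clone cl_mdraid p_mdraid meta interleave="true"
```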

Example PCS

The following is an example configuration for a mdraid resource using pcs(8):

pcs resource create p_mdraid ocf:heartbeat:mdraid \
  mdadm_conf=string \
  md_dev=string \
  op monitor OCF_CHECK_LEVEL="0" timeout="20s" interval="10s"
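With pcs, cloning the resource for active-active use might look like this (illustrative values; the trailing clone syntax is available in recent pcs versions and may differ in older releases):

```shell
pcs resource create p_mdraid ocf:heartbeat:mdraid \
  mdadm_conf=/etc/mdadm.conf \
  md_dev=/dev/md0 \
  op monitor OCF_CHECK_LEVEL="0" timeout="20s" interval="10s" \
  clone interleave=true
```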

See Also

http://clusterlabs.org/

Author

ClusterLabs contributors (see the resource agent source for information about individual authors)

Info

03/02/2021 resource-agents UNKNOWN OCF resource agents