rbd-replay [ options ] replay_file
rbd-replay is a utility for replaying RADOS block device (RBD) workloads.
- -c ceph.conf, --conf ceph.conf
Use ceph.conf configuration file instead of the default /etc/ceph/ceph.conf to determine monitor addresses during startup.
- -p pool, --pool pool
Interact with the given pool. Defaults to 'rbd'.
- --latency-multiplier multiplier
Multiplies inter-request latencies. Default: 1.
- --read-only
Only replay non-destructive requests.
- --map-image rule
Add a rule to map image names in the trace to image names in the replay cluster. A rule of the form image1@snap1=image2@snap2 maps snap1 of image1 to snap2 of image2.
- --dump-perf-counters
Experimental: Dump performance counters to standard output before an image is closed. Performance counters may be dumped multiple times if multiple images are closed, or if the same image is opened and closed multiple times. Performance counters and their meanings may change between versions.
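The options above can be combined in a single invocation. As an illustrative sketch (the trace file, image, and snapshot names here are assumptions, not output from a real cluster), the following replays a trace at half speed, skips destructive requests, and redirects a snapshot of a production image to a snapshot of a test image:

```shell
rbd-replay --read-only --latency-multiplier=2 \
    --map-image=prod_image@snap1=test_image@snap1 workload1
```

Note that --latency-multiplier scales the recorded gaps between requests, so values above 1 slow the replay down and 0 removes the gaps entirely.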
To replay workload1 as fast as possible:
rbd-replay --latency-multiplier=0 workload1
To replay workload1 but use test_image instead of prod_image:
rbd-replay --map-image=prod_image=test_image workload1
rbd-replay is part of Ceph, a massively scalable, open-source, distributed storage system. Please refer to the Ceph documentation at http://ceph.com/docs for more information.
2010-2021, Inktank Storage, Inc. and contributors. Licensed under Creative Commons Attribution Share Alike 3.0 (CC-BY-SA-3.0)