cephfs-top provides top(1)-like functionality for the Ceph File System. Various client metrics are displayed and updated in real time.
Ceph Metadata Servers periodically send client metrics to the Ceph Manager, and the stats plugin in the Ceph Manager provides an interface to fetch these metrics.
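Before cephfs-top can display anything, the stats module must be enabled in the Ceph Manager, and a client with read capabilities must exist (cephfs-top uses client.fstop by default). A typical setup, run from a node with admin access to a live cluster, looks like:

```shell
# Enable the Manager stats plugin that aggregates the client metrics.
ceph mgr module enable stats

# Create the default client used by cephfs-top (read-only caps).
ceph auth get-or-create client.fstop mon 'allow r' mds 'allow r' osd 'allow r' mgr 'allow r'

# Start the top(1)-like display.
cephfs-top
```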
- --cluster [CLUSTER]
Ceph cluster to connect to. Defaults to ceph.
- --id [ID]
Client used to connect to the Ceph cluster. Defaults to fstop.
- --selftest
Perform a selftest. This mode performs a sanity check of the stats module.
- --conffile [CONFFILE]
Path to cluster configuration file
- -d [DELAY], --delay [DELAY]
Refresh interval in seconds (default: 1)
- --dump
Dump the metrics to stdout
- --dumpfs <fs_name>
Dump the metrics of the given filesystem to stdout
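Some example invocations combining the options above; these require a running cluster, and the cluster, client, and filesystem names shown are placeholders:

```shell
# Connect to a non-default cluster with a non-default client.
cephfs-top --cluster mycluster --id myclient

# Refresh the display every 5 seconds instead of every second.
cephfs-top -d 5

# Print the metrics once to stdout instead of the interactive display.
cephfs-top --dump

# Dump metrics for a single filesystem ("cephfs" is a placeholder name).
cephfs-top --dumpfs cephfs
```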
Descriptions of Fields
- chit: cap hit rate
- dlease: dentry lease rate
- ofiles: number of opened files
- oicaps: number of pinned caps
- oinodes: number of opened inodes
- rtio: total size of read IOs
- wtio: total size of write IOs
- raio: average size of read IOs
- waio: average size of write IOs
- rsp: speed of read IOs compared with the last refresh
- wsp: speed of write IOs compared with the last refresh
- rlatavg: average read latency
- rlatsd: standard deviation of read latency
- wlatavg: average write latency
- wlatsd: standard deviation of write latency
- mlatavg: average metadata latency
- mlatsd: standard deviation of metadata latency
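The speed and latency fields above can be illustrated with a short sketch. This is not cephfs-top's internal code; the function name, the sample values, and the sampling window are assumptions made for the example:

```python
import statistics

def io_speed(total_prev: int, total_now: int, delay: float) -> float:
    """Speed since the last refresh: the change in a cumulative IO
    counter (total size of read or write IOs) divided by the refresh
    interval in seconds (the --delay value)."""
    return (total_now - total_prev) / delay

# Example: the cumulative read IO size grew from 4096 to 12288 bytes
# over a 2-second refresh interval.
print(io_speed(4096, 12288, 2.0))   # 4096.0 (bytes/s)

# The latency fields report a mean and a standard deviation (not a
# variance) over observed per-request latencies, here in milliseconds.
lat_ms = [1.2, 0.8, 1.5, 1.0, 1.5]
print(statistics.mean(lat_ms))      # average latency, ~1.2 ms
print(statistics.stdev(lat_ms))     # standard deviation, ~0.31 ms
```

Note that a standard deviation is the square root of a variance; the two are related but not interchangeable, which is why the field descriptions above say "standard deviation".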
cephfs-top is part of Ceph, a massively scalable, open-source, distributed storage system. Please refer to the Ceph documentation at http://ceph.com/ for more information.
2010-2023, Inktank Storage, Inc. and contributors. Licensed under Creative Commons Attribution Share Alike 3.0 (CC-BY-SA-3.0)