[gpfsug-discuss] mmperfmon report some "null" data
Dorigo Alvise (PSI)
alvise.dorigo at psi.ch
Tue Sep 11 10:05:23 BST 2018
Dear experts,
during a intensive writing into a GPFS FS (~9.5 GB/s), if I run mmperfmon to collect performance data I get many "null" strings instead of real data::
[root@sf-dss-1 ~]# date;mmperfmon query 'sf-dssio-.*.psi.ch|GPFSNSDFS|RAW|gpfs_nsdfs_bytes_written' --short --number-buckets 10 -b 1
Tue Sep 11 10:57:06 CEST 2018
Legend:
1: sf-dssio-1.psi.ch|GPFSNSDFS|RAW|gpfs_nsdfs_bytes_written
2: sf-dssio-2.psi.ch|GPFSNSDFS|RAW|gpfs_nsdfs_bytes_written
Row Timestamp _1 _2
1 2018-09-11-10:56:57 4135583744 4329193472
2 2018-09-11-10:56:58 4799332352 4697755648
3 2018-09-11-10:56:59 4799332352 4697755648
4 2018-09-11-10:57:00 null null
5 2018-09-11-10:57:01 null null
6 2018-09-11-10:57:02 null null
7 2018-09-11-10:57:03 null null
8 2018-09-11-10:57:04 null null
9 2018-09-11-10:57:05 null null
10 2018-09-11-10:57:06 null null
It gets even worse if I reduce the number of buckets:
[root@sf-dss-1 ~]# date;mmperfmon query 'sf-dssio-.*.psi.ch|GPFSNSDFS|RAW|gpfs_nsdfs_bytes_written' --short --number-buckets 5 -b 1
Tue Sep 11 10:59:26 CEST 2018
Legend:
1: sf-dssio-1.psi.ch|GPFSNSDFS|RAW|gpfs_nsdfs_bytes_written
2: sf-dssio-2.psi.ch|GPFSNSDFS|RAW|gpfs_nsdfs_bytes_written
Row Timestamp _1 _2
1 2018-09-11-10:59:21 null null
2 2018-09-11-10:59:22 null null
3 2018-09-11-10:59:23 null null
4 2018-09-11-10:59:24 null null
5 2018-09-11-10:59:25 null null
To get real data, the number of buckets must be at least 6, and sometimes it is better to set it to 10; otherwise there is a risk of getting only "null" values anyway.
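My working hypothesis (an assumption on my side, not confirmed) is that the sensors push samples to the collector once per period, so the collector lags a few seconds behind "now"; with 1-second buckets, any bucket newer than the collector's latest sample then comes back as null. A minimal sketch of that arithmetic, with a hypothetical ~7 s lag:

```python
import math

def trailing_null_buckets(num_buckets, bucket_size, collector_lag):
    """Number of trailing buckets expected to be null, assuming buckets
    newer than (now - collector_lag) have no data at the collector yet."""
    nulls = math.ceil(collector_lag / bucket_size)
    return min(num_buckets, nulls)

# With an assumed ~7 s lag and 1 s buckets:
print(trailing_null_buckets(10, 1, 7))  # 7 -> matches rows 4-10 above being null
print(trailing_null_buckets(5, 1, 7))   # 5 -> all five buckets null, as in the second query
```

That would explain why asking for more buckets helps: the extra, older buckets fall behind the collector's newest sample and so contain real data.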
The question is: which setting in my mmperfmon configuration (see the dump of "mmperfmon config show" below) could be wrong and produce those null values?
My system is a Lenovo DSS-G220 updated to version dss-g-2.0a (GPFS version 4.2.3-7).
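In the meantime I am using the following workaround sketch (again an assumption: it relies on mmperfmon query accepting an explicit StartTime/EndTime pair, as documented for 4.2.3, and the 10/20-second margins are guesses to be tuned to the real sensor/collector lag). The idea is to query a window that ends a few seconds in the past, so every requested bucket lies behind the newest sample the collector has received:

```shell
# Hypothetical query window ending 10 s in the past (GNU date syntax).
METRIC='sf-dssio-.*.psi.ch|GPFSNSDFS|RAW|gpfs_nsdfs_bytes_written'
START=$(date -d '20 seconds ago' '+%Y-%m-%d-%H:%M:%S')
END=$(date -d '10 seconds ago' '+%Y-%m-%d-%H:%M:%S')
# echo shows the command that would run; drop the echo on a node with mmperfmon.
echo mmperfmon query "$METRIC" "$START" "$END" --short -b 1
```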
thanks,
Alvise
------------------------------------
cephMon = "/opt/IBM/zimon/CephMonProxy"
cephRados = "/opt/IBM/zimon/CephRadosProxy"
colCandidates = "sf-dss-1", "daas-mon.psi.ch"
colRedundancy = 2
collectors = {
host = ""
port = "4739"
}
config = "/opt/IBM/zimon/ZIMonSensors.cfg"
ctdbstat = ""
daemonize = T
hostname = ""
ipfixinterface = "0.0.0.0"
logfile = "/var/log/zimon/ZIMonSensors.log"
loglevel = "info"
mmcmd = "/opt/IBM/zimon/MMCmdProxy"
mmdfcmd = "/opt/IBM/zimon/MMDFProxy"
mmpmon = "/opt/IBM/zimon/MmpmonSockProxy"
piddir = "/var/run"
release = "4.2.3-4"
sensors = {
name = "CPU"
period = 5
},
{
name = "Load"
period = 5
},
{
name = "Memory"
period = 5
},
{
name = "Network"
period = 1
},
{
name = "Netstat"
period = 0
},
{
name = "Diskstat"
period = 0
},
{
name = "DiskFree"
period = 60
restrict = "sf-dss-1.psi.ch"
},
{
name = "Infiniband"
period = 1
},
{
name = "GPFSDisk"
period = 1
restrict = "nsdNodes"
},
{
name = "GPFSFilesystem"
period = 1
},
{
name = "GPFSNSDDisk"
period = 1
restrict = "nsdNodes"
},
{
name = "GPFSNSDFS"
period = 1
restrict = "nsdNodes"
},
{
name = "GPFSPoolIO"
period = 1
},
{
name = "GPFSVFS"
period = 1
},
{
name = "GPFSIOC"
period = 1
},
{
name = "GPFSVIO"
period = 1
},
{
name = "GPFSPDDisk"
period = 1
restrict = "nsdNodes"
},
{
name = "GPFSvFLUSH"
period = 1
},
{
name = "GPFSNode"
period = 1
},
{
name = "GPFSNodeAPI"
period = 1
},
{
name = "GPFSFilesystemAPI"
period = 1
},
{
name = "GPFSLROC"
period = 1
},
{
name = "GPFSCHMS"
period = 1
},
{
name = "GPFSAFM"
period = 5
},
{
name = "GPFSAFMFS"
period = 5
},
{
name = "GPFSAFMFSET"
period = 5
},
{
name = "GPFSRPCS"
period = 1
},
{
name = "GPFSWaiters"
period = 5
},
{
name = "GPFSFilesetQuota"
period = 60
restrict = "sf-dss-1"
},
{
name = "GPFSFileset"
period = 60
restrict = "sf-dss-1"
},
{
name = "GPFSPool"
period = 60
restrict = "sf-dss-1"
},
{
name = "GPFSDiskCap"
period = 0
}
smbstat = ""