[gpfsug-discuss] gpfs filesets question
J. Eric Wonderley
eric.wonderley at vt.edu
Thu Apr 16 18:36:35 BST 2020
Hi Fred:
I do. I have 3 pools: system, an SSD data pool (fc_ssd400G), and a spinning
disk pool (fc_8T).
I believe the SSD data pool is empty at the moment, and the system pool is
SSD and only contains metadata.
[root@cl005 ~]# mmdf home -P fc_ssd400G
disk                disk size  failure holds    holds           free KB             free KB
name                    in KB    group metadata data     in full blocks        in fragments
--------------- ------------- -------- -------- ----- -------------------- -------------------
Disks in storage pool: fc_ssd400G (Maximum disk size allowed is 97 TB)
r10f1e8            1924720640     1001 No       Yes      1924644864 (100%)          9728 ( 0%)
r10f1e7            1924720640     1001 No       Yes      1924636672 (100%)         17408 ( 0%)
r10f1e6            1924720640     1001 No       Yes      1924636672 (100%)         17664 ( 0%)
r10f1e5            1924720640     1001 No       Yes      1924644864 (100%)          9728 ( 0%)
r10f6e8            1924720640     1001 No       Yes      1924644864 (100%)          9728 ( 0%)
r10f1e9            1924720640     1001 No       Yes      1924644864 (100%)          9728 ( 0%)
r10f6e9            1924720640     1001 No       Yes      1924644864 (100%)          9728 ( 0%)
                -------------                         -------------------- -------------------
(pool total)      13473044480                           13472497664 (100%)         83712 ( 0%)
More or less empty.
Interesting...
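If it helps narrow this down, here is a minimal check (the file path below is
just a placeholder, not one of our real files) for which pool a given file's
data actually landed in, plus the placement policy that decides that:

[root@cl005 ~]# mmlsattr -L /home/someuser/somefile | grep -i "storage pool"
[root@cl005 ~]# mmlspolicy home -L

If the placement rules steer different filesets to different pools, that would
line up with what Fred describes below.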
On Thu, Apr 16, 2020 at 1:11 PM Frederick Stock <stockf at us.ibm.com> wrote:
> Do you have more than one GPFS storage pool in the system? If you do, and
> they align with the filesets, then that might explain why moving data from
> one fileset to another is causing increased IO operations.
>
> Fred
> __________________________________________________
> Fred Stock | IBM Pittsburgh Lab | 720-430-8821
> stockf at us.ibm.com
>
>
>
> ----- Original message -----
> From: "J. Eric Wonderley" <eric.wonderley at vt.edu>
> Sent by: gpfsug-discuss-bounces at spectrumscale.org
> To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
> Cc:
> Subject: [EXTERNAL] [gpfsug-discuss] gpfs filesets question
> Date: Thu, Apr 16, 2020 12:32 PM
>
> I have filesets set up in a filesystem... it looks like:
> [root@cl005 ~]# mmlsfileset home -L
> Filesets in file system 'home':
> Name          Id  RootInode  ParentId  Created                   InodeSpace  MaxInodes  AllocInodes  Comment
> root           0          3        --  Tue Jun 30 07:54:09 2015           0  402653184    320946176  root fileset
> hess           1  543733376         0  Tue Jun 13 14:56:13 2017           0          0            0
> predictHPC     2    1171116         0  Thu Jan  5 15:16:56 2017           0          0            0
> HYCCSIM        3  544258049         0  Wed Jun 14 10:00:41 2017           0          0            0
> socialdet      4  544258050         0  Wed Jun 14 10:01:02 2017           0          0            0
> arc            5    1171073         0  Thu Jan  5 15:07:09 2017           0          0            0
> arcadm         6    1171074         0  Thu Jan  5 15:07:10 2017           0          0            0
>
> I believe these are dependent filesets, dependent on the root fileset.
> Anyhow, a user wants to move a large amount of data from one fileset to
> another. Would this be a metadata-only operation? He has attempted to move
> a small amount of data and has noticed some thrashing.
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
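For what it's worth, one thing I plan to check: my understanding is that GPFS
treats a rename across a fileset boundary (even between dependent filesets in
the same inode space) like crossing a filesystem, so mv falls back to
copy-and-delete rather than a metadata-only rename. A quick way to confirm on
a test file (the paths here are only placeholders):

[root@cl005 ~]# strace -f -e trace=rename,renameat,renameat2 mv /home/fileset_a/testfile /home/fileset_b/

If the rename call fails with EXDEV, mv is copying the data and unlinking the
source, so the blocks get rewritten (and re-placed by the current policy)
rather than just re-linked, which would explain the thrashing he saw.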