From chris.schlipalius at pawsey.org.au Mon Aug 3 08:01:00 2020
From: chris.schlipalius at pawsey.org.au (Chris Schlipalius)
Date: Mon, 03 Aug 2020 15:01:00 +0800
Subject: [gpfsug-discuss] Spectrum Scale Usergroup Australia online event
(August 27) announced.
Message-ID: <794431F1-98D4-4A8A-AE41-50AC8FB3BA72@pawsey.org.au>
Hi All
Please see the event link and details on the Spectrum Scale Usergroup organisation website under Events: https://www.spectrumscaleug.org/event/spectrum-scale-usergroup-australia/
Regards,
Chris Schlipalius
Team Lead, Data Storage Infrastructure, Supercomputing Platforms, Pawsey Supercomputing Centre (CSIRO)
1 Bryce Avenue
Kensington WA 6151
Australia
Tel +61 8 6436 8815
Email chris.schlipalius at pawsey.org.au
Web www.pawsey.org.au
From jonathan.buzzard at strath.ac.uk Mon Aug 3 12:32:47 2020
From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard)
Date: Mon, 3 Aug 2020 12:32:47 +0100
Subject: [gpfsug-discuss] DSS-G support period
Message-ID: <415b09f3-65fb-3f77-124f-3008a47a810c@strath.ac.uk>
I notice that there is now a 3.x version of the DSS-G software that is
based on RHEL8, which took me a bit by surprise as the ESS still seems
to be on RHEL7.
I did, however, notice that 2.6b, which is still based on RHEL7, was
released after 3.0a.
So this raises the question: how much longer will the 2.x/RHEL7 versions
of the DSS-G be supported? The reason for asking is that the
installation/upgrade instructions for the 3.x software state that the
xCAT server must be running RHEL8, which more or less means a reinstall -
a big task, as the xCAT server is also used to deploy the compute cluster.
JAB.
--
Jonathan A. Buzzard Tel: +44141-5483420
HPC System Administrator, ARCHIE-WeSt.
University of Strathclyde, John Anderson Building, Glasgow. G4 0NG
From stef.coene at docum.org Mon Aug 3 15:07:43 2020
From: stef.coene at docum.org (Stef Coene)
Date: Mon, 3 Aug 2020 16:07:43 +0200
Subject: [gpfsug-discuss] Backend corruption
Message-ID: <4943adc3-408e-12c2-2204-b1e3c32ae133@docum.org>
Hi,
We have a GPFS file system which uses, among other storage, a V5000 as
backend.
A fault in the datacenter's fire detection system triggered a fire alarm.
The result was that the V5000 had a lot of broken disks. Most of the
disks recovered fine after a reseat, but some data is corrupted on the
V5000.
This means that for 22 MB of data, the V5000 returns a read error to
GPFS.
We migrated most of the data to other disks, but there is still 165 GB
left in the V5000 pool.
When we try to remove the disks with mmdeldisk, it fails after a while
and marks some of the disks as down.
It generated a file listing affected inodes; here are two example lines:
9168519 0:0 0 1 1 exposed illreplicated illplaced REGULAR_FILE Error: 218 Input/output error
9251611 0:0 0 1 1 exposed illreplicated REGULAR_FILE Error: 218 Input/output error
How can I get a list of files that use data in the V5000 pool?
The data is written by CommVault. When I have a list of files, I can
determine the impact on the application.
Stef
From UWEFALKE at de.ibm.com Mon Aug 3 16:21:32 2020
From: UWEFALKE at de.ibm.com (Uwe Falke)
Date: Mon, 3 Aug 2020 17:21:32 +0200
Subject: [gpfsug-discuss] Backend corruption
In-Reply-To: <4943adc3-408e-12c2-2204-b1e3c32ae133@docum.org>
References: <4943adc3-408e-12c2-2204-b1e3c32ae133@docum.org>
Message-ID:
Hi, Stef,
if just that V5000 has provided the storage for one of your pools
entirely, and if your metadata are still uncorrupted, an inode scan with a
suitable policy should yield the list of files in that pool.
If I am not mistaken, the list policy could look like
RULE 'list_v5000' LIST 'v5000_filelist' FROM POOL '<pool_name>'
Put that into a (policy) file and run it with mmapplypolicy against the
file system in question; it should produce a file listing in
/tmp/v5000_filelist. If it doesn't work exactly like that (I might have
made one or more mistakes), check out the information lifecycle section in
the Scale admin guide.
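For example, a minimal sketch, assuming the pool is named 'V500003', the
rule is saved in /tmp/v5000.pol and the file system device is gpfs0 (all
placeholder names; check the mmapplypolicy man page for the exact options):

  # -I defer: build the candidate list(s) without executing any action
  # -f: path prefix for the generated list file(s)
  mmapplypolicy gpfs0 -P /tmp/v5000.pol -I defer -f /tmp/v5000

The matching files should then end up in /tmp/v5000.list.v5000_filelist.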
If the prereqs for the above are not met, you need to run more expensive
investigations (using tsdbfs for all block addresses on v5000-provided
NSDs).
Mit freundlichen Grüßen / Kind regards
Dr. Uwe Falke
IT Specialist
Global Technology Services / Project Services Delivery / High Performance
Computing
+49 175 575 2877 Mobile
Rathausstr. 7, 09111 Chemnitz, Germany
uwefalke at de.ibm.com
IBM Services
IBM Data Privacy Statement
IBM Deutschland Business & Technology Services GmbH
Geschäftsführung: Dr. Thomas Wolter, Sven Schooss
Sitz der Gesellschaft: Ehningen
Registergericht: Amtsgericht Stuttgart, HRB 17122
From stef.coene at docum.org Tue Aug 4 07:41:26 2020
From: stef.coene at docum.org (Stef Coene)
Date: Tue, 4 Aug 2020 08:41:26 +0200
Subject: [gpfsug-discuss] Backend corruption
In-Reply-To:
References: <4943adc3-408e-12c2-2204-b1e3c32ae133@docum.org>
Message-ID: <660a8604-92b1-cda5-05c8-c768ff01e9a0@docum.org>
Hi,
I tried to use a policy to find out which files are located on the broken
disks.
But it is not finding any files or directories (I cleaned up some of the
output):
[I] GPFS Current Data Pool Utilization in KB and %
Pool_Name KB_Occupied KB_Total Percent_Occupied
V500003 173121536 69877104640 0.247751444%
[I] 29609813 of 198522880 inodes used: 14.915063%.
[I] Loaded policy rules from test.rule.
rule 'ListRule'
list 'ListName'
from pool 'V500003'
[I] Directories scan: 28649029 files, 960844 directories, 0 other objects, 0 'skipped' files and/or errors.
[I] Inodes scan: 28649029 files, 960844 directories, 0 other objects, 0 'skipped' files and/or errors.
[I] Summary of Rule Applicability and File Choices:
 Rule#  Hit_Cnt  KB_Hit  Chosen  KB_Chosen  KB_Ill  Rule
 0      0        0       0       0          0       RULE 'ListRule' LIST 'ListName' FROM POOL 'V500003'
[I] Filesystem objects with no applicable rules: 29609873.
[I] A total of 0 files have been migrated, deleted or processed by an EXTERNAL EXEC/script; 0 'skipped' files and/or errors.
So the policy is not finding any files, but there is still some data in
the V500003 pool?
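For reference, the per-disk occupancy of that pool can be cross-checked
with something like the following (a sketch, assuming the file system
device is gpfs0; see the mmdf man page):

  # show capacity and usage for the disks in pool V500003 only
  mmdf gpfs0 -P V500003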
Stef
From UWEFALKE at de.ibm.com Tue Aug 4 10:31:29 2020
From: UWEFALKE at de.ibm.com (Uwe Falke)
Date: Tue, 4 Aug 2020 11:31:29 +0200
Subject: [gpfsug-discuss] Backend corruption
In-Reply-To: <660a8604-92b1-cda5-05c8-c768ff01e9a0@docum.org>
References: <4943adc3-408e-12c2-2204-b1e3c32ae133@docum.org>
<660a8604-92b1-cda5-05c8-c768ff01e9a0@docum.org>
Message-ID:
Hi Stef,
> So the policy is not finding any files, but there is still some data in
> the V500003 pool?
So it seems to me. You attempted to empty the pool before, didn't you?
Maybe something got confused internally along the way, or the policy finds
only readable files and the corrupted ones carry an internal flag marking
them unreadable ... If you know the faulty (but, per current metadata,
occupied) disk addresses, you could use mmfileid to find the inodes which
should have used those blocks.
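For example, a sketch assuming the file system device is gpfs0 and an NSD
named nsd_v5000_01 (both placeholders; the exact DiskDesc syntax is in the
mmfileid man page):

  # list files that have blocks on a particular NSD
  mmfileid gpfs0 -d :nsd_v5000_01

  # list files touching disk addresses recorded as broken
  mmfileid gpfs0 -d :BROKEN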
But that's all just guesswork. I think someone who knows exactly what
Scale does in such situations (restriping from faulty storage) should be
able to tell what's up in your system. If you don't find an answer here
I'd suggest you open a case with IBM support.
Mit freundlichen Grüßen / Kind regards
Dr. Uwe Falke
IT Specialist
Global Technology Services / Project Services Delivery / High Performance
Computing
+49 175 575 2877 Mobile
Rathausstr. 7, 09111 Chemnitz, Germany
uwefalke at de.ibm.com
IBM Services
IBM Data Privacy Statement
IBM Deutschland Business & Technology Services GmbH
Geschäftsführung: Dr. Thomas Wolter, Sven Schooss
Sitz der Gesellschaft: Ehningen
Registergericht: Amtsgericht Stuttgart, HRB 17122
From S.J.Thompson at bham.ac.uk Wed Aug 5 11:32:29 2020
From: S.J.Thompson at bham.ac.uk (Simon Thompson)
Date: Wed, 5 Aug 2020 10:32:29 +0000
Subject: [gpfsug-discuss] DSS-G support period
In-Reply-To: <415b09f3-65fb-3f77-124f-3008a47a810c@strath.ac.uk>
References: <415b09f3-65fb-3f77-124f-3008a47a810c@strath.ac.uk>
Message-ID:
3.0 isn't supported on first gen DSS-G servers (x3650 M5), so if you have those you'd hope that 2.x would continue to be supported.
We're just looking to upgrade ours to 3.0 (and no, you don't need to upgrade the xCAT server to RHEL 8 for that).
Simon
From jonathan.buzzard at strath.ac.uk Wed Aug 5 12:03:05 2020
From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard)
Date: Wed, 5 Aug 2020 12:03:05 +0100
Subject: [gpfsug-discuss] DSS-G support period
In-Reply-To:
References: <415b09f3-65fb-3f77-124f-3008a47a810c@strath.ac.uk>
Message-ID: <21a76584-c199-8972-92c0-ebb54edab961@strath.ac.uk>
On 05/08/2020 11:32, Simon Thompson wrote:
> 3.0 isn't supported on first gen DSS-G servers (x3650m5) so if you
> have those, you'd hope that it would continue to be supported.
Or it's now old hardware and out of support. Here, I have some shiny new
storage to sell you :-)
>
> We're just looking to upgrade ours to 3.0 (and no, you don't need to
> upgrade xcat server to 8.0 for that).
That's not what the upgrade documents I am reading say. I have open a PDF
named DSS-G-3.0b.Upgrade_Procedure.pdf, downloaded from Lenovo.
If you navigate to section 2.4 on page 10 it says
Since xCAT is required for DSS-G deployment, the xCAT server must be
installed and the xCAT management software configured. For existing
xCAT servers, it is recommended to update the OS and xCAT software
to match the current LeSI levels; see section 2.3. The software
components needed for DSS-G deployment must then be copied and
unpacked onto the xCAT server.
So it says to update the xCAT server OS and the xCAT software. Right, let's
see what section 2.3 on page nine says about versions. It talks about
RHEL 8.1 and a very specific version of xCAT to be downloaded from
Lenovo. No mention of RHEL7 whatsoever.
Consequently, as I read it, DSS-G 3.0 requires upgrading the xCAT server
OS to RHEL 8.1. Or at the very least it is "recommended", which in my
experience translates to "argh, sorry, unsupported *click*" should you need
to raise a support call. So not upgrading is in effect not an option.
I would note that all previous upgrade documentation in the 2.x series
that I have read has said that the xCAT server OS should be upgraded to
match that of the OS you are about to deploy to the DSS-G servers. So
matching the xCAT OS to the DSS-G OS is not something new.
Of course previously this was just a "yum update" so not really a big
deal. On the other hand switching to RHEL8 is a big deal :-(
JAB.
--
Jonathan A. Buzzard Tel: +44141-5483420
HPC System Administrator, ARCHIE-WeSt.
University of Strathclyde, John Anderson Building, Glasgow. G4 0NG
From S.J.Thompson at bham.ac.uk Wed Aug 5 12:20:39 2020
From: S.J.Thompson at bham.ac.uk (Simon Thompson)
Date: Wed, 5 Aug 2020 11:20:39 +0000
Subject: [gpfsug-discuss] DSS-G support period
In-Reply-To: <21a76584-c199-8972-92c0-ebb54edab961@strath.ac.uk>
References: <415b09f3-65fb-3f77-124f-3008a47a810c@strath.ac.uk>
<21a76584-c199-8972-92c0-ebb54edab961@strath.ac.uk>
Message-ID: <1C606946-1B80-4D2C-840C-97A4926454DF@bham.ac.uk>
> it is recommended to update
Recommended. I said you don't *need* to. Of course YMMV, and sure, if you think you will have issues with support then don't do it.
In terms of LeSI levels, these are published on the Lenovo website (https://support.lenovo.com/us/en/solutions/HT510136). In fact we just deployed a new DSS-G system (3.0a) using our xCAT servers, which are most definitely running CentOS7 (and not the Lenovo fork of xCAT). Of course we did validate the install process on some hardware, but that was mostly to test our automated integration and update processes.
LeSI is, however, a complicated thing ... and I recommend you speak to Lenovo support.
Simon
From jonathan.buzzard at strath.ac.uk Thu Aug 6 10:54:09 2020
From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard)
Date: Thu, 6 Aug 2020 10:54:09 +0100
Subject: [gpfsug-discuss] mmbackup
Message-ID: <87a8ec2b-4657-4b72-ac6f-385b40f6b6a7@strath.ac.uk>
I upgraded the TSM client from 8.1.3 to 8.1.10 last week for security
reasons. It now appears that my scheduled mmbackup is not running, and
trying it by hand gives
[root at tsm ~]# mmbackup dssgfs -s /opt/mmbackup
--------------------------------------------------------
mmbackup: Backup of /gpfs begins at Thu Aug 6 10:53:26 BST 2020.
--------------------------------------------------------
/usr/lpp/mmfs/bin/mmbackup: line 1427: -: more tokens expected
What gives?
JAB.
--
Jonathan A. Buzzard Tel: +44141-5483420
HPC System Administrator, ARCHIE-WeSt.
University of Strathclyde, John Anderson Building, Glasgow. G4 0NG
From stockf at us.ibm.com Thu Aug 6 12:15:33 2020
From: stockf at us.ibm.com (Frederick Stock)
Date: Thu, 6 Aug 2020 11:15:33 +0000
Subject: [gpfsug-discuss] mmbackup
In-Reply-To: <87a8ec2b-4657-4b72-ac6f-385b40f6b6a7@strath.ac.uk>
References: <87a8ec2b-4657-4b72-ac6f-385b40f6b6a7@strath.ac.uk>
Message-ID:
An HTML attachment was scrubbed...
URL:
From scale at us.ibm.com Thu Aug 6 14:35:30 2020
From: scale at us.ibm.com (IBM Spectrum Scale)
Date: Thu, 6 Aug 2020 09:35:30 -0400
Subject: [gpfsug-discuss] mmbackup
In-Reply-To:
References: <87a8ec2b-4657-4b72-ac6f-385b40f6b6a7@strath.ac.uk>
Message-ID:
This has been fixed in Spectrum Scale 4.2.3.20, 5.0.4.2, and 5.0.5.0.
Regards, The Spectrum Scale (GPFS) team
------------------------------------------------------------------------------------------------------------------
If you feel that your question can benefit other users of Spectrum Scale
(GPFS), then please post it to the public IBM developerWorks Forum at
https://www.ibm.com/developerworks/community/forums/html/forum?id=11111111-0000-0000-0000-000000000479.
If your query concerns a potential software error in Spectrum Scale (GPFS)
and you have an IBM software maintenance contract please contact
1-800-237-5511 in the United States or your local IBM Service Center in
other countries.
The forum is informally monitored as time permits and should not be used
for priority messages to the Spectrum Scale (GPFS) team.
From: "Frederick Stock"
To: gpfsug-discuss at spectrumscale.org
Cc: gpfsug-discuss at spectrumscale.org
Date: 08/06/2020 07:14 AM
Subject: [EXTERNAL] Re: [gpfsug-discuss] mmbackup
Sent by: gpfsug-discuss-bounces at spectrumscale.org
Jonathan could you please provide the version of Scale you are running, and
if possible what line 1427 is in the version of mmbackup.sh that is on the
node where the problem occurred?
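For example, a quick way to gather both (a sketch; the script path is the
one from the error message above):

  # show the installed Scale level
  mmdiag --version

  # show line 1427 of the mmbackup script
  sed -n '1427p' /usr/lpp/mmfs/bin/mmbackup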
Fred
__________________________________________________
Fred Stock | IBM Pittsburgh Lab | 720-430-8821
stockf at us.ibm.com
From S.J.Thompson at bham.ac.uk Fri Aug 7 14:07:08 2020
From: S.J.Thompson at bham.ac.uk (Simon Thompson)
Date: Fri, 7 Aug 2020 13:07:08 +0000
Subject: [gpfsug-discuss] Mmvdisk and mmchdisk
Message-ID: <4E83FB23-359E-4524-BC4F-11C2D5B9252F@bham.ac.uk>
I have a question about mmvdisk and mmchdisk.
We have some data on a bunch of vdisks which were migrated from legacy commands to mmvdisk management.
We need to move the data off those vdisks to another storage system.
Our plan was:
Create a new pool with the vdisks on the new storage system
Change the default placement policy to point to the new pool
Use a MIGRATE policy to move filesets over to the new vdisks in the new pool
How do we then go about stopping the old vdisks and checking the data is all off them? Is mmchdisk safe to use with vdisks, or is there some equivalent mmvdisk command we should be using?
I'm thinking maybe what we do is add a temporary vdisk on the new system in the same pool as the older one, though with a different failure group, and then empty the disks in classical style (sketched below) before deleting them.
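A sketch of that classical drain, with placeholder names throughout (file
system gpfs0, pools 'oldpool'/'newpool', NSDs nsd_old1/nsd_old2); verify
against the mmvdisk documentation before relying on it:

  # policy rule, run via mmapplypolicy, to move everything out of the old
  # pool (add FOR FILESET('name') to restrict it to particular filesets)
  RULE 'drain' MIGRATE FROM POOL 'oldpool' TO POOL 'newpool'

  # stop new allocations on the old disks, then migrate remaining data off
  mmchdisk gpfs0 suspend -d "nsd_old1;nsd_old2"
  mmrestripefs gpfs0 -m

  # confirm the old disks show no data in use before deleting them
  mmdf gpfs0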
Why? (before someone asks) The older system is a hybrid SSD+HDD model and we want to add shelves to it. And online expansion isn't supported/requires recabling as well. So we move all the data to the new system, and then we want to *selectively* move data back to the older one - not all of it though - hence the new pools.
I'm also assuming we can remove/delete vdisks from a vdiskset in specific recovery groups. The migration from legacy mode looks to have bunched disks across different RGs into the same vdiskset even though they have different failure groups applied to them.
Thanks
Simon
From jonathan.buzzard at strath.ac.uk Fri Aug 14 10:29:49 2020
From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard)
Date: Fri, 14 Aug 2020 10:29:49 +0100
Subject: [gpfsug-discuss] mmbackup
In-Reply-To:
References: <87a8ec2b-4657-4b72-ac6f-385b40f6b6a7@strath.ac.uk>
Message-ID:
On 06/08/2020 14:35, IBM Spectrum Scale wrote:
> This has been fixed in Spectrum Scale 4.2.3.20, 5.0.4.2, and 5.0.5.0.
>
> Regards, The Spectrum Scale (GPFS) team
Thanks, that explains my issue.
I am running DSS-G, and the latest DSS-G release only ships the 5.0.4.3-2
version of GPFS. However, I seem to have access to the Spectrum Scale
Data Access Edition from the web portal too, where I can download 5.0.5.1.
The question therefore is: am I entitled to download that version, so I
can upgrade my backup node to RHEL 7.8 while I am at it, or do I have to
stick with the versions in the DSS-G tarballs?
JAB.
--
Jonathan A. Buzzard Tel: +44141-5483420
HPC System Administrator, ARCHIE-WeSt.
University of Strathclyde, John Anderson Building, Glasgow. G4 0NG
From S.J.Thompson at bham.ac.uk Sat Aug 15 19:24:37 2020
From: S.J.Thompson at bham.ac.uk (Simon Thompson)
Date: Sat, 15 Aug 2020 18:24:37 +0000
Subject: [gpfsug-discuss] mmbackup
In-Reply-To:
References: <87a8ec2b-4657-4b72-ac6f-385b40f6b6a7@strath.ac.uk>
Message-ID: <5E58061E-DB6B-4526-8371-ADCE8E095A6E@bham.ac.uk>
When you "web portal" it's not clear if you refer to fix central or the commercial.lenovo.com site, the client binaries are a separate set of downloads to the DSS-G bundle for the servers, from where you should be able to download 5.0.5.1 (at least I can see that there). Provided your DSS-G is under entitlement, my understanding is that you are entitled to download the client bundle supplied by Lenovo.
Simon
From sandeep.patil at in.ibm.com Mon Aug 17 06:51:41 2020
From: sandeep.patil at in.ibm.com (Sandeep Ramesh)
Date: Mon, 17 Aug 2020 05:51:41 +0000
Subject: [gpfsug-discuss] Latest Technical Blogs/Papers on IBM Spectrum
Scale (Q2 2020)
In-Reply-To:
References:
Message-ID:
Dear User Group Members,
In continuation of this email thread, here is a list of development
blogs/Redpapers from the past quarter. We now have over 100 developer blogs
on Spectrum Scale/ESS. As discussed in the User Groups, I am passing it
along to this list.
What's New in Spectrum Scale 5.0.5?
https://community.ibm.com/community/user/storage/blogs/ismael-solis-moreno1/2020/07/06/whats-new-in-spectrum-scale-505
Implementation Guide for IBM Elastic Storage System 3000
http://www.redbooks.ibm.com/abstracts/sg248443.html?Open
Spectrum Scale File Audit Logging (FAL) and Watch Folder (WF) Document and
Demo
https://developer.ibm.com/storage/2020/05/27/spectrum-scale-file-audit-logging-fal-and-watch-folderwf-document-and-demo/
IBM Spectrum Scale with IBM QRadar - Internal Threat Detection (5 mins
Demo)
https://www.youtube.com/watch?v=Zyw84dvoFR8&t=1s
IBM Spectrum Scale Information Lifecycle Management Policies - Practical
guide
https://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP102642
Example: https://github.com/nhaustein/spectrum-scale-policy-scripts
IBM Spectrum Scale configuration for sudo based administration on a defined
set of administrative nodes.
https://developer.ibm.com/storage/2020/07/27/ibm-spectrum-scale-configuration-for-sudo-based-administration-on-defined-set-of-administrative-nodes/
IBM Spectrum Scale Erasure Code Edition in Stretched Cluster
https://developer.ibm.com/storage/2020/07/10/ibm-spectrum-scale-erasure-code-edition-in-streched-cluster/
IBM Spectrum Scale installation toolkit - extended FQDN enhancement over
releases - 5.0.5.0
https://developer.ibm.com/storage/2020/06/12/ibm-spectrum-scale-installation-toolkit-extended-fqdn-enhancement-over-releases-5-0-5-0/
IBM Spectrum Scale Security Posture with Kibana for Visualization
https://developer.ibm.com/storage/2020/05/22/ibm-spectrum-scale-security-posture-with-kibana-for-visualization/
How to Visualize IBM Spectrum Scale Security Posture on Canvas
https://developer.ibm.com/storage/2020/05/22/how-to-visualize-ibm-spectrum-scale-security-posture-on-canvas/
How to add a Linux machine as an Active Directory client to access IBM
Spectrum Scale?
https://developer.ibm.com/storage/2020/04/29/how-to-add-linux-machine-as-active-directory-client-to-access-ibm-spectrum-scale/
Enabling Kerberos Authentication in IBM Spectrum Scale HDFS Transparency
without Ambari
https://developer.ibm.com/storage/2020/04/17/enabling-kerberos-authentication-in-ibm-spectrum-scale-hdfs-transparency-without-ambari/
Configuring Spectrum Scale File Systems for Reliability
https://developer.ibm.com/storage/2020/04/08/configuring-spectrum-scale-file-systems-for-reliability/
Spectrum Scale Tuning for Large Linux Clusters
https://developer.ibm.com/storage/2020/04/03/spectrum-scale-tuning-for-large-linux-clusters/
Spectrum Scale Tuning for Power Architecture
https://developer.ibm.com/storage/2020/03/30/spectrum-scale-tuning-for-power-architecture/
Spectrum Scale operating system and network tuning
https://developer.ibm.com/storage/2020/03/27/spectrum-scale-operating-system-and-network-tuning/
How to have granular and selective secure data at rest and in motion for
workloads
https://developer.ibm.com/storage/2020/03/24/how-to-have-granular-and-selective-secure-data-at-rest-and-in-motion-for-workloads/
Multiprotocol File Sharing on IBM Spectrum Scale without an AD or LDAP
server
https://www.ibm.com/downloads/cas/AN9BR9NJ
Securing Data on Threat Detection Using IBM Spectrum Scale and IBM QRadar:
An Enhanced Cyber Resiliency Solution
http://www.redbooks.ibm.com/abstracts/redp5560.html?Open
For more: search/browse here: https://developer.ibm.com/storage/blog
User Group Presentations:
https://www.spectrumscale.org/presentations/
From: Sandeep Ramesh/India/IBM
To: gpfsug-discuss at spectrumscale.org
Date: 03/17/2020 01:37 PM
Subject: Re: Latest Technical Blogs/Papers on IBM Spectrum Scale
(Q3 2019 - Q1 2020)
Dear User Group Members,
In continuation of this email thread, here is a list of development
blogs/Redpapers from the past two quarters. We now have over 100 developer
blogs on Spectrum Scale/ESS. As discussed in the User Groups, I am passing
it along to this list.
Redpaper
HIPAA Compliance for Healthcare Workloads on IBM Spectrum Scale
http://www.redbooks.ibm.com/abstracts/redp5591.html?Open
IBM Spectrum Scale CSI Driver For Container Persistent Storage
http://www.redbooks.ibm.com/redpieces/abstracts/redp5589.html?Open
Cyber Resiliency Solution for IBM Spectrum Scale , Blueprint
http://www.redbooks.ibm.com/abstracts/redp5559.html?Open
Enhanced Cyber Security with IBM Spectrum Scale and IBM QRadar
http://www.redbooks.ibm.com/abstracts/redp5560.html?Open
Monitoring and Managing the IBM Elastic Storage Server Using the GUI
http://www.redbooks.ibm.com/abstracts/redp5471.html?Open
IBM Hybrid Solution for Scalable Data Solutions using IBM Spectrum Scale
http://www.redbooks.ibm.com/abstracts/redp5549.html?Open
IBM Spectrum Discover: Metadata Management for Deep Insight of
Unstructured Storage
http://www.redbooks.ibm.com/abstracts/redp5550.html?Open
Monitoring and Managing IBM Spectrum Scale Using the GUI
http://www.redbooks.ibm.com/abstracts/redp5458.html?Open
IBM Reference Architecture for High Performance Data and AI in Healthcare
and Life Sciences,
http://www.redbooks.ibm.com/abstracts/redp5481.html?Open
Blogs:
Why Storage and HIPAA Compliance for AI & Analytics Workloads for
Healthcare
https://developer.ibm.com/storage/2020/03/17/why-storage-and-hipaa-compliance-for-ai-analytics-workloads-for-healthcare/
Innovation via Integration - Proactively Securing Your Unstructured Data
from Cyber Threats & Attacks
--> This was done based on your inputs (as part of the Security Survey) last
year on the need for Spectrum Scale integration with IDS.
https://developer.ibm.com/storage/2020/02/24/innovation-via-integration-proactively-securing-your-unstructured-data-from-cyber-threats-attacks/
IBM Spectrum Scale CES HDFS Transparency support
https://developer.ibm.com/storage/2020/02/03/ces-hdfs-transparency-support/
How to set up a remote cluster with IBM Spectrum Scale - steps,
limitations and troubleshooting
https://developer.ibm.com/storage/2020/01/27/how-to-set-up-a-remote-cluster-with-ibm-spectrum-scale-steps-limitations-and-troubleshooting/
How to use IBM Spectrum Scale with CSI Operator 1.0 on Openshift 4.2 -
sample usage scenario with Tensorflow deployment
https://developer.ibm.com/storage/2020/01/20/how-to-use-ibm-spectrum-scale-with-csi-operator-1-0-on-openshift-4-2-sample-usage-scenario-with-tensorflow-deployment/
Achieving WORM like functionality from NFS/SMB clients for data on
Spectrum Scale
https://developer.ibm.com/storage/2020/01/10/achieving-worm-like-functionality-from-nfs-smb-clients-for-data-on-spectrum-scale/
IBM Spectrum Scale CSI driver video blogs,
https://developer.ibm.com/storage/2019/12/26/ibm-spectrum-scale-csi-driver-video-blogs/
IBM Spectrum Scale CSI Driver v1.0.0 released
https://developer.ibm.com/storage/2019/12/10/ibm-spectrum-scale-csi-driver-v1-0-0-released/
Now configure IBM Spectrum Scale with Overlapping UNIXMAP ranges
https://developer.ibm.com/storage/2019/11/12/now-configure-ibm-spectrum-scale-with-overlapping-unixmap-ranges/
"mmadquery", a Powerful tool helps check AD settings from Spectrum Scale
https://developer.ibm.com/storage/2019/11/11/mmadquery-a-powerful-tool-helps-check-ad-settings-from-spectrum-scale/
Spectrum Scale Data Security Modes,
https://developer.ibm.com/storage/2019/10/31/spectrum-scale-data-security-modes/
IBM Spectrum Scale for Linux on IBM Z - What's new in IBM Spectrum Scale
5.0.4?
https://developer.ibm.com/storage/2019/10/25/ibm-spectrum-scale-for-linux-on-ibm-z-whats-new-in-ibm-spectrum-scale-5-0-4/
IBM Spectrum Scale installation toolkit - enhancements over releases -
5.0.4.0
https://developer.ibm.com/storage/2019/10/18/ibm-spectrum-scale-installation-toolkit-enhancements-over-releases-5-0-4-0/
IBM Spectrum Scale CSI driver beta on GitHub,
https://developer.ibm.com/storage/2019/09/26/ibm-spectrum-scale-csi-driver-on-github/
Help Article: Care to be taken when configuring AD with RFC2307
https://developer.ibm.com/storage/2019/09/18/help-article-care-to-be-taken-when-configuring-ad-with-rfc2307/
IBM Spectrum Scale Erasure Code Edition (ECE): Installation Demonstration
https://developer.ibm.com/storage/2019/09/10/ibm-spectrum-scale-erasure-code-edition-ece-installation-demonstration/
For more: search/browse here: https://developer.ibm.com/storage/blog
User Group Presentations:
https://www.spectrumscale.org/presentations/
From: Sandeep Ramesh/India/IBM
To: gpfsug-discuss at spectrumscale.org
Date: 09/03/2019 10:58 AM
Subject: Latest Technical Blogs on IBM Spectrum Scale (Q2 2019)
Dear User Group Members,
In continuation, here are list of development blogs in the this quarter
(Q2 2019). We now have over 100+ developer blogs on Spectrum Scale/ESS. As
discussed in User Groups, passing it along to the emailing list.
Redpaper: IBM Power Systems Enterprise AI Solutions (w/ Spectrum Scale)
http://www.redbooks.ibm.com/redpieces/abstracts/redp5556.html?Open
IBM Spectrum Scale Erasure Code Edition (ECE): Installation Demonstration
https://www.youtube.com/watch?v=6If50EvgP-U
Blogs:
Using IBM Spectrum Scale as platform storage for running containerized
Hadoop/Spark workloads
https://developer.ibm.com/storage/2019/08/27/using-ibm-spectrum-scale-as-platform-storage-for-running-containerized-hadoop-spark-workloads/
Useful Tools for Spectrum Scale CES NFS
https://developer.ibm.com/storage/2019/07/22/useful-tools-for-spectrum-scale-ces-nfs/
How to ensure NFS uses strong encryption algorithms for secure data in
motion?
https://developer.ibm.com/storage/2019/07/19/how-to-ensure-nfs-uses-strong-encryption-algorithms-for-secure-data-in-motion/
Introducing IBM Spectrum Scale Erasure Code Edition
https://developer.ibm.com/storage/2019/07/07/introducing-ibm-spectrum-scale-erasure-code-edition/
Spectrum Scale: Which Filesystem Encryption Algo to Consider?
https://developer.ibm.com/storage/2019/07/01/spectrum-scale-which-filesystem-encryption-algo-to-consider/
IBM Spectrum Scale HDFS Transparency Apache Hadoop 3.1.x Support
https://developer.ibm.com/storage/2019/06/24/ibm-spectrum-scale-hdfs-transparency-apache-hadoop-3-0-x-support/
Enhanced features in Elastic Storage Server (ESS) 5.3.4
https://developer.ibm.com/storage/2019/06/19/enhanced-features-in-elastic-storage-server-ess-5-3-4/
Upgrading IBM Spectrum Scale Erasure Code Edition using installation
toolkit
https://developer.ibm.com/storage/2019/06/09/upgrading-ibm-spectrum-scale-erasure-code-edition-using-installation-toolkit/
Upgrading IBM Spectrum Scale sync replication / stretch cluster setup in
PureApp
https://developer.ibm.com/storage/2019/06/06/upgrading-ibm-spectrum-scale-sync-replication-stretch-cluster-setup/
GPFS config remote access with multiple network definitions
https://developer.ibm.com/storage/2019/05/30/gpfs-config-remote-access-with-multiple-network-definitions/
IBM Spectrum Scale Erasure Code Edition Fault Tolerance
https://developer.ibm.com/storage/2019/05/30/ibm-spectrum-scale-erasure-code-edition-fault-tolerance/
IBM Spectrum Scale for Linux on IBM Z - What's new in IBM Spectrum Scale
5.0.3?
https://developer.ibm.com/storage/2019/05/02/ibm-spectrum-scale-for-linux-on-ibm-z-whats-new-in-ibm-spectrum-scale-5-0-3/
Understanding and Solving WBC_ERR_DOMAIN_NOT_FOUND error with
Spectrum Scale
https://crk10.wordpress.com/2019/07/21/solving-the-wbc-err-domain-not-found-nt-status-none-mapped-glitch-in-ibm-spectrum-scale/
Understanding and Solving NT_STATUS_INVALID_SID issue for SMB access with
Spectrum Scale
https://crk10.wordpress.com/2019/07/24/solving-nt_status_invalid_sid-for-smb-share-access-in-ibm-spectrum-scale/
mmadquery primer (apparatus to query Active Directory from IBM
Spectrum Scale)
https://crk10.wordpress.com/2019/07/27/mmadquery-primer-apparatus-to-query-active-directory-from-ibm-spectrum-scale/
How to configure RHEL host as Active Directory Client using SSSD
https://crk10.wordpress.com/2019/07/28/configure-rhel-machine-as-active-directory-client-using-sssd/
How to configure RHEL host as LDAP client using nslcd
https://crk10.wordpress.com/2019/07/28/configure-rhel-machine-as-ldap-client-using-nslcd/
Solving NFSv4 AUTH_SYS nobody ownership issue
https://crk10.wordpress.com/2019/07/29/nfsv4-auth_sys-nobody-ownership-and-idmapd/
For more: search/browse here: https://developer.ibm.com/storage/blog
User Group Presentations:
https://www.spectrumscale.org/presentations/
Consolidation list of all blogs and collaterals.
https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20(GPFS)/page/Blogs%2C%20White%20Papers%20%26%20Media
From: Sandeep Ramesh/India/IBM
To: gpfsug-discuss at spectrumscale.org
Date: 04/29/2019 12:12 PM
Subject: Latest Technical Blogs on IBM Spectrum Scale (Q1 2019)
Dear User Group Members,
In continuation, here are list of development blogs in the this quarter
(Q1 2019). We now have over 100+ developer blogs on Spectrum Scale/ESS. As
discussed in User Groups, passing it along to the emailing list.
Spectrum Scale 5.0.3
https://developer.ibm.com/storage/2019/04/24/spectrum-scale-5-0-3/
IBM Spectrum Scale HDFS Transparency Ranger Support
https://developer.ibm.com/storage/2019/04/01/ibm-spectrum-scale-hdfs-transparency-ranger-support/
Integration of IBM Aspera Sync with IBM Spectrum Scale: Protecting and
Sharing Files Globally,
http://www.redbooks.ibm.com/abstracts/redp5527.html?Open
Spectrum Scale user group in Singapore, 2019
https://developer.ibm.com/storage/2019/03/14/spectrum-scale-user-group-in-singapore-2019/
7 traits to use Spectrum Scale to run container workload
https://developer.ibm.com/storage/2019/02/26/7-traits-to-use-spectrum-scale-to-run-container-workload/
Health Monitoring of IBM Spectrum Scale Cluster via External Monitoring
Framework
https://developer.ibm.com/storage/2019/01/22/health-monitoring-of-ibm-spectrum-scale-cluster-via-external-monitoring-framework/
Migrating data from native HDFS to IBM Spectrum Scale based shared storage
https://developer.ibm.com/storage/2019/01/18/migrating-data-from-native-hdfs-to-ibm-spectrum-scale-based-shared-storage/
Bulk File Creation useful for Test on Filesystems
https://developer.ibm.com/storage/2019/01/16/bulk-file-creation-useful-for-test-on-filesystems/
For more: search/browse here: https://developer.ibm.com/storage/blog
User Group Presentations:
https://www.spectrumscale.org/presentations/
Consolidation list:
https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20(GPFS)/page/Blogs%2C%20White%20Papers%20%26%20Media
From: Sandeep Ramesh/India/IBM
To: gpfsug-discuss at spectrumscale.org
Date: 01/14/2019 06:24 PM
Subject: Latest Technical Blogs on IBM Spectrum Scale (Q4 2018)
Dear User Group Members,
In continuation, here are list of development blogs in the this quarter
(Q4 2018). We now have over 100+ developer blogs on Spectrum Scale/ESS. As
discussed in User Groups, passing it along to the emailing list.
Redpaper: IBM Spectrum Scale and IBM StoredIQ: Identifying and securing
your business data to support regulatory requirements
http://www.redbooks.ibm.com/abstracts/redp5525.html?Open
IBM Spectrum Scale Memory Usage
https://www.slideshare.net/tomerperry/ibm-spectrum-scale-memory-usage?qid=50a1dfda-3102-484f-b9d0-14b69fc4800b&v=&b=&from_search=2
Spectrum Scale and Containers
https://developer.ibm.com/storage/2018/12/20/spectrum-scale-and-containers/
IBM Elastic Storage Server Performance Graphical Visualization with
Grafana
https://developer.ibm.com/storage/2018/12/18/ibm-elastic-storage-server-performance-graphical-visualization-with-grafana/
Hadoop Performance for disaggregated compute and storage configurations
based on IBM Spectrum Scale Storage
https://developer.ibm.com/storage/2018/12/13/hadoop-performance-for-disaggregated-compute-and-storage-configurations-based-on-ibm-spectrum-scale-storage/
EMS HA in ESS LE (Little Endian) environment
https://developer.ibm.com/storage/2018/12/07/ems-ha-in-ess-le-little-endian-environment/
What's new in ESS 5.3.2
https://developer.ibm.com/storage/2018/12/04/whats-new-in-ess-5-3-2/
Administer your Spectrum Scale cluster easily
https://developer.ibm.com/storage/2018/11/13/administer-your-spectrum-scale-cluster-easily/
Disaster Recovery using Spectrum Scale's Active File Management
https://developer.ibm.com/storage/2018/11/13/disaster-recovery-using-spectrum-scales-active-file-management/
Recovery Group Failover Procedure of IBM Elastic Storage Server (ESS)
https://developer.ibm.com/storage/2018/10/08/recovery-group-failover-procedure-ibm-elastic-storage-server-ess/
What's new in IBM Elastic Storage Server (ESS) Version 5.3.1 and 5.3.1.1
https://developer.ibm.com/storage/2018/10/04/whats-new-ibm-elastic-storage-server-ess-version-5-3-1-5-3-1-1/
For more: search/browse here: https://developer.ibm.com/storage/blog
User Group Presentations:
https://www.spectrumscale.org/presentations/
Consolidation list:
https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20(GPFS)/page/Blogs%2C%20White%20Papers%20%26%20Media
From: Sandeep Ramesh/India/IBM
To: gpfsug-discuss at spectrumscale.org
Date: 10/03/2018 08:48 PM
Subject: Latest Technical Blogs on IBM Spectrum Scale (Q3 2018)
Dear User Group Members,
In continuation, here are list of development blogs in the this quarter
(Q3 2018). We now have over 100+ developer blogs on Spectrum Scale/ESS. As
discussed in User Groups, passing it along to the emailing list.
How NFS exports became more dynamic with Spectrum Scale 5.0.2
https://developer.ibm.com/storage/2018/10/02/nfs-exports-became-dynamic-spectrum-scale-5-0-2/
HPC storage on AWS (IBM Spectrum Scale)
https://developer.ibm.com/storage/2018/10/02/hpc-storage-aws-ibm-spectrum-scale/
Upgrade with Excluding the node(s) using Install-toolkit
https://developer.ibm.com/storage/2018/09/30/upgrade-excluding-nodes-using-install-toolkit/
Offline upgrade using Install-toolkit
https://developer.ibm.com/storage/2018/09/30/offline-upgrade-using-install-toolkit/
IBM Spectrum Scale for Linux on IBM Z - What's new in IBM Spectrum Scale
5.0.2?
https://developer.ibm.com/storage/2018/09/21/ibm-spectrum-scale-for-linux-on-ibm-z-whats-new-in-ibm-spectrum-scale-5-0-2/
What's New in IBM Spectrum Scale 5.0.2?
https://developer.ibm.com/storage/2018/09/15/whats-new-ibm-spectrum-scale-5-0-2/
Starting with the IBM Spectrum Scale 5.0.2 release, the installation toolkit
supports upgrade rerun if a fresh upgrade fails.
https://developer.ibm.com/storage/2018/09/15/starting-ibm-spectrum-scale-5-0-2-release-installation-toolkit-supports-upgrade-rerun-fresh-upgrade-fails/
IBM Spectrum Scale installation toolkit - enhancements over releases -
5.0.2.0
https://developer.ibm.com/storage/2018/09/15/ibm-spectrum-scale-installation-toolkit-enhancements-releases-5-0-2-0/
Announcing HDP 3.0 support with IBM Spectrum Scale
https://developer.ibm.com/storage/2018/08/31/announcing-hdp-3-0-support-ibm-spectrum-scale/
IBM Spectrum Scale Tuning Overview for Hadoop Workload
https://developer.ibm.com/storage/2018/08/20/ibm-spectrum-scale-tuning-overview-hadoop-workload/
Making the Most of Multicloud Storage
https://developer.ibm.com/storage/2018/08/13/making-multicloud-storage/
Disaster Recovery for Transparent Cloud Tiering using SOBAR
https://developer.ibm.com/storage/2018/08/13/disaster-recovery-transparent-cloud-tiering-using-sobar/
Your Optimal Choice of AI Storage for Today and Tomorrow
https://developer.ibm.com/storage/2018/08/10/spectrum-scale-ai-workloads/
Analyze IBM Spectrum Scale File Access Audit with ELK Stack
https://developer.ibm.com/storage/2018/07/30/analyze-ibm-spectrum-scale-file-access-audit-elk-stack/
Mellanox SX1710 40G switch MLAG configuration for IBM ESS
https://developer.ibm.com/storage/2018/07/12/mellanox-sx1710-40g-switcher-mlag-configuration/
Protocol Problem Determination Guide for IBM Spectrum Scale - SMB and NFS
Access issues
https://developer.ibm.com/storage/2018/07/10/protocol-problem-determination-guide-ibm-spectrum-scale-smb-nfs-access-issues/
Access Control in IBM Spectrum Scale Object
https://developer.ibm.com/storage/2018/07/06/access-control-ibm-spectrum-scale-object/
IBM Spectrum Scale HDFS Transparency Docker support
https://developer.ibm.com/storage/2018/07/06/ibm-spectrum-scale-hdfs-transparency-docker-support/
Protocol Problem Determination Guide for IBM Spectrum Scale - Log
Collection
https://developer.ibm.com/storage/2018/07/04/protocol-problem-determination-guide-ibm-spectrum-scale-log-collection/
Redpapers
IBM Spectrum Scale Immutability Introduction, Configuration Guidance,
and Use Cases
http://www.redbooks.ibm.com/abstracts/redp5507.html?Open
Certifications
Assessment of the immutability function of IBM Spectrum Scale Version 5.0
in accordance with US SEC17a-4f, EU GDPR Article 21 Section 1, and German
and Swiss laws and regulations, in collaboration with KPMG.
Certificate:
http://www.kpmg.de/bescheinigungen/RequestReport.aspx?DE968667B47544FF83F6CCDCF37E5FB5
Full assessment report:
http://www.kpmg.de/bescheinigungen/RequestReport.aspx?B290411BE1224F5A9B4D24663BCD3C5D
For more: search/browse here: https://developer.ibm.com/storage/blog
Consolidation list:
https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20(GPFS)/page/White%20Papers%20%26%20Media
From: Sandeep Ramesh/India/IBM
To: gpfsug-discuss at spectrumscale.org
Date: 07/03/2018 12:13 AM
Subject: Re: Latest Technical Blogs on Spectrum Scale (Q2 2018)
Dear User Group Members,
In continuation , here are list of development blogs in the this quarter
(Q2 2018). We now have over 100+ developer blogs. As discussed in User
Groups, passing it along:
IBM Spectrum Scale 5.0.1 - What's new in Unified File and Object
https://developer.ibm.com/storage/2018/06/15/6494/
IBM Spectrum Scale ILM Policies
https://developer.ibm.com/storage/2018/06/02/ibm-spectrum-scale-ilm-policies/
Management GUI enhancements in IBM Spectrum Scale release 5.0.1
https://developer.ibm.com/storage/2018/05/18/management-gui-enhancements-in-ibm-spectrum-scale-release-5-0-1/
Managing IBM Spectrum Scale services through GUI
https://developer.ibm.com/storage/2018/05/18/managing-ibm-spectrum-scale-services-through-gui/
Use AWS CLI with IBM Spectrum Scale object storage
https://developer.ibm.com/storage/2018/05/16/use-awscli-with-ibm-spectrum-scale-object-storage/
Hadoop Storage Tiering with IBM Spectrum Scale
https://developer.ibm.com/storage/2018/05/09/hadoop-storage-tiering-ibm-spectrum-scale/
How many Files on my Filesystem?
https://developer.ibm.com/storage/2018/05/07/many-files-filesystem/
Recording Spectrum Scale Object Stats for Potential Billing-like Purposes
using Elasticsearch
https://developer.ibm.com/storage/2018/05/04/spectrum-scale-object-stats-for-billing-using-elasticsearch/
New features in IBM Elastic Storage Server (ESS) Version 5.3
https://developer.ibm.com/storage/2018/04/09/new-features-ibm-elastic-storage-server-ess-version-5-3/
Using IBM Spectrum Scale for storage in IBM Cloud Private (missed in an
earlier update)
https://medium.com/ibm-cloud/ibm-spectrum-scale-with-ibm-cloud-private-8bf801796f19
Redpapers
Hortonworks Data Platform with IBM Spectrum Scale: Reference Guide for
Building an Integrated Solution
http://www.redbooks.ibm.com/redpieces/abstracts/redp5448.html
Enabling Hybrid Cloud Storage for IBM Spectrum Scale Using Transparent
Cloud Tiering
http://www.redbooks.ibm.com/abstracts/redp5411.html?Open
SAP HANA and ESS: A Winning Combination (Update)
http://www.redbooks.ibm.com/abstracts/redp5436.html?Open
Others
IBM Spectrum Scale Software Version Recommendation Preventive Service
Planning (Updated)
http://www-01.ibm.com/support/docview.wss?uid=ssg1S1009703
IDC Infobrief: A Modular Approach to Genomics Infrastructure at Scale in
HCLS
https://www.ibm.com/common/ssi/cgi-bin/ssialias?htmlfid=37016937USEN&
For more: search/browse here: https://developer.ibm.com/storage/blog
Consolidation list:
https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20(GPFS)/page/White%20Papers%20%26%20Media
From: Sandeep Ramesh/India/IBM
To: gpfsug-discuss at spectrumscale.org
Date: 03/27/2018 05:23 PM
Subject: Re: Latest Technical Blogs on Spectrum Scale
Dear User Group Members,
In continuation, here is the list of development blogs for this quarter
(Q1 2018). As discussed in the User Groups, passing it along:
GDPR Compliance and Unstructured Data Storage
https://developer.ibm.com/storage/2018/03/27/gdpr-compliance-unstructure-data-storage/
IBM Spectrum Scale for Linux on IBM Z - Release 5.0 features and
highlights
https://developer.ibm.com/storage/2018/03/09/ibm-spectrum-scale-linux-ibm-z-release-5-0-features-highlights/
Management GUI enhancements in IBM Spectrum Scale release 5.0.0
https://developer.ibm.com/storage/2018/01/18/gui-enhancements-in-spectrum-scale-release-5-0-0/
IBM Spectrum Scale 5.0.0 - What's new in NFS?
https://developer.ibm.com/storage/2018/01/18/ibm-spectrum-scale-5-0-0-whats-new-nfs/
Benefits and implementation of Spectrum Scale sudo wrappers
https://developer.ibm.com/storage/2018/01/15/benefits-implementation-spectrum-scale-sudo-wrappers/
IBM Spectrum Scale: Big Data and Analytics Solution Brief
https://developer.ibm.com/storage/2018/01/15/ibm-spectrum-scale-big-data-analytics-solution-brief/
Variant Sub-blocks in Spectrum Scale 5.0
https://developer.ibm.com/storage/2018/01/11/spectrum-scale-variant-sub-blocks/
Compression support in Spectrum Scale 5.0.0
https://developer.ibm.com/storage/2018/01/11/compression-support-spectrum-scale-5-0-0/
IBM Spectrum Scale Versus Apache Hadoop HDFS
https://developer.ibm.com/storage/2018/01/10/spectrumscale_vs_hdfs/
ESS Fault Tolerance
https://developer.ibm.com/storage/2018/01/09/ess-fault-tolerance/
Genomic Workloads - How To Get it Right From Infrastructure Point Of View.
https://developer.ibm.com/storage/2018/01/06/genomic-workloads-get-right-infrastructure-point-view/
IBM Spectrum Scale on AWS Cloud: this video explains how to deploy IBM
Spectrum Scale on AWS. The solution helps users who require highly
available access to a shared namespace across multiple instances with
good performance, without requiring in-depth knowledge of IBM Spectrum
Scale.
Detailed Demo: https://www.youtube.com/watch?v=6j5Xj_d0bh4
Brief Demo: https://www.youtube.com/watch?v=-aMQKPW_RfY
For more: search/browse here: https://developer.ibm.com/storage/blog
Consolidation list:
https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20(GPFS)/page/White%20Papers%20%26%20Media
From: Sandeep Ramesh/India/IBM
To: gpfsug-discuss at spectrumscale.org
Cc: Doris Conti/Poughkeepsie/IBM at IBMUS
Date: 01/10/2018 12:13 PM
Subject: Re: Latest Technical Blogs on Spectrum Scale
Dear User Group Members,
Here is the list of development blogs from the last quarter. Passing it to
this email group, as Doris received feedback in the UG meetings to notify
the members of the latest updates periodically.
Genomic Workloads - How To Get it Right From Infrastructure Point Of View.
https://developer.ibm.com/storage/2018/01/06/genomic-workloads-get-right-infrastructure-point-view/
IBM Spectrum Scale Versus Apache Hadoop HDFS
https://developer.ibm.com/storage/2018/01/10/spectrumscale_vs_hdfs/
ESS Fault Tolerance
https://developer.ibm.com/storage/2018/01/09/ess-fault-tolerance/
IBM Spectrum Scale MMFSCK - Savvy Enhancements
https://developer.ibm.com/storage/2018/01/05/ibm-spectrum-scale-mmfsck-savvy-enhancements/
ESS Disk Management
https://developer.ibm.com/storage/2018/01/02/ess-disk-management/
IBM Spectrum Scale Object Protocol On Ubuntu
https://developer.ibm.com/storage/2018/01/01/ibm-spectrum-scale-object-protocol-ubuntu/
IBM Spectrum Scale 5.0 - What's new in Unified File and Object
https://developer.ibm.com/storage/2017/12/20/ibm-spectrum-scale-5-0-whats-new-object/
A Complete Guide to "Protocol Problem Determination Guide for IBM
Spectrum Scale" - Part 1
https://developer.ibm.com/storage/2017/12/19/complete-guide-protocol-problem-determination-guide-ibm-spectrum-scale-1/
IBM Spectrum Scale installation toolkit ? enhancements over releases
https://developer.ibm.com/storage/2017/12/15/ibm-spectrum-scale-installation-toolkit-enhancements-releases/
Network requirements in an Elastic Storage Server Setup
https://developer.ibm.com/storage/2017/12/13/network-requirements-in-an-elastic-storage-server-setup/
Co-resident migration with Transparent Cloud Tiering
https://developer.ibm.com/storage/2017/12/05/co-resident-migration-transparent-cloud-tierin/
IBM Spectrum Scale on Hortonworks HDP Hadoop clusters: A Complete Big
Data Solution
https://developer.ibm.com/storage/2017/12/05/ibm-spectrum-scale-hortonworks-hdp-hadoop-clusters-complete-big-data-solution/
Big data analytics with Spectrum Scale using remote cluster mount &
multi-filesystem support
https://developer.ibm.com/storage/2017/11/28/big-data-analytics-spectrum-scale-using-remote-cluster-mount-multi-filesystem-support/
IBM Spectrum Scale HDFS Transparency Short Circuit Write Support
https://developer.ibm.com/storage/2017/11/28/ibm-spectrum-scale-hdfs-transparency-short-circuit-write-support/
IBM Spectrum Scale HDFS Transparency Federation Support
https://developer.ibm.com/storage/2017/11/27/ibm-spectrum-scale-hdfs-transparency-federation-support/
How to configure and performance tuning different system workloads on IBM
Spectrum Scale Sharing Nothing Cluster
https://developer.ibm.com/storage/2017/11/27/configure-performance-tuning-different-system-workloads-ibm-spectrum-scale-sharing-nothing-cluster/
How to configure and performance tuning Spark workloads on IBM Spectrum
Scale Sharing Nothing Cluster
https://developer.ibm.com/storage/2017/11/27/configure-performance-tuning-spark-workloads-ibm-spectrum-scale-sharing-nothing-cluster/
How to configure and performance tuning database workloads on IBM Spectrum
Scale Sharing Nothing Cluster
https://developer.ibm.com/storage/2017/11/27/configure-performance-tuning-database-workloads-ibm-spectrum-scale-sharing-nothing-cluster/
How to configure and performance tuning Hadoop workloads on IBM Spectrum
Scale Sharing Nothing Cluster
https://developer.ibm.com/storage/2017/11/24/configure-performance-tuning-hadoop-workloads-ibm-spectrum-scale-sharing-nothing-cluster/
IBM Spectrum Scale Sharing Nothing Cluster Performance Tuning
https://developer.ibm.com/storage/2017/11/24/ibm-spectrum-scale-sharing-nothing-cluster-performance-tuning/
How to Configure IBM Spectrum Scale with NIS-based Authentication.
https://developer.ibm.com/storage/2017/11/21/configure-ibm-spectrum-scale-nis-based-authentication/
For more: search/browse here: https://developer.ibm.com/storage/blog
Consolidation list:
https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20(GPFS)/page/White%20Papers%20%26%20Media
From: Sandeep Ramesh/India/IBM
To: gpfsug-discuss at spectrumscale.org
Cc: Doris Conti/Poughkeepsie/IBM at IBMUS
Date: 11/16/2017 08:15 PM
Subject: Latest Technical Blogs on Spectrum Scale
Dear User Group members,
Here are the Development Blogs in last 3 months on Spectrum Scale
Technical Topics.
Spectrum Scale Monitoring - Know More ...
https://developer.ibm.com/storage/2017/11/16/spectrum-scale-monitoring-know/
IBM Spectrum Scale 5.0 Release - What's coming!
https://developer.ibm.com/storage/2017/11/14/ibm-spectrum-scale-5-0-release-whats-coming/
Four essential things to know for managing data ACLs on IBM Spectrum
Scale from Windows
https://developer.ibm.com/storage/2017/11/13/four-essentials-things-know-managing-data-acls-ibm-spectrum-scale-windows/
GSSUTILS: A new way of running SSR, Deploying or Upgrading ESS Server
https://developer.ibm.com/storage/2017/11/13/gssutils/
IBM Spectrum Scale Object Authentication
https://developer.ibm.com/storage/2017/11/02/spectrum-scale-object-authentication/
Video Surveillance - Choosing the right storage
https://developer.ibm.com/storage/2017/11/02/video-surveillance-choosing-right-storage/
IBM Spectrum scale object deep dive training with problem determination
https://www.slideshare.net/SmitaRaut/ibm-spectrum-scale-object-deep-dive-training
Spectrum Scale as preferred software defined storage for Ubuntu OpenStack
https://developer.ibm.com/storage/2017/09/29/spectrum-scale-preferred-software-defined-storage-ubuntu-openstack/
IBM Elastic Storage Server 2U24 Storage - an All-Flash offering, a
performance workhorse
https://developer.ibm.com/storage/2017/10/06/ess-5-2-flash-storage/
A Complete Guide to Configure LDAP-based authentication with IBM Spectrum
Scale for File Access
https://developer.ibm.com/storage/2017/09/21/complete-guide-configure-ldap-based-authentication-ibm-spectrum-scale-file-access/
Deploying IBM Spectrum Scale on AWS Quick Start
https://developer.ibm.com/storage/2017/09/18/deploy-ibm-spectrum-scale-on-aws-quick-start/
Monitoring Spectrum Scale Object metrics
https://developer.ibm.com/storage/2017/09/14/monitoring-spectrum-scale-object-metrics/
Tier your data with ease to Spectrum Scale Private Cloud(s) using Moonwalk
Universal
https://developer.ibm.com/storage/2017/09/14/tier-data-ease-spectrum-scale-private-clouds-using-moonwalk-universal/
Why do I see owner as "Nobody" for my export mounted using NFSv4 Protocol
on IBM Spectrum Scale?
https://developer.ibm.com/storage/2017/09/08/see-owner-nobody-export-mounted-using-nfsv4-protocol-ibm-spectrum-scale/
IBM Spectrum Scale Authentication using Active Directory and LDAP
https://developer.ibm.com/storage/2017/09/01/ibm-spectrum-scale-authentication-using-active-directory-ldap/
IBM Spectrum Scale Authentication using Active Directory and RFC2307
https://developer.ibm.com/storage/2017/09/01/ibm-spectrum-scale-authentication-using-active-directory-rfc2307/
High Availability Implementation with IBM Spectrum Virtualize and IBM
Spectrum Scale
https://developer.ibm.com/storage/2017/08/30/high-availability-implementation-ibm-spectrum-virtualize-ibm-spectrum-scale/
10 Frequently asked Questions on configuring Authentication using AD +
AUTO ID mapping on IBM Spectrum Scale.
https://developer.ibm.com/storage/2017/08/04/10-frequently-asked-questions-configuring-authentication-using-ad-auto-id-mapping-ibm-spectrum-scale/
IBM Spectrum Scale Authentication using Active Directory
https://developer.ibm.com/storage/2017/07/30/ibm-spectrum-scale-auth-using-active-directory/
Five cool things that you didn't know Transparent Cloud Tiering on
Spectrum Scale can do
https://developer.ibm.com/storage/2017/07/29/five-cool-things-didnt-know-transparent-cloud-tiering-spectrum-scale-can/
IBM Spectrum Scale GUI videos
https://developer.ibm.com/storage/2017/07/25/ibm-spectrum-scale-gui-videos/
IBM Spectrum Scale Authentication - Planning for NFS Access
https://developer.ibm.com/storage/2017/07/24/ibm-spectrum-scale-planning-nfs-access/
For more: search/browse here: https://developer.ibm.com/storage/blog
Consolidation list:
https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20(GPFS)/page/White%20Papers%20%26%20Media
From jonathan.buzzard at strath.ac.uk Mon Aug 17 10:35:41 2020
From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard)
Date: Mon, 17 Aug 2020 10:35:41 +0100
Subject: [gpfsug-discuss] mmbackup
In-Reply-To: <5E58061E-DB6B-4526-8371-ADCE8E095A6E@bham.ac.uk>
References: <87a8ec2b-4657-4b72-ac6f-385b40f6b6a7@strath.ac.uk>
<5E58061E-DB6B-4526-8371-ADCE8E095A6E@bham.ac.uk>
Message-ID: <3e91e704-2a45-06f0-facd-49f90009b7d1@strath.ac.uk>
On 15/08/2020 19:24, Simon Thompson wrote:
>
> When you "web portal" it's not clear if you refer to fix central or
> the commercial.lenovo.com site, the client binaries are a separate
> set of downloads to the DSS-G bundle for the servers, from where you
> should be able to download 5.0.5.1 (at least I can see that there).
> Provided your DSS-G is under entitlement, my understanding is that
> you are entitled to download the client bundle supplied by Lenovo.
>
I was indeed referring to the commercial.lenovo.com site.
So yes, there do seem to be separate client binaries for download; however,
the DSS-G bundles for the servers also include the client binaries.
It is, however, as clear as thick gloppy mud what you are entitled to use.
The backup node is a genuine RHEL7 machine, so it was trivial to pin it
to 7.7 and upgrade using the 5.0.4-3 RPMs that came in the 2.6b bundle.
This has at least got me out of the hole of mmbackup no longer working and
having to resort to a "dsmc incr".
However, I reached an executive decision over the weekend of "sod it": I
am upgrading to the latest RHEL 7.8 with the 5.0.5.1 GPFS client today.
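For reference, the pinning step looks roughly like this (a sketch, assuming
a subscription-manager-registered RHEL 7 node; the exact package file names
are illustrative):
# pin the node to the RHEL 7.7 minor release before updating
subscription-manager release --set=7.7
yum clean all && yum update
# install the Scale 5.0.4-3 packages shipped in the DSS-G 2.6b bundle
rpm -Uvh gpfs.base-5.0.4-3*.rpm gpfs.gpl-5.0.4-3*.rpm gpfs.docs-5.0.4-3*.rpm
# rebuild the GPFS portability layer against the running kernel
/usr/lpp/mmfs/bin/mmbuildgpl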
JAB.
--
Jonathan A. Buzzard Tel: +44141-5483420
HPC System Administrator, ARCHIE-WeSt.
University of Strathclyde, John Anderson Building, Glasgow. G4 0NG
From jroche at lenovo.com Mon Aug 17 10:53:43 2020
From: jroche at lenovo.com (Jim Roche)
Date: Mon, 17 Aug 2020 09:53:43 +0000
Subject: [gpfsug-discuss] [External] Re: mmbackup
In-Reply-To: <3e91e704-2a45-06f0-facd-49f90009b7d1@strath.ac.uk>
References: <87a8ec2b-4657-4b72-ac6f-385b40f6b6a7@strath.ac.uk>
<5E58061E-DB6B-4526-8371-ADCE8E095A6E@bham.ac.uk>
<3e91e704-2a45-06f0-facd-49f90009b7d1@strath.ac.uk>
Message-ID:
Simon is correct from that point of view. If you can see it on your commercial.lenovo.com site, then you are able to use it within your licensing rules while remaining compatible from a Spectrum Scale point of view. The DSS-G-specific tarballs (2.6a, 2.6b, 3.0, etc.) should be used exactly as-is for deploying the DSS-G, but from there normal Spectrum Scale client/server compatibility applies. The tarball defines what MUST run on the DSS-G NSD servers, but after that you are free to download and use whichever client version you are entitled to -- which should correspond to what is visible on the website to download.
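A quick way to sanity-check that compatibility rule on a live cluster, as a
sketch using standard Scale commands (run each where indicated):
# on the client: confirm the installed daemon build
/usr/lpp/mmfs/bin/mmdiag --version
# on any cluster node: confirm the minimum release level clients must meet
/usr/lpp/mmfs/bin/mmlsconfig minReleaseLevel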
Jim
Jim Roche
Head of Research Computing
University Relations Manager
Redwood, 3 Chineham Business Park, Crockford Lane Basingstoke Hampshire RG24 8WQ
Lenovo UK
+44 7702678579
jroche at lenovo.com
Lenovo.com
Twitter | Instagram | Facebook | LinkedIn | YouTube | Privacy
-----Original Message-----
From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Jonathan Buzzard
Sent: 17 August 2020 10:36
To: gpfsug-discuss at spectrumscale.org
Subject: [External] Re: [gpfsug-discuss] mmbackup
On 15/08/2020 19:24, Simon Thompson wrote:
>
> When you "web portal" it's not clear if you refer to fix central or
> the commercial.lenovo.com site, the client binaries are a separate set
> of downloads to the DSS-G bundle for the servers, from where you
> should be able to download 5.0.5.1 (at least I can see that there).
> Provided your DSS-G is under entitlement, my understanding is that you
> are entitled to download the client bundle supplied by Lenovo.
>
I was indeed referring to the commercial.lenovo.com site.
So yes, there do seem to be separate client binaries for download; however, the DSS-G bundles for the servers also include the client binaries.
It is, however, as clear as thick gloppy mud what you are entitled to use.
The backup node is a genuine RHEL7 machine, so it was trivial to pin it to 7.7 and upgrade using the 5.0.4-3 RPMs that came in the 2.6b bundle.
This has at least got me out of the hole of mmbackup no longer working and having to resort to a "dsmc incr".
However, I reached an executive decision over the weekend of "sod it": I am upgrading to the latest RHEL 7.8 with the 5.0.5.1 GPFS client today.
JAB.
--
Jonathan A. Buzzard Tel: +44141-5483420
HPC System Administrator, ARCHIE-WeSt.
University of Strathclyde, John Anderson Building, Glasgow. G4 0NG
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
From jonathan.buzzard at strath.ac.uk Mon Aug 17 11:33:16 2020
From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard)
Date: Mon, 17 Aug 2020 11:33:16 +0100
Subject: [gpfsug-discuss] [External] Re: mmbackup
In-Reply-To:
References: <87a8ec2b-4657-4b72-ac6f-385b40f6b6a7@strath.ac.uk>
<5E58061E-DB6B-4526-8371-ADCE8E095A6E@bham.ac.uk>
<3e91e704-2a45-06f0-facd-49f90009b7d1@strath.ac.uk>
Message-ID: <1e42dc21-87dc-0eef-896f-eaa4747f81b9@strath.ac.uk>
On 17/08/2020 10:53, Jim Roche wrote:
> Simon is correct from that point of view. If you can see it on your
> commercial.lenovo.com site then you are able to use it within your
> licensing rules and that you are still compatible from a Spectrum
> Scale point of view. The DSS-G specific tarballs (2.6a, 2.6b, 3.0,
> etc...) should be used exactly as is for deploying the DSS-G, but
> then normal Spectrum scale client/server compatibility follows. The
> tarball will define what MUST run on the DSS-G NSDs, but after that
> you are free to download/use whichever client version you are
> entitled to -- which should correspond to what is visible on the
> website to download.
>
That's what I guessed, but nowhere does it ever state that. Thanks for
confirming it. Email now filed away as cover-my-back in the event of
any license audit :-)
On a related note, someone at IBM needs to update the extractor tool for
the Data Access edition so that it works properly on a HiDPI display. My
understanding is that contract terms which are unreadable are not valid in
the UK, and on my Surface Book (running Linux, of course) the text is not
readable :-)
JAB.
--
Jonathan A. Buzzard Tel: +44141-5483420
HPC System Administrator, ARCHIE-WeSt.
University of Strathclyde, John Anderson Building, Glasgow. G4 0NG
From juergen.hannappel at desy.de Tue Aug 18 13:08:59 2020
From: juergen.hannappel at desy.de (Hannappel, Juergen)
Date: Tue, 18 Aug 2020 14:08:59 +0200 (CEST)
Subject: [gpfsug-discuss] Tiny cluster quorum problem
Message-ID: <1498272420.116956.1597752539022.JavaMail.zimbra@desy.de>
Hi,
on a tiny GPFS cluster with just two nodes, one node died (really dead, it cannot be switched on any more), and now I cannot remove it from the cluster.
[root at exflonc42 ~]# mmdelnode -N exflonc41
mmdelnode: Unable to obtain the GPFS configuration file lock.
mmdelnode: GPFS was unable to obtain a lock from node exflonc41.desy.de.
mmdelnode: Command failed. Examine previous error messages to determine cause.
[root at exflonc42 ~]# mmlscluster
get file failed: Not enough CCR quorum nodes available (err 809)
gpfsClusterInit: Unexpected error from ccr fget mmsdrfs. Return code: 158
mmlscluster: Command failed. Examine previous error messages to determine cause.
Is there any chance to get this cluster up and running again or should I wipe it and create a new one from the remaining node?
There is no data on this cluster; it's a remote cluster to a storage cluster and has only compute clients.
--
Dr. Jürgen Hannappel DESY/IT Tel.: +49 40 8998-4616
From janfrode at tanso.net Tue Aug 18 14:45:33 2020
From: janfrode at tanso.net (Jan-Frode Myklebust)
Date: Tue, 18 Aug 2020 15:45:33 +0200
Subject: [gpfsug-discuss] Tiny cluster quorum problem
In-Reply-To: <1498272420.116956.1597752539022.JavaMail.zimbra@desy.de>
References: <1498272420.116956.1597752539022.JavaMail.zimbra@desy.de>
Message-ID:
I would expect you should be able to get it back up using the routine at
https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.5/com.ibm.spectrum.scale.v5r05.doc/bl1adv_failsynch.htm
Maybe you just need to force-remove the quorum role from the dead node?
-jf
On Tue, Aug 18, 2020 at 2:16 PM Hannappel, Juergen <
juergen.hannappel at desy.de> wrote:
> Hi,
> on a tiny GPFS cluster with just two nodes one node died (really dead,
> cannot be switched on any more), and now I cannot remove it from the
> cluster anymore.
> [root at exflonc42 ~]# mmdelnode -N exflonc41
> mmdelnode: Unable to obtain the GPFS configuration file lock.
> mmdelnode: GPFS was unable to obtain a lock from node exflonc41.desy.de.
> mmdelnode: Command failed. Examine previous error messages to determine
> cause.
>
> [root at exflonc42 ~]# mmlscluster
> get file failed: Not enough CCR quorum nodes available (err 809)
> gpfsClusterInit: Unexpected error from ccr fget mmsdrfs. Return code: 158
> mmlscluster: Command failed. Examine previous error messages to determine
> cause.
>
> Is there any chance to get this cluster up and running again or should I
> wipe it and create a new one from the remaining node?
> There are no data on this cluster, it's a remote cluster to a storage
> cluster and has only compute clients....
> --
> Dr. Jürgen Hannappel DESY/IT Tel.: +49 40 8998-4616
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>
From juergen.hannappel at desy.de Tue Aug 18 14:54:48 2020
From: juergen.hannappel at desy.de (Hannappel, Juergen)
Date: Tue, 18 Aug 2020 15:54:48 +0200 (CEST)
Subject: [gpfsug-discuss] Tiny cluster quorum problem
In-Reply-To:
References: <1498272420.116956.1597752539022.JavaMail.zimbra@desy.de>
Message-ID: <2136332901.213828.1597758888992.JavaMail.zimbra@desy.de>
Thanks!
That helped. With --force I could change roles, expel the node, and have the "cluster" now up on the remaining node.
> From: "Jan-Frode Myklebust"
> To: "gpfsug main discussion list"
> Sent: Tuesday, 18 August, 2020 15:45:33
> Subject: Re: [gpfsug-discuss] Tiny cluster quorum problem
> I would expect you should be able to get it back up using the routine at
> https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.5/com.ibm.spectrum.scale.v5r05.doc/bl1adv_failsynch.htm
> Maybe you just need to force remove quorum-role from the dead node ?
> -jf
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
From juergen.hannappel at desy.de Tue Aug 18 15:04:56 2020
From: juergen.hannappel at desy.de (Hannappel, Juergen)
Date: Tue, 18 Aug 2020 16:04:56 +0200 (CEST)
Subject: [gpfsug-discuss] Tiny cluster quorum problem
In-Reply-To: <2136332901.213828.1597758888992.JavaMail.zimbra@desy.de>
References: <1498272420.116956.1597752539022.JavaMail.zimbra@desy.de>
<2136332901.213828.1597758888992.JavaMail.zimbra@desy.de>
Message-ID: <2001778389.221396.1597759496653.JavaMail.zimbra@desy.de>
... just for the record:
man mmchnode | grep force | wc -l
0
In the man page the --force option is not mentioned at all.
The same is true for mmdelnode:
man mmdelnode | grep force | wc -l
0
But there, the error output gives a hint that it exists:
mmdelnode: If the affected nodes are permanently down, they can be deleted with the --force option.
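Pieced together, the sequence that worked here, as a sketch (exflonc41 is
the dead node from this thread; the --force semantics follow the CCR
recovery procedure linked earlier):
# on the surviving node: strip the quorum role from the dead node...
mmchnode --noquorum -N exflonc41 --force
# ...then delete it from the cluster
mmdelnode -N exflonc41 --force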
> From: "Juergen Hannappel"
> To: "gpfsug main discussion list"
> Sent: Tuesday, 18 August, 2020 15:54:48
> Subject: Re: [gpfsug-discuss] Tiny cluster quorum problem
> Thanks!
> That helped. With --force I could change roles, expel the node, and have the
> "cluster" now up on the remaining node.
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
From chair at spectrumscale.org Wed Aug 19 14:49:41 2020
From: chair at spectrumscale.org (Simon Thompson (Spectrum Scale User Group Chair))
Date: Wed, 19 Aug 2020 14:49:41 +0100
Subject: [gpfsug-discuss] SSUG::Digital Update on File Create and MMAP
performance
Message-ID: <>
An HTML attachment was scrubbed...
URL:
From heinrich.billich at id.ethz.ch Wed Aug 19 15:14:22 2020
From: heinrich.billich at id.ethz.ch (Billich Heinrich Rainer (ID SD))
Date: Wed, 19 Aug 2020 14:14:22 +0000
Subject: [gpfsug-discuss] Tune OS for Mellanox IB/ETH HCA on Power hardware
- should I run mlnx_affinity or mlnx_tune or sysctl tuning?
Message-ID:
Hello,
We run Spectrum Scale on Power hardware - le and be - and Mellanox IB and VPI cards. We did not enable any automatic tuning at system start by the usual Mellanox scripts.
/etc/infiniband/openib.conf contains
# Run /usr/sbin/mlnx_affinity
RUN_AFFINITY_TUNER=no
# Run /usr/sbin/mlnx_tune
RUN_MLNX_TUNE=no
# Run sysctl performance tuning script
RUN_SYSCTL=no
I wonder if we should enable these scripts?
Are they of no use on ppc64 and ppc64le, or does some other script run them?
They aren't enabled on ESS systems, either. /proc/interrupts shows that the interrupts are likely attached and distributed across all cores on the closest CPU, which is good.
I googled for advice on Mellanox tuning for Power hardware but found none. I would like to get a bit more insight into this topic.
We run RHEL 7.6 and 7.7 , Scale 5.0.4 and 5.0.5 and ESS 5.3.6.1. We do the usual sysctl based tuning.
ESS 5.3.6.1 includes a new script /xcatpost/mlnx_params.sh which adds some tuning of PCI and Mellanox FW settings :
Description: Check and update PCI Express Read Request Size to 4KB and INT_LOG_MAX_PAYLOAD_SIZE to 12
So at least some tuning is done.
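One way to eyeball the interrupt distribution described above, as a sketch
(the mlx5 driver name is an assumption for ConnectX-4/5 generation cards):
# list the card's interrupt lines and their per-CPU counters
grep mlx5 /proc/interrupts
# summarise which CPUs each of those IRQs is pinned to
for irq in $(grep mlx5 /proc/interrupts | awk -F: '{print $1}'); do
    echo "IRQ $irq -> CPUs $(cat /proc/irq/$irq/smp_affinity_list)"
done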
Cheers,
Heiner
--
=======================
Heinrich Billich
ETH Zürich
Informatikdienste
Tel.: +41 44 632 72 56
heinrich.billich at id.ethz.ch
========================
From heinrich.billich at id.ethz.ch Tue Aug 25 16:13:03 2020
From: heinrich.billich at id.ethz.ch (Billich Heinrich Rainer (ID SD))
Date: Tue, 25 Aug 2020 15:13:03 +0000
Subject: [gpfsug-discuss] AFM cache rolling upgrade with minimal impact / no
directory scan
Message-ID: <4D036E55-4E70-4BE0-A2AF-2BD8B0C54184@id.ethz.ch>
Hello,
We will upgrade a pair of AFM cache nodes which serve about 40 SW filesets. I want to do a rolling upgrade, and I wonder if I can minimize the impact of the failover when filesets move to the other AFM node. I can't stop replication for the whole upgrade: the update will take too long (OS, MOFED, FW, Scale) and we want to preserve the ability to recall files (?). Mostly I want to avoid policy scans of all inodes on cache (and maybe even lookups of files on home?).
I can stop replication for a short time. Also, the queues are empty most of the time or contain just a few hundred entries. The cache filesets hold about 500M used inodes. Does a specific procedure exist, or is it good enough to just shut down Scale on the node I want to update? And maybe flush the queues first as far as possible?
If a fileset has a zero-length queue of pending transactions to home, will this avoid any policy scan when a second AFM node takes responsibility for the fileset?
Maybe I already asked this before. Unfortunately, the manual isn't as explicit as I would prefer when it talks about rolling upgrades.
Thank you,
Heiner
--
=======================
Heinrich Billich
ETH Zürich
Informatikdienste
Tel.: +41 44 632 72 56
heinrich.billich at id.ethz.ch
========================
From vpuvvada at in.ibm.com Wed Aug 26 04:04:09 2020
From: vpuvvada at in.ibm.com (Venkateswara R Puvvada)
Date: Wed, 26 Aug 2020 08:34:09 +0530
Subject: [gpfsug-discuss] AFM cache rolling upgrade with minimal impact / no
directory scan
In-Reply-To: <4D036E55-4E70-4BE0-A2AF-2BD8B0C54184@id.ethz.ch>
References: <4D036E55-4E70-4BE0-A2AF-2BD8B0C54184@id.ethz.ch>
Message-ID:
Billich,
>The cache filesets holds about 500M used inodes. Does a specific
procedure exist, or is it good enough to just shutdown scale on the node I
want to update? And maybe flush >the queues first as far as possible?
It is recommended to stop the filesets (mmafmctl <device> stop) and perform
the upgrade if the upgrade duration is short. But if the upgrade procedure
takes too long, the gateway node can be shut down; the other active gateway
node(s) then run recovery automatically for the filesets owned by the
gateway that was shut down.
>If a fileset has a zero length queue of pending transactions to home,
will this avoid any policy scan when a second afm node takes
responsibility for the fileset?
The active gateway node(s) always run recovery with a policy scan even if
the queue length was zero on the other gateway node(s), so it is possible
that recovery on multiple filesets (in this case, say, 20 filesets)
triggers at the same time, which may impact system performance. You could
limit the number of parallel recoveries using the afmMaxParallelRecoveries
option. For example, set mmchconfig afmMaxParallelRecoveries=5 -i (the
default of 0 means recovery runs on all filesets in parallel), and reset it
to the default later.
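A sketch of both variants (the device name fs1 and fileset name sw_fileset1
are placeholders):
# short window: stop replication for the fileset, upgrade, then restart it
mmafmctl fs1 stop -j sw_fileset1
# ... upgrade the gateway node ...
mmafmctl fs1 start -j sw_fileset1
# longer window: before shutting a gateway down, cap parallel recoveries
mmchconfig afmMaxParallelRecoveries=5 -i
# and reset to the default (0 = all filesets in parallel) afterwards
mmchconfig afmMaxParallelRecoveries=0 -i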
~Venkat (vpuvvada at in.ibm.com)
From: "Billich Heinrich Rainer (ID SD)"
To: gpfsug main discussion list
Date: 08/25/2020 08:43 PM
Subject: [EXTERNAL] [gpfsug-discuss] AFM cache rolling upgrade with
minimal impact / no directory scan
Sent by: gpfsug-discuss-bounces at spectrumscale.org
Hello,
We will upgrade a pair of AFM cache nodes which serve about 40 SW
filesets. I want to do a rolling upgrade. I wonder if I can minimize the
impact of the failover when filesets move to the other afm node. I can't
stop replication during the upgrade: The update will take too long (OS,
mofed, FW, scale) and we want to preserve the ability to recall files (?).
Mostly I want to avoid policy scans of all inodes on cache (and maybe
even lookups of files on home??)
I can stop replication for a short time. Also the queues most of the time
are empty or contain just a few 100 entries. The cache filesets hold
about 500M used inodes. Does a specific procedure exist, or is it good
enough to just shutdown scale on the node I want to update? And maybe
flush the queues first as far as possible?
If a fileset has a zero length queue of pending transactions to home, will
this avoid any policy scan when a second afm node takes responsibility for
the fileset?
Maybe I did already ask this before. Unfortunately the manual isn't as
explicit as I would prefer when it talks about rolling upgrades.
Thank you,
Heiner
--
=======================
Heinrich Billich
ETH Zürich
Informatikdienste
Tel.: +41 44 632 72 56
heinrich.billich at id.ethz.ch
========================
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
From juergen.hannappel at desy.de Wed Aug 26 15:25:01 2020
From: juergen.hannappel at desy.de (Hannappel, Juergen)
Date: Wed, 26 Aug 2020 16:25:01 +0200 (CEST)
Subject: [gpfsug-discuss] Question about Security Bulletin: Openstack
Keystone vulnerabilities affects IBM Spectrum Scale (CVE-2020-12689)
Message-ID: <2087204873.615428.1598451901451.JavaMail.zimbra@desy.de>
Hello,
in the bulletin https://www.ibm.com/support/pages/node/6323241 it's mentioned:
"IBM Spectrum Scale, shipped with Openstack keystone, is exposed to vulnerabilities as detailed below."
I am not aware of any OpenStack components in our standard Scale deployments,
so how am I to read this sentence? Is there some OpenStack stuff bundled into a standard GPFS installation?
--
Dr. Jürgen Hannappel DESY/IT Tel.: +49 40 8998-4616
From jtolson at us.ibm.com Wed Aug 26 16:52:56 2020
From: jtolson at us.ibm.com (John T Olson)
Date: Wed, 26 Aug 2020 08:52:56 -0700
Subject: [gpfsug-discuss] Question about Security Bulletin: Openstack
Keystone vulnerabilities affects IBM Spectrum Scale (CVE-2020-12689)
In-Reply-To: <2087204873.615428.1598451901451.JavaMail.zimbra@desy.de>
References: <2087204873.615428.1598451901451.JavaMail.zimbra@desy.de>
Message-ID:
Hi, OpenStack Keystone is only used if you have configured and are using
the object services. If you are not using object services, then the local
Keystone server will not be configured and this vulnerability should not
affect you. Do you have object services enabled?
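For anyone wanting to verify this on their own cluster, a quick check, as a
sketch (mmces is only present on CES protocol nodes):
# list the protocol services enabled across the CES nodes
mmces service list -a
# if OBJ does not appear in the output, object services (and hence the
# bundled Keystone) are not deployed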
Thanks,
John
John T. Olson, Ph.D.
Spectrum Scale Security
Master Inventor
957/9032-1 Tucson, AZ, 85744
(520) 799-5185, tie 321-5185 (FAX: 520-799-4237)
Email: jtolson at us.ibm.com
LinkedIn: www.linkedin.com/in/john-t-olson
Follow me on twitter: @John_T_Olson
From: "Hannappel, Juergen"
To: gpfsug main discussion list
Date: 08/26/2020 07:25 AM
Subject: [EXTERNAL] [gpfsug-discuss] Question about Security Bulletin:
Openstack Keystone vulnerabilities affects IBM Spectrum Scale
(CVE-2020-12689)
Sent by: gpfsug-discuss-bounces at spectrumscale.org
Hello,
in the bulletin https://www.ibm.com/support/pages/node/6323241 it's
mentioned
"IBM Spectrum Scale, shipped with Openstack keystone, is exposed to
vulnerabilities as detailed below."
I am not aware of any openstack components in our standard Scale
deployments,
so how am I to read this sentence? Is there some Openstack stuff bundled
into a standard gpfs installation?
--
Dr. Jürgen Hannappel DESY/IT Tel.: +49 40 8998-4616
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
From juergen.hannappel at desy.de Thu Aug 27 08:20:10 2020
From: juergen.hannappel at desy.de (Hannappel, Juergen)
Date: Thu, 27 Aug 2020 09:20:10 +0200 (CEST)
Subject: [gpfsug-discuss] Question about Security Bulletin: Openstack
Keystone vulnerabilities affects IBM Spectrum Scale (CVE-2020-12689)
In-Reply-To:
References: <2087204873.615428.1598451901451.JavaMail.zimbra@desy.de>
Message-ID: <196632098.876889.1598512810336.JavaMail.zimbra@desy.de>
Hi,
no, we don't use object services. Maybe the object-services condition should be mentioned in the bulletin.
Thanks,
Juergen
> From: "John T Olson"
> To: "gpfsug main discussion list"
> Sent: Wednesday, 26 August, 2020 17:52:56
> Subject: Re: [gpfsug-discuss] Question about Security Bulletin: Openstack
> Keystone vulnerabilities affects IBM Spectrum Scale (CVE-2020-12689)
> Hi, openstack Keystone is only used if you have configured and are using the
> object services. If you are not using object services, then the local Keystone
> server will not be configured and this vulnerability should not affect you. Do
> you have object services enabled?
> Thanks,
> John
> John T. Olson, Ph.D.
> Spectrum Scale Security
> Master Inventor
> 957/9032-1 Tucson, AZ, 85744
> (520) 799-5185, tie 321-5185 (FAX: 520-799-4237)
> Email: jtolson at us.ibm.com
> LinkedIn: www.linkedin.com/in/john-t-olson
> Follow me on twitter: @John_T_Olson
> "Hannappel, Juergen" ---08/26/2020 07:25:12 AM---Hello, in the bulletin [
> https://www.ibm.com/support/pages/node/6323241 |
> https://www.ibm.com/support/pages/node/6323241 ] it's mentioned
> From: "Hannappel, Juergen"
> To: gpfsug main discussion list
> Date: 08/26/2020 07:25 AM
> Subject: [EXTERNAL] [gpfsug-discuss] Question about Security Bulletin: Openstack
> Keystone vulnerabilities affects IBM Spectrum Scale (CVE-2020-12689)
> Sent by: gpfsug-discuss-bounces at spectrumscale.org
> Hello,
> in the bulletin https://www.ibm.com/support/pages/node/6323241 it's mentioned
> "IBM Spectrum Scale, shipped with Openstack keystone, is exposed to
> vulnerabilities as detailed below."
> I am not aware of any openstack components in our standard Scale deployments,
> so how am I to read this sentence? Is there some Openstack stuff bundled into a
> standard gpfs installation?
> --
> Dr. Jürgen Hannappel DESY/IT Tel.: +49 40 8998-4616
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
From Philipp.Rehs at uni-duesseldorf.de Fri Aug 28 10:43:47 2020
From: Philipp.Rehs at uni-duesseldorf.de (Philipp Helo Rehs)
Date: Fri, 28 Aug 2020 11:43:47 +0200
Subject: [gpfsug-discuss] tsgskkm stuck
Message-ID: <90e8ffba-a00a-95b9-c65b-1cda9ffc8c4c@uni-duesseldorf.de>
Hello,
we have a GPFS v4 cluster running with 4 NSD servers, and I am trying to add
some clients:
mmaddnode -N hpc-storage-1-ib:client:hpc-storage-1
This command hangs and does not finish.
When I look at the server, I can see the following processes, which
never finish:
root     38138  0.0  0.0 123048 10376 ?        Ss   11:32   0:00
/usr/lpp/mmfs/bin/mmksh /usr/lpp/mmfs/bin/mmremote checkNewClusterNode3
lc/setupClient
%%9999%%:00_VERSION_LINE::1709:3:1::lc:gpfs3.hilbert.hpc.uni-duesseldorf.de::0:/bin/ssh:/bin/scp:5362040003754711198:lc2:1597757602::HPCStorage.hilbert.hpc.uni-duesseldorf.de:2:1:1:2:A:::central:0.0:
%%home%%:20_MEMBER_NODE::5:20:hpc-storage-1
root     38169  0.0  0.0 123564 10892 ?        S    11:32   0:00
/usr/lpp/mmfs/bin/mmksh /usr/lpp/mmfs/bin/mmremote ccrctl setupClient 2
21479
1=gpfs3-ib.hilbert.hpc.uni-duesseldorf.de:1191,2=gpfs4-ib.hilbert.hpc.uni-duesseldorf.de:1191,4=gpfs6-ib.hilbert.hpc.uni-duesseldorf.de:1191,3=gpfs5-ib.hilbert.hpc.uni-duesseldorf.de:1191
0 1191
root     38212   100  0.0  35544  5752 ?       R    11:32   9:40
/usr/lpp/mmfs/bin/tsgskkm store --cert
/var/mmfs/ssl/stage/tmpKeyData.mmremote.38169.cert --priv
/var/mmfs/ssl/stage/tmpKeyData.mmremote.38169.priv --out
/var/mmfs/ssl/stage/tmpKeyData.mmremote.38169.keystore --fips off
The node is an AMD EPYC.
Any idea what could cause the issue?
ssh is possible in both directions and firewall is disabled.
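A generic way to see what a wedged process like tsgskkm is doing, as a
sketch (the PID is taken from the ps output above):
# is it blocked in a syscall or spinning on CPU?
grep State /proc/38212/status
strace -p 38212    # long silences here mean it is not making syscalls
perf top -p 38212  # shows the hot functions if it is burning CPU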
Kind regards
Philipp Rehs
From stockf at us.ibm.com Fri Aug 28 13:25:18 2020
From: stockf at us.ibm.com (Frederick Stock)
Date: Fri, 28 Aug 2020 12:25:18 +0000
Subject: [gpfsug-discuss] tsgskkm stuck
In-Reply-To: <90e8ffba-a00a-95b9-c65b-1cda9ffc8c4c@uni-duesseldorf.de>
References: <90e8ffba-a00a-95b9-c65b-1cda9ffc8c4c@uni-duesseldorf.de>
Message-ID:
An HTML attachment was scrubbed...
URL:
From olaf.weiser at de.ibm.com Mon Aug 31 06:52:33 2020
From: olaf.weiser at de.ibm.com (Olaf Weiser)
Date: Mon, 31 Aug 2020 05:52:33 +0000
Subject: [gpfsug-discuss] tsgskkm stuck
In-Reply-To: <90e8ffba-a00a-95b9-c65b-1cda9ffc8c4c@uni-duesseldorf.de>
References: <90e8ffba-a00a-95b9-c65b-1cda9ffc8c4c@uni-duesseldorf.de>
Message-ID:
An HTML attachment was scrubbed...
URL: