From christian.vieser at 1und1.de Wed Aug 3 10:30:18 2022
From: christian.vieser at 1und1.de (Christian Vieser)
Date: Wed, 3 Aug 2022 11:30:18 +0200
Subject: [gpfsug-discuss] add local nsd back to cluster?
Message-ID: <041326d8-8a90-324a-9e79-ec76ed6b289a@1und1.de>

Yes, this works without any issues. After re-installing a node, just issue a "mmsdrrestore -N <nodename>" from one of the other nodes in the cluster. In case you are working with ssh authentication between the hosts, it is helpful to have the ssh host keys and root user keys in your backup and to restore them after the re-install; otherwise it is a hassle to distribute new keys to all other cluster nodes (authorized_keys and known_hosts).

> I am planning to implement a cluster with a bunch of old x86 machines. The disks are not connected to the nodes via a SAN; instead each x86 machine has some locally attached disks.
> The question is regarding node failure, for example when only the operating system disk fails and the NSD disks are good. In that case I plan to replace the failing OS disk with a new one, install the OS on it and re-attach the NSD disks to that node. My question is: will this work? How can I add an NSD back to the cluster without restoring data from other replicas, since the data/metadata is actually not corrupted on the NSD?
>
> Best regards,

From daniel.kidger at hpe.com Wed Aug 3 16:46:36 2022
From: daniel.kidger at hpe.com (Kidger, Daniel)
Date: Wed, 3 Aug 2022 15:46:36 +0000
Subject: [gpfsug-discuss] add local nsd back to cluster?
In-Reply-To: <9004117D-3C4D-4A76-931B-1DCB2B631B2F@us.ibm.com>
References: <9004117D-3C4D-4A76-931B-1DCB2B631B2F@us.ibm.com>

> Starting with GPFS 5.1.4, you can use the CCR archive to restore the local node (the node that is issuing the mmsdrrestore command) besides restoring the entire cluster.

This is a great addition, but how does the security model work? I.e. how do the other cluster nodes know that this newly re-installed node can be trusted and is not a rogue node trying to gain cluster membership through a backdoor?

Daniel

From: gpfsug-discuss on behalf of Truong Vu
Date: Saturday, 30 July 2022 at 01:35
To: gpfsug-discuss at gpfsug.org
Subject: Re: [gpfsug-discuss] add local nsd back to cluster?

Starting with GPFS 5.1.4, you can use the CCR archive to restore the local node (the node that is issuing the mmsdrrestore command) besides restoring the entire cluster. Prior to GPFS 5.1.4, as the error message revealed, you can only use the CCR archive to restore the entire cluster. GPFS must be down on any node that is being restored. If there is a good node in the cluster, use the -p option:

-p NodeName
   Specifies the node from which to obtain a valid GPFS configuration file. The node must be either the primary configuration server or a node that has a valid backup copy of the mmsdrfs file. If this parameter is not specified, the command uses the configuration file on the node from which the command is issued.

Thanks,
Tru.
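As a rough illustration of the two forms discussed above (not taken from the thread; the node and file system names are placeholders):

    # On the freshly re-installed node, pull the configuration from a healthy
    # node that holds a valid copy of the mmsdrfs file (GPFS must be down here):
    mmsdrrestore -p goodnode01

    # Or, run from a healthy node to restore the configuration on the
    # re-installed node (the form Christian mentions above):
    mmsdrrestore -N rebuiltnode01

    # Afterwards the node can be started and its local NSDs restarted in the
    # file system, which triggers the metadata scan Olaf describes below:
    mmstartup -N rebuiltnode01
    mmchdisk fs1 start -a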
On 7/29/22, 12:51 PM, "gpfsug-discuss on behalf of gpfsug-discuss-request at gpfsug.org" wrote:

    Today's Topics:

       1. Re: add local nsd back to cluster? (shao feng)
       2. Re: add local nsd back to cluster? (Stephen Ulmer)

----------------------------------------------------------------------

Message: 1
Date: Fri, 29 Jul 2022 23:54:24 +0800
From: shao feng
To: gpfsug main discussion list
Subject: Re: [gpfsug-discuss] add local nsd back to cluster?

Thanks Olaf

I've set up the mmsdr backup as described in https://www.ibm.com/docs/en/spectrum-scale/5.1.2?topic=exits-mmsdrbackup-user-exit . Since my cluster is CCR enabled, it generates a CCR backup file, but when trying to restore from this file, it requires the quorum nodes to shut down. Is it possible to restore without touching the quorum nodes?

[root at tofail ~]# mmsdrrestore -F CCRBackup.986.2022.07.29.23.06.19.myquorum.tar.gz
Restoring a CCR backup archive is a cluster-wide operation.
The -a flag is required.
mmsdrrestore: Command failed. Examine previous error messages to determine cause.

[root at tofail ~]# mmsdrrestore -F CCRBackup.986.2022.07.29.23.06.19.myquorum.tar.gz -a
Restoring CCR backup
Verifying that GPFS is inactive on quorum nodes
mmsdrrestore: GPFS is still active on myquorum
mmsdrrestore: Unexpected error from mmsdrrestore: CCR restore failed. Return code: 192
mmsdrrestore: Command failed. Examine previous error messages to determine cause.

On Thu, Jul 28, 2022 at 3:14 PM Olaf Weiser wrote:
>
> Hi -
> assuming you'll run it without ECE?!? ... just with replication on the file system level.
> Be aware, every time a node goes offline you'll have to restart the disks in your filesystem. This causes a complete scan of the metadata to detect files with missing updates / replication.
>
> Apart from that, to your question:
> you may consider backing up mmsdr.
> Additionally, take a look at mmsdrrestore, in case you want to restore a node's SDR configuration.
>
> Quick and dirty: saving the content of /var/mmfs may also help you.
>
> While the node is "gone", of course, the disk is down; after restore of the SDR / node's config it should be able to start.
> The rest runs as usual.
>
> From: gpfsug-discuss on behalf of shao feng
> Sent: Thursday, 28 July 2022 09:02
> To: gpfsug main discussion list
> Subject: [EXTERNAL] [gpfsug-discuss] add local nsd back to cluster?
>
> Hi all,
>
> I am planning to implement a cluster with a bunch of old x86 machines. The disks are not connected to the nodes via a SAN; instead each x86 machine has some locally attached disks.
> The question is regarding node failure, for example when only the operating system disk fails and the NSD disks are good. In that case I plan to replace the failing OS disk with a new one, install the OS on it and re-attach the NSD disks to that node. My question is: will this work? How can I add an NSD back to the cluster without restoring data from other replicas, since the data/metadata is actually not corrupted on the NSD?
>
> Best regards,
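For reference, enabling the mmsdrbackup user exit that shao feng refers to is roughly the following, based on the IBM documentation linked above (the sample path should be verified against your release):

    # Copy the sample user exit into place and make it executable; GPFS then
    # invokes it whenever the cluster configuration changes.
    cp /usr/lpp/mmfs/samples/mmsdrbackup.sample /var/mmfs/etc/mmsdrbackup
    chmod +x /var/mmfs/etc/mmsdrbackup

    # On a CCR-enabled cluster the exit produces a CCR backup archive
    # (CCRBackup.<...>.tar.gz) rather than a plain mmsdrfs copy.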
------------------------------

Message: 2
Date: Fri, 29 Jul 2022 12:48:44 -0400
From: Stephen Ulmer
To: gpfsug main discussion list
Subject: Re: [gpfsug-discuss] add local nsd back to cluster?
Message-ID: <1DEB036E-AA3A-4498-A5B9-B66078EC87A9 at ulmer.org>

If there are cluster nodes up, restore from the running nodes instead of the file. I think it's -p, but look at the manual page.

--
Stephen Ulmer
Sent from a mobile device; please excuse auto-correct silliness.

> On Jul 29, 2022, at 11:20 AM, shao feng wrote:
> [...]
------------------------------

End of gpfsug-discuss Digest, Vol 126, Issue 21

From kraemerf at de.ibm.com Thu Aug 4 17:24:22 2022
From: kraemerf at de.ibm.com (Frank Kraemer)
Date: Thu, 4 Aug 2022 16:24:22 +0000
Subject: [gpfsug-discuss] Registration is now open for the next IBM Spectrum Scale Strategy Days Event 2022 (German speaking version)

Heads up for the next IBM Spectrum Scale Strategy Days Event 2022 (German speaking version).

Date: 19. - 20. Oct 2022 | IBM Pop-up | MediaPark | Koeln, Germany
https://www.ibm.com/de-de/events/spectrum-scale-strategy-days

It's not in Stuttgart/Ehningen! It's part of the larger IBM Pop-up event series in Cologne, Germany. Please do register before traveling to the event.

-frank-

Frank Kraemer
IBM Senior Technical Specialist
Wilhelm-Fay-Str. 34, 65936 Frankfurt, Germany
mailto:kraemerf at de.ibm.com
Mobile +49171-3043699
IBM Germany

From shaof777 at gmail.com Wed Aug 24 09:52:51 2022
From: shaof777 at gmail.com (shao feng)
Date: Wed, 24 Aug 2022 16:52:51 +0800
Subject: [gpfsug-discuss] replicate setting per fileset?

Hi all,

Does gpfs support replica settings per fileset?
I'm looking at the "File placement rule" of the filesystem policy at https://www.ibm.com/docs/en/spectrum-scale/5.1.2?topic=rules-policy-syntax ; it looks like the "REPLICATE" clause is for this:

RULE ['RuleName']
  SET POOL 'PoolName'
  [LIMIT (OccupancyPercentage)]
  [REPLICATE (DataReplication)]
  [FOR FILESET ('FilesetName'[,'FilesetName']...)]
  [ACTION (SqlExpression)]
  [WHERE SqlExpression]

but the experiment does not seem to work:

[root at tsgpfsdev files]# mmlspolicy myfs -L
RULE 'SYSTEM_POOL_PLACEMENT_RULE' SET POOL 'system' REPLICATE (2)
RULE 'PLACEMENT_RULE_myfileset' SET POOL 'system' REPLICATE (3) FOR FILESET ('myfileset')

[root at tsgpfsdev files]# dd if=/dev/urandom of=bigfile3 bs=1MB count=10
10+0 records in
10+0 records out
10000000 bytes (10 MB) copied, 0.226941 s, 44.1 MB/s

[root at tsgpfsdev files]# mmlsattr -L bigfile3
file name:            bigfile3
metadata replication: 3 max 3
data replication:     2 max 3    <<<================ always 2
immutable:            no
appendOnly:           no
flags:
storage pool name:    system
fileset name:         myfileset
snapshot name:
creation time:        Wed Aug 24 12:00:28 2022
Misc attributes:      ARCHIVE
Encrypted:            no

From luis.bolinches at fi.ibm.com Wed Aug 24 10:16:52 2022
From: luis.bolinches at fi.ibm.com (Luis Bolinches)
Date: Wed, 24 Aug 2022 09:16:52 +0000
Subject: [gpfsug-discuss] replicate setting per fileset?

Hi

https://www.ibm.com/docs/en/spectrum-scale/5.1.2?topic=management-policy-rules

"GPFS evaluates policy rules in order, from first to last, as they appear in the policy. The first rule that matches determines what is to be done with that file."

I am assuming you sent the email with the rules in the order they are applied ... hence it works as designed. Please change the order of those rules.

--
Ystävällisin terveisin / Regards / Saludos / Salutations / Salutacions

Luis Bolinches
IBM Spectrum Scale development
Executive IT Specialist
Phone: +358503112585
https://www.credly.com/users/luis-bolinches/badges

Ab IBM Finland Oy
Laajalahdentie 23
00330 Helsinki
Uusimaa - Finland

"If you always give you will always have" -- Anonymous

On Wed, 2022-08-24 at 16:52 +0800, shao feng wrote:
> [...]
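To illustrate Luis's point: the fileset-specific placement rule has to come before the catch-all rule so that it matches first. A reordered policy along these lines (reusing the rule and fileset names from the example above; the policy file name is a placeholder) should then give files in 'myfileset' three data replicas:

    RULE 'PLACEMENT_RULE_myfileset' SET POOL 'system' REPLICATE (3) FOR FILESET ('myfileset')
    RULE 'SYSTEM_POOL_PLACEMENT_RULE' SET POOL 'system' REPLICATE (2)

    # install the reordered policy and repeat the test, e.g.
    # mmchpolicy myfs policy.rules
    # dd if=/dev/urandom of=bigfile4 bs=1MB count=10 && mmlsattr -L bigfile4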
From daniel.kidger at hpe.com Wed Aug 24 10:32:41 2022
From: daniel.kidger at hpe.com (Kidger, Daniel)
Date: Wed, 24 Aug 2022 09:32:41 +0000
Subject: [gpfsug-discuss] Problems with python dependencies in ./spectrumscale during upgrade

Morning

I am trying to upgrade a Spectrum Scale cluster from 5.1.1-2 to 5.1.4-1.

If I try to use the ./spectrumscale toolkit from 5.1.4-1 it fails with:

mmfs]# ./5.1.4.1/ansible-toolkit/spectrumscale --version
Traceback (most recent call last):
  File "./5.1.4.1/ansible-toolkit/spectrumscale", line 23, in <module>
    from cli.main import commands
  File "/usr/lpp/mmfs/5.1.4.1/ansible-toolkit/cli/main.py", line 15, in <module>
    from .install import commands as install_commands
  File "/usr/lpp/mmfs/5.1.4.1/ansible-toolkit/cli/install.py", line 14, in <module>
    from espylib import install as install_specscale
  File "/usr/lpp/mmfs/5.1.4.1/ansible-toolkit/espylib/install.py", line 37, in <module>
    from .httpserver import http_start
  File "/usr/lpp/mmfs/5.1.4.1/ansible-toolkit/espylib/httpserver.py", line 41, in <module>
    import cherrypy
  File "/usr/lpp/mmfs/5.1.4.1/ansible-toolkit/externallibs/CherryPy-18.6.0/cherrypy/__init__.py", line 73, in <module>
    from ._cptools import default_toolbox as tools, Tool
ModuleNotFoundError: No module named 'cherrypy._cptools'

Yet the toolkit from the original 5.1.1-2 produces no errors:

mmfs]# ./5.1.1.2/ansible-toolkit/spectrumscale --version
IBM Spectrum Scale Ansible Install Toolkit release: 5.1.1.2

I have RHEL 8.3 (but plan to upgrade to 8.6) if that is relevant.
Also if relevant, my Ansible is:

mmfs]# rpm -qa | grep ansible
ansible-core-2.12.2-3.1.el8.x86_64
ansible-5.4.0-2.el8.noarch

and my python is 3.6.8.

Daniel

Daniel Kidger
HPC Storage Solutions Architect, EMEA
daniel.kidger at hpe.com
+44 (0)7818 522266
hpe.com
From luis.bolinches at fi.ibm.com Wed Aug 24 10:45:46 2022
From: luis.bolinches at fi.ibm.com (Luis Bolinches)
Date: Wed, 24 Aug 2022 09:45:46 +0000
Subject: [gpfsug-discuss] Problems with python dependencies in ./spectrumscale during upgrade
Message-ID: <361ddd07e134ae68cd8f01e3e0e949420dd094e3.camel@fi.ibm.com>

Hi

I believe this should be a case to IBM, or whoever provides support for your SW.

5.1.4-1 only supports RHEL 8.4, 8.5 and 8.6. I suggest you update your OS *before* trying to install Spectrum Scale on it.

--
Ystävällisin terveisin / Regards / Saludos / Salutations / Salutacions

Luis Bolinches
IBM Spectrum Scale development
Executive IT Specialist
Phone: +358503112585
https://www.credly.com/users/luis-bolinches/badges

Ab IBM Finland Oy
Laajalahdentie 23
00330 Helsinki
Uusimaa - Finland

"If you always give you will always have" -- Anonymous

On Wed, 2022-08-24 at 09:32 +0000, Kidger, Daniel wrote:
> [...]

From Renar.Grunenberg at huk-coburg.de Wed Aug 24 10:52:50 2022
From: Renar.Grunenberg at huk-coburg.de (Grunenberg, Renar)
Date: Wed, 24 Aug 2022 09:52:50 +0000
Subject: [gpfsug-discuss] Problems with python dependencies in ./spectrumscale during upgrade
In-Reply-To: <361ddd07e134ae68cd8f01e3e0e949420dd094e3.camel@fi.ibm.com>
References: <361ddd07e134ae68cd8f01e3e0e949420dd094e3.camel@fi.ibm.com>
Message-ID: <306b3c3a3b3e4083911930c8a36f5aff@huk-coburg.de>

Hallo Daniel,

did you run the "./spectrumscale setup -s" step first?
Renar Grunenberg
Abteilung Informatik - Betrieb

HUK-COBURG
Bahnhofsplatz
96444 Coburg
Telefon: 09561 96-44110
Telefax: 09561 96-44104
E-Mail: Renar.Grunenberg at huk-coburg.de
Internet: www.huk.de

-----Original Message-----
From: gpfsug-discuss on behalf of Luis Bolinches
Sent: Wednesday, 24 August 2022 11:46
To: gpfsug-discuss at gpfsug.org
Subject: Re: [gpfsug-discuss] Problems with python dependencies in ./spectrumscale during upgrade

> [...]
From luis.bolinches at fi.ibm.com Wed Aug 24 10:51:55 2022
From: luis.bolinches at fi.ibm.com (Luis Bolinches)
Date: Wed, 24 Aug 2022 09:51:55 +0000
Subject: [gpfsug-discuss] Problems with python dependencies in ./spectrumscale during upgrade
In-Reply-To: <361ddd07e134ae68cd8f01e3e0e949420dd094e3.camel@fi.ibm.com>
References: <361ddd07e134ae68cd8f01e3e0e949420dd094e3.camel@fi.ibm.com>
Message-ID: <59def7546eefb1900e4120fd06da12c1492d207e.camel@fi.ibm.com>

Sorry for the double mail.

Those I mentioned are for RHEL 8. 5.1.4-1 supports other OSes and RHEL 7.9 as well. Depending on the platform etc. there might be some limitations on what is and what is not supported; the FAQ and/or a support case when in doubt is likely the best approach.

--
Ystävällisin terveisin / Regards / Saludos / Salutations / Salutacions

Luis Bolinches
IBM Spectrum Scale development
Executive IT Specialist
Phone: +358503112585
https://www.credly.com/users/luis-bolinches/badges

Ab IBM Finland Oy
Laajalahdentie 23
00330 Helsinki
Uusimaa - Finland

"If you always give you will always have" -- Anonymous

On Wed, 2022-08-24 at 09:45 +0000, Luis Bolinches wrote:
> [...]
From scl at virginia.edu Mon Aug 29 18:52:36 2022
From: scl at virginia.edu (Losen, Stephen C (scl))
Date: Mon, 29 Aug 2022 17:52:36 +0000
Subject: [gpfsug-discuss] mmchfs -k nfs4 impacts?

Hi,

We want to export SMB shares via Spectrum Scale CES nodes. The filesystem has ACL style set to "all" and we must set it to nfs4 with "mmchfs -k nfs4". The filesystem currently has millions of files with POSIX ACLs. Will this mmchfs command result in any performance impact, such as a major rewrite of metadata? Thanks.

Steve Losen
Research Computing
University of Virginia
scl at virginia.edu   434-924-0640
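For context, the setting under discussion can be inspected and changed roughly as follows (the file system name is a placeholder):

    # Show the ACL semantics currently in effect (-k is posix, nfs4 or all)
    mmlsfs gpfs01 -k

    # Switch the file system to NFSv4 ACL semantics only
    mmchfs gpfs01 -k nfs4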
From jonathan.buzzard at strath.ac.uk Mon Aug 29 22:05:47 2022
From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard)
Date: Mon, 29 Aug 2022 22:05:47 +0100
Subject: [gpfsug-discuss] mmchfs -k nfs4 impacts?
Message-ID: <59725e39-f365-4d3e-7a53-70cdc671d21b@strath.ac.uk>

On 29/08/2022 18:52, Losen, Stephen C (scl) wrote:
> We want to export SMB shares via Spectrum Scale CES nodes. The filesystem has ACL style set to "all" and we must set it to nfs4 with mmchfs -k nfs4. The filesystem currently has millions of files with POSIX ACLs. Will this mmchfs command result in any performance impact, such as a major rewrite of metadata? Thanks.

The operation will complete very quickly. The bigger risk is that your existing ACLs will go "puff".

As I understand it GPFS stores each different ACL only once and then points each file at the ACL. When you change the ACL type they get nuked, from recollection, but it is a decade now since I played around with that. I have stuck to NFSv4 ACLs since then, because POSIX ACLs are a bit naff frankly.

I personally would have thought "-k samba" would have been preferable for your use case, mind you ;-)

JAB.

--
Jonathan A. Buzzard                         Tel: +44141-5483420
HPC System Administrator, ARCHIE-WeSt.
University of Strathclyde, John Anderson Building, Glasgow. G4 0NG

From scl at virginia.edu Mon Aug 29 22:18:15 2022
From: scl at virginia.edu (Losen, Stephen C (scl))
Date: Mon, 29 Aug 2022 21:18:15 +0000
Subject: [gpfsug-discuss] mmchfs -k nfs4 impacts?
In-Reply-To: <59725e39-f365-4d3e-7a53-70cdc671d21b@strath.ac.uk>
References: <59725e39-f365-4d3e-7a53-70cdc671d21b@strath.ac.uk>
Message-ID: <09A7D1EF-434B-49A2-9893-AADCFC39C0A7@virginia.edu>

Hi Jonathan,

Thanks for your reply. I didn't see "-k samba" in the docs. I'll look some more. Also I didn't mention that we also need NFSv4 access and native GPFS; this will not be SMB-only. It will actually be mostly native GPFS. I don't think existing ACLs will be adversely affected. In a test filesystem with "-k all" I set some POSIX ACLs and converted the filesystem to "-k nfs4", and the result looked reasonable. Plus I ran "mmgetacl -k nfs4" on numerous files/dirs with POSIX ACLs in our production filesystem and the results looked promising.

Glad to know that switching the filesystem to -k nfs4 won't be a huge performance hit.

Steve Losen
Research Computing
University of Virginia
scl at virginia.edu   434-924-0640

On 8/29/22, 5:07 PM, "gpfsug-discuss on behalf of Jonathan Buzzard" wrote:
> [...]

From jonathan.buzzard at strath.ac.uk Tue Aug 30 09:47:36 2022
From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard)
Date: Tue, 30 Aug 2022 09:47:36 +0100
Subject: [gpfsug-discuss] mmchfs -k nfs4 impacts?
In-Reply-To: <09A7D1EF-434B-49A2-9893-AADCFC39C0A7@virginia.edu>
References: <59725e39-f365-4d3e-7a53-70cdc671d21b@strath.ac.uk> <09A7D1EF-434B-49A2-9893-AADCFC39C0A7@virginia.edu>

On 29/08/2022 22:18, Losen, Stephen C (scl) wrote:
> Hi Jonathan, Thanks for your reply. I didn't see "-k samba" in the docs.

You won't; it is "undocumented" in the manual page, but listed right at the top of the mmchfs Korn shell script as being an option, with no description of what it actually does.

My best guess is that it makes the NFSv4.1 ACLs behave more like NTFS ACLs, especially in combination with the "no directory traversal" option. I seem to recall that option is documented too, but it is rather self-explanatory. I think these were all put in for the old SONAS storage system that IBM used to sell, to make it more "MS Windows" like. This was all before there was such a thing as "protocol" nodes, of course.

> I'll look some more. Also I didn't mention that we also need NFSv4 access and native GPFS, this will not be SMB-only. It will actually be mostly GPFS native. I don't think existing ACLs will be adversely affected.

From recollection, think again. At best the existing POSIX ACLs will get converted to NFSv4 ACLs. From recollection things go screwy when you have default POSIX ACLs, because they don't map to NFSv4, and then you create new files: now what? Of course this might have changed, or I might have got it wrong, as this was experimentation I did over a decade ago, probably on GPFS 3.0 or 3.1.

I would strongly recommend creating a test GPFS filesystem, adding some POSIX ACLs, then converting it to NFSv4-only and checking how the ACLs work with the creation of new files.

> In a test filesystem with "-k all" I set some POSIX ACLs and converted the filesystem to "-k nfs4" and the result looked reasonable. Plus I ran mmgetacl -k nfs4 on numerous files/dirs with POSIX ACLs in our production filesystem and the results looked promising.
>
> Glad to know that switching the filesystem to -k nfs4 won't be a huge performance hit.

I have taken the approach, since my experimentation circa 2010, that you can do everything you can do with POSIX ACLs with NFSv4 ACLs, so why bother with the former? Just stick to the latter and you won't have problems down the line switching to NFSv4 ACLs.

JAB.

--
Jonathan A. Buzzard                         Tel: +44141-5483420
HPC System Administrator, ARCHIE-WeSt.
University of Strathclyde, John Anderson Building, Glasgow. G4 0NG
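A rough sketch of the kind of test Jonathan recommends (file system, path and ACL file names are placeholders; creating the scratch file system itself is omitted):

    # On a scratch filesystem still set to -k all, apply a POSIX ACL and a
    # default (inherited) POSIX ACL to a directory:
    mmputacl -i posix_acl.txt /gpfs/testfs/dir1
    mmputacl -d -i posix_acl.txt /gpfs/testfs/dir1

    # Switch the scratch filesystem to NFSv4-only ACL semantics:
    mmchfs testfs -k nfs4

    # Check how the existing ACLs are now presented, and what new files get:
    mmgetacl -k nfs4 /gpfs/testfs/dir1
    touch /gpfs/testfs/dir1/newfile
    mmgetacl /gpfs/testfs/dir1/newfile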
From scl at virginia.edu Tue Aug 30 10:58:21 2022
From: scl at virginia.edu (Losen, Stephen C (scl))
Date: Tue, 30 Aug 2022 09:58:21 +0000
Subject: [gpfsug-discuss] mmchfs -k nfs4 impacts?
References: <59725e39-f365-4d3e-7a53-70cdc671d21b@strath.ac.uk> <09A7D1EF-434B-49A2-9893-AADCFC39C0A7@virginia.edu>
Message-ID: <4B89784C-0CB5-4F39-8F0A-60C3CF4659CF@virginia.edu>

Hi Jonathan,

We are running SS 5.0.5.7, and a POSIX default ACL is converted to nfs4 ACEs with the FileInherit:DirInherit:InheritOnly flags. The corresponding active POSIX ACL is converted to nfs4 ACEs with no inheritance flags. So the two POSIX ACLs result in a single, rather large nfs4 ACL.

Steve Losen
Research Computing
University of Virginia
scl at virginia.edu   434-924-0640

On 8/30/22, 4:48 AM, "gpfsug-discuss on behalf of Jonathan Buzzard" wrote:
> [...]
From helge.hauglin at usit.uio.no Tue Aug 30 11:02:26 2022
From: helge.hauglin at usit.uio.no (Helge Hauglin)
Date: Tue, 30 Aug 2022 12:02:26 +0200
Subject: [gpfsug-discuss] mmchfs -k nfs4 impacts?
References: <59725e39-f365-4d3e-7a53-70cdc671d21b@strath.ac.uk> <09A7D1EF-434B-49A2-9893-AADCFC39C0A7@virginia.edu>

Hi Stephen.

> Also I didn't mention that we also need NFSv4 access and native GPFS, this will not be SMB-only. It will actually be mostly GPFS native.

Beware that when writing via SMB, the Samba default permissions will be applied to new files and folders, which might not give the permissions your users need. On our CES clusters the Samba default permission is 0755 / 0744 [1]. We want either 0770 or 0775 by default. This we get by setting those permissions in NFSv4 ACLs on the relevant folders, plus turning on inheritance of the ACEs to new files and folders.

The side effect of having NFSv4 ACLs with inheritance is that the umask of processes writing via GPFS or NFS is ignored. I have not tried, but I guess it works similarly with POSIX ACLs.

[1]
| # testparm -s -v | grep mask
| Load smb config files from /var/mmfs/ces/smb.conf
| [...]
| create mask = 0744
| directory mask = 0755

> I don't think existing ACLs will be adversely affected. In a test filesystem with "-k all" I set some POSIX ACLs and converted the filesystem to "-k nfs4" and the result looked reasonable. Plus I ran mmgetacl -k nfs4 on numerous files/dirs with POSIX ACLs in our production filesystem and the results looked promising.

I would recommend standardizing on one type of ACLs, which will give you fewer variants to deal with, simplifying administration.

--
Regards,
Helge Hauglin

----------------------------------------------------------------
Mr. Helge Hauglin, Senior Engineer
System administrator
Center for Information Technology,
University of Oslo, Norway

From djoe at us.ibm.com Wed Aug 31 17:38:57 2022
From: djoe at us.ibm.com (Joe Dorio)
Date: Wed, 31 Aug 2022 16:38:57 +0000
Subject: [gpfsug-discuss] Joe Dorio Intro

Hi, my name is Joe Dorio and I am a Brand Technical Specialist with IBM. I live in Elmsford, NY, which is about 20 miles north of NYC. My primary focus for the past few years has been on IBM Cloud Object Storage, since IBM's purchase of Cleversafe. By becoming a part of gpfsug I am hopeful I can learn and better understand Spectrum Scale use cases. Thanks.

Joe Dorio
Senior Brand Technical Specialist
Cloud Object Storage - U.S. National Markets
Cell: 914.246.4763
E-mail: djoe at us.ibm.com