[gpfsug-discuss] Forcing which node gets expelled?
Felipe Knop
knop at us.ibm.com
Tue Oct 25 14:02:39 BST 2016
All,
As Bob Oesterlin indicated, it is possible to define an expel script (see
/usr/lpp/mmfs/samples/expelnode.sample) to control which of the two nodes
gets expelled. The script can also be used to issue alerts, etc.
The current policy (applied before the script is invoked) when deciding
which node to expel favors keeping:
1. quorum nodes over non-quorum nodes
2. local nodes over remote nodes
3. manager-capable nodes over non-manager-capable nodes
4. nodes managing more file systems over nodes managing fewer file systems
5. NSD servers over non-NSD servers
Otherwise, the node that joined the cluster more recently is expelled.
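For illustration, a minimal sketch of such a callback in POSIX shell, which
prefers to expel whichever node of the pair is not on a protected list. The
node names, argument order, and the stdout convention here are assumptions
made for the sketch; the actual interface is documented in
/usr/lpp/mmfs/samples/expelnode.sample and should be followed when installing
a real callback:

```shell
#!/bin/sh
# Hypothetical expel-callback sketch. The real argument list and the way
# the decision is returned to GPFS are defined by the sample script at
# /usr/lpp/mmfs/samples/expelnode.sample -- check it before installing.

# Nodes we would rather keep in the cluster (hypothetical names):
PROTECTED_NODES="nsd01 nsd02"

# pick_victim NODE1 NODE2 -> prints the node this policy would expel
pick_victim() {
    n1=$1; n2=$2
    for p in $PROTECTED_NODES; do
        # If one node of the pair is protected, expel the other one.
        [ "$n1" = "$p" ] && { echo "$n2"; return 0; }
        [ "$n2" = "$p" ] && { echo "$n1"; return 0; }
    done
    # Neither node is protected: pick the second one and let GPFS's
    # built-in policy remain the real tie-breaker.
    echo "$n2"
}
```

The same structure also gives a convenient place to send an alert (mail,
SNMP trap, etc.) before the expel happens.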
The statement below from Dr. Uwe Falke is also correct: addressing the
network connectivity is the better long-term approach, but the callback
script can be used to control which node gets expelled.
Felipe
----
Felipe Knop knop at us.ibm.com
GPFS Development and Security
IBM Systems
IBM Building 008
2455 South Rd, Poughkeepsie, NY 12601
(845) 433-9314 T/L 293-9314
From: "Uwe Falke" <UWEFALKE at de.ibm.com>
To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Date: 10/25/2016 08:32 AM
Subject: Re: [gpfsug-discuss] Forcing which node gets expelled?
Sent by: gpfsug-discuss-bounces at spectrumscale.org
Usually, the cluster manager, on receiving a complaint from a node that
another node is gone, checks its own connection to that other node. If that
check succeeds, it expels the requester; if not, it follows the request and
expels the other node.
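The decision described above can be sketched roughly as follows. This is a
toy model, not actual mmfsd logic; the reachability command is parameterized
only so the sketch can be exercised without a live network:

```shell
#!/bin/sh
# decide_expel COMPLAINER ACCUSED [REACH_CMD] -> prints the node to expel.
# REACH_CMD defaults to a single ping with a short timeout.
decide_expel() {
    complainer=$1; accused=$2
    reach_cmd=${3:-"ping -c 1 -W 2"}
    if $reach_cmd "$accused" >/dev/null 2>&1; then
        # The manager can still reach the accused node, so the
        # complainer's view is the odd one out: expel the complainer.
        echo "$complainer"
    else
        # The manager has also lost contact: expel the accused node.
        echo "$accused"
    fi
}
```

For example, `decide_expel nodeA nodeB` run on the cluster manager would
print `nodeA` while nodeB remains reachable from the manager.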
AFAIK, there are some more subtle algorithms in place if manager or quorum
nodes are affected. Maybe that can be used to protect certain nodes from
being expelled, by assigning them some such role in the cluster; however, I
do not know these rules exactly.
That means it is not easily controllable which one gets expelled.
It is better to concentrate on fixing your connectivity issues, as GPFS
will not feel comfortable in such an unreliable environment anyway.
Mit freundlichen Grüßen / Kind regards
Dr. Uwe Falke
IT Specialist
High Performance Computing Services / Integrated Technology Services /
Data Center Services
-------------------------------------------------------------------------------------------------------------------------------------------
IBM Deutschland
Rathausstr. 7
09111 Chemnitz
Phone: +49 371 6978 2165
Mobile: +49 175 575 2877
E-Mail: uwefalke at de.ibm.com
-------------------------------------------------------------------------------------------------------------------------------------------
IBM Deutschland Business & Technology Services GmbH / Geschäftsführung:
Frank Hammer, Thorsten Moehring
Sitz der Gesellschaft: Ehningen / Registergericht: Amtsgericht Stuttgart,
HRB 17122
From: Matt Thorpe <matt.thorpe at bodleian.ox.ac.uk>
To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Date: 10/25/2016 02:05 PM
Subject: [gpfsug-discuss] Forcing which node gets expelled?
Sent by: gpfsug-discuss-bounces at spectrumscale.org
Hi,
We are in the process of diagnosing a networking issue that is causing 2
nodes of our 6-node GPFS cluster to expel each other (they appear to
experience a temporary network outage and lose contact with each other).
At present it is not consistent which node gets expelled by the cluster
manager, and I wondered if there is any way to force a specific node to be
expelled in this situation?
Thanks and best regards,
Matt
--------
Matt Thorpe | BDLSS Systems Administrator
Bodleian Libraries Osney One Building, Osney Mead, Oxford, OX2 0EW
matt.thorpe at bodleian.ox.ac.uk | 01865 (2)80027
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss