From geraint.north at uk.ibm.com  Thu Feb  2 12:09:01 2012
From: geraint.north at uk.ibm.com (Geraint North)
Date: Thu, 2 Feb 2012 12:09:01 +0000
Subject: [gpfsug-discuss] Intro: Geraint North (IBM)
Message-ID:

Hi,

I work for IBM in the Manchester Lab. We work on IBM's storage products, including SVC, V7000 and V7000 Unified. V7000 Unified is probably of the most interest to this group - it combines a small SONAS configuration with a V7000 block storage device to provide a unified way of managing file and block storage. The underlying filesystem is GPFS, although you have to dig fairly deep into the documentation to find that out!

http://www-03.ibm.com/systems/storage/disk/storwize_v7000/

Thanks,
Geraint.

Geraint North
Senior Engineer and Master Inventor
IBM Manchester Lab


From Jez.Tucker at rushes.co.uk  Mon Feb  6 10:00:34 2012
From: Jez.Tucker at rushes.co.uk (Jez Tucker)
Date: Mon, 6 Feb 2012 10:00:34 +0000
Subject: [gpfsug-discuss] Hello list members
Message-ID: <3147C311DEF9304ABFB764A6D5C1B6AE760D3E60@WARVWEXC1.uk.deluxe-eu.com>

Hello all,

I'm the Senior Systems Administrator at Rushes, part of the Deluxe Entertainment Group.

Market Sector: Media - Film and Television Post Production

URLs:
http://www.rushes.co.uk
http://www.bydeluxe.com

GPFS System: Linux NSDs with Linux TSM 6 HSM. Serves Linux and Windows over 8Gb FC, and Linux, Windows and OS X via CTDB over 10Gbit.

Uses:
- Multiple real-time FC streams for Telecine grading (http://en.wikipedia.org/wiki/Color_grading)
- Collaborative storage for multiple departments
- Servicing the render farm for the CG and Motion Graphics departments

---
Jez Tucker
Senior SysAdmin
Rushes
GPFSUG Chairman (chair at gpfsug.org)

Rushes Postproduction Limited, 66 Old Compton Street, London W1D 4UH
tel: +44 (0)20 7437 8676
web: http://www.rushes.co.uk

The information contained in this e-mail is confidential and may be subject to legal privilege. If you are not the intended recipient, you must not use, copy, distribute or disclose the e-mail or any part of its contents or take any action in reliance on it. If you have received this e-mail in error, please e-mail the sender by replying to this message. All reasonable precautions have been taken to ensure no viruses are present in this e-mail. Rushes Postproduction Limited cannot accept responsibility for loss or damage arising from the use of this e-mail or attachments and recommend that you subject these to your virus checking procedures prior to use.


From orlando.richards at ed.ac.uk  Fri Feb 10 10:16:33 2012
From: orlando.richards at ed.ac.uk (Orlando Richards)
Date: Fri, 10 Feb 2012 10:16:33 +0000
Subject: [gpfsug-discuss] Intro
In-Reply-To: <3147C311DEF9304ABFB764A6D5C1B6AE760D3E60@WARVWEXC1.uk.deluxe-eu.com>
References: <3147C311DEF9304ABFB764A6D5C1B6AE760D3E60@WARVWEXC1.uk.deluxe-eu.com>
Message-ID: <4F34EE81.8010007@ed.ac.uk>

Hi all,

I'm the systems manager for the University of Edinburgh's research compute and data services. We use GPFS for two primary purposes:

1. HPC storage (offering high-performance GPFS to our HPC cluster worker nodes)
2. General-purpose (unstructured) large-scale storage services (offering CIFS, NFS and SSHFS directly to our end users)

Our services are used across the University, and so we have a very broad range of users and use cases. We've been using GPFS since 2007, and have picked up plenty of war stories and success stories along the way!

--
Orlando.
--
Dr Orlando Richards
Information Services
IT Infrastructure Division
Unix Section
Tel: 0131 650 4994

The University of Edinburgh is a charitable body, registered in Scotland, with registration number SC005336.


From david.l at ramsan.com  Fri Feb 10 10:24:40 2012
From: david.l at ramsan.com (David Lawrence)
Date: Fri, 10 Feb 2012 10:24:40 +0000
Subject: [gpfsug-discuss] Texas Memory Systems Metadata Proposal - Request for interested parties
Message-ID: <6F976E7BAFE63C4E9C5CC3810107DBD510724067@DPE-G51.texmemsys.com>

Dear GPFS User Group members,

No one wants the GPFS user group to turn into a sales marketing list, so this will be the one and only email on this subject from me in 2012.

I was at the user group meeting in Warwick, where every speaker seemed to mention solid state storage as a requirement for GPFS metadata, and I met a number of people at MEW interested in using our systems for metadata.

I am trying to create an Academic and Research offering in the UK, which would allow academic and government research institutions to purchase a pair of RamSan systems specifically for metadata, at a special price. (FYI - currently a pair of 2TB systems, each providing perhaps 250,000 IOPS, would be close to £108,000. I would hope to be able to offer this solution for something below £50k, if there is enough interest from the community.) TMS normally does not discount at all, so I need your declaration of interest to create a large enough community to initiate an exception.

This email is therefore trying to gauge how much interest there is in having a special offering for this metadata solution. May I ask that you send me a personal email at the address below if you would be interested (no commitment) in such an offering.

Thank you.

Kind regards
David Lawrence
UK Country Manager
__________________________
Texas Memory Systems
The World's Fastest Storage®
Office: +44 (0) 1179 237 984
Mobile: +44 (0) 788 44 98 220
David.L at ramsan.com

-------------- next part --------------
An HTML attachment was scrubbed...
URL:


From Jez.Tucker at rushes.co.uk  Fri Feb 10 13:21:16 2012
From: Jez.Tucker at rushes.co.uk (Jez Tucker)
Date: Fri, 10 Feb 2012 13:21:16 +0000
Subject: [gpfsug-discuss] Texas Memory Systems Metadata Proposal - Request for interested parties
In-Reply-To: <6F976E7BAFE63C4E9C5CC3810107DBD510724067@DPE-G51.texmemsys.com>
References: <6F976E7BAFE63C4E9C5CC3810107DBD510724067@DPE-G51.texmemsys.com>
Message-ID: <3147C311DEF9304ABFB764A6D5C1B6AE760E32C1@WARVWEXC1.uk.deluxe-eu.com>

Hello,

I'll step in at this point. So yes - you're quite right, nobody wants that, i.e. no hard sell from vendors, though vendors are most welcome on the list.

I think what would have been most helpful is if you had pointed to case studies of SSD / RamSan with GPFS:

http://www.hpcadvisorycouncil.com/events/2011/european_workshop/pdf/17_HPC_Storage.pdf (page 15)
http://www.violin-memory.com/images/IBM-Violin-GPFS-Record.pdf?d=1

i.e. justify your sales splurge and let the users come back to you :-)

If there are more recent, relevant [and independent??] case studies then feel free to point them out.

Regards,

Jez
---
Jez Tucker
Senior Sysadmin
Rushes
GPFSUG Chairman (chair at gpfsug.org)

From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of David Lawrence
Sent: 10 February 2012 10:25
To: gpfsug-discuss at gpfsug.org
Subject: [gpfsug-discuss] Texas Memory Systems Metadata Proposal - Request for interested parties

Dear GPFS User Group members,
No one wants the GPFS user group to turn into a sales marketing list, so this will be the one and only email on this subject from me in 2012. I was at the user group meeting in Warwick, where every speaker seemed to mention solid state storage as a requirement for GPFS metadata, and I met a number of people at MEW interested in using our systems for metadata.

I am trying to create an Academic and Research offering in the UK, which would allow academic and government research institutions to purchase a pair of RamSan systems specifically for metadata, at a special price. (FYI - currently a pair of 2TB systems, each providing perhaps 250,000 IOPS, would be close to £108,000. I would hope to be able to offer this solution for something below £50k, if there is enough interest from the community.) TMS normally does not discount at all, so I need your declaration of interest to create a large enough community to initiate an exception.

This email is therefore trying to gauge how much interest there is in having a special offering for this metadata solution. May I ask that you send me a personal email at the address below if you would be interested (no commitment) in such an offering.

Thank you.

Kind regards
David Lawrence
UK Country Manager
__________________________
Texas Memory Systems
The World's Fastest Storage®
Office: +44 (0) 1179 237 984
Mobile: +44 (0) 788 44 98 220
David.L at ramsan.com

Rushes Postproduction Limited, 66 Old Compton Street, London W1D 4UH
tel: +44 (0)20 7437 8676
web: http://www.rushes.co.uk

-------------- next part --------------
An HTML attachment was scrubbed...
URL:


From bevans at canditmedia.co.uk  Sat Feb 11 00:33:42 2012
From: bevans at canditmedia.co.uk (Barry Evans)
Date: Sat, 11 Feb 2012 00:33:42 +0000
Subject: [gpfsug-discuss] Hello, My Name Is...
Message-ID: <3A5FCB80-8B34-49CC-910B-B46E3E47B441@canditmedia.co.uk>

Barry Evans. GPFS is my profession and, sadly, my hobby.

I am the Technical Director of CandIT Media UK Ltd, an ISV/consultancy that uses GPFS to provide IP-based central storage for tapeless digital media workflows (plug over), and the owner of NSMM Ltd, a storage consultancy focused primarily on research and HPC (not really a plug), and I have been breaking GPFS since 2004 (opposite of plug - break means break! sorry...). I cut my teeth in HPC at OCF, the UK's most awesomest HPC integrator; now I mostly sell my wares in Media and Entertainment.

While I do not officially use GPFS (my user experience is limited to the Netgear ReadyNAS Duo behind my TV, currently 4% full), I spend a great deal of my time obsessing over your systems while you sleep. So I guess that sort of counts? I love your GPFS setup, simply because it runs GPFS. Whoever you may be. Really, it's very sad.

I live in the north of England, play in the south, support Welsh rugby and can BBQ a very tasty rack of ribs.
It's very confusing, but I'm happy to discuss in private, over a pint of cheap lager, how such an awkward situation can come to be.

While I am spouting, I'd like to give a HUGE HUGE HUGE thanks to Jez for putting his hand up for chairman of the group - he's an overly outspoken chap with quite enough on his hands without GPFSUG on top - as a user he has always been dedicated to making GPFS better than it already is and he will be, without any doubt, a fantastic voice 'of the people'. While GPFS may have found its way into HPC, we must not forget that the 'mm' in every GPFS command stands for 'multi-media', and Mr Tucker was one of the very few people in the Soho post-production community to see past the GUI-dripping allure of StorNext and Isilon and place his faith in the greatest, if not most temperamental, filesystem ever invented!

It was always a dream of mine and George's to have commercial orgs network with academia/research within this group, and it's clear this is taking place. Goal achieved, happy me.

Hope to see you all again soon!

-Barry


From mail at arif-ali.co.uk  Mon Feb 13 15:12:02 2012
From: mail at arif-ali.co.uk (Arif Ali)
Date: Mon, 13 Feb 2012 15:12:02 +0000
Subject: [gpfsug-discuss] Intro from me ...
Message-ID:

Hi all,

As you have gathered, my name is Arif Ali; my role here at OCF is HPC System Architect. I have been in HPC since joining OCF in 2003, and have been integrating HPC-based clusters in the UK ever since. A small percentage of that integration work also requires the set-up and maintenance of GPFS, with multi-clustering etc., and most of the software I install/integrate sits on the GPFS file systems.

My main work revolves around cluster installation and integration using xCAT (which I live and breathe). You can find me on #xcat on freenode for any assistance there.

As part of this group, I am maintaining the infrastructure of gpfsug, i.e. website, mailing list, domain, e-mails etc., so if you have any issues, please don't hesitate to contact me about them.

Finally, in my spare time I like to develop for xCAT (Ubuntu); I'm hoping to start Debian integration soon, and I also do some Android development.

--
Arif Ali
catch me on freenode IRC, username: arif-ali


From ANDREWD at uk.ibm.com  Fri Feb 17 10:30:47 2012
From: ANDREWD at uk.ibm.com (Andrew Downes1)
Date: Fri, 17 Feb 2012 10:30:47 +0000
Subject: [gpfsug-discuss] Andrew Downes intro
Message-ID:

Hi all,

I'm Andrew Downes, a 12-year IBMer who joined from Sequent and was therefore configuring SANs for my clients before they were called SANs... they were fibre-channel networks based on Brocade Silkworm 2000s, don't you just love marketing?

I've been aware of GPFS capability since I started configuring RS/6000 SP frames, but I've not felt much client pull in the industry segments I've worked in until the recent explosion of unstructured data in all industries. My current job title is systems architect, and as such I've recently come across several clients who are crying out for a cluster file system that's capable of running natively on the application server and/or migrating data to tape when not in use. Many of these clients would never think of IBM as a storage solutions company. Marketing again, grrr.

So, I'm here to absorb what others are doing, become part of the community if you'll generously permit, and offer what experience I have of designing solutions.
I live in Maidenhead, I'm married and have two lovely little girls, which, given that you know I work and would assume that I sleep (not enough), tells you most of what there is currently to know about me... Hopefully I'll rediscover hobbies as the girls grow up; some have suggested my weekend diary may start to free up around the year 2030. So my objective is still to be fit enough then to go back to rowing on the Thames and cycling in the Chilterns ;-) Currently I am enjoying Lego (Duplo), so maybe that will suit my fitness levels better, but I'm also determined one of the girls will like Scalextric!

I look forward to meeting most of you over time.

Regards,
Andrew
------------------
Andrew Downes MBCS CITP, South Territory, IBM U.K.
External Telephone: 07764 664439 (mob)
Internal IBM Telephone: 37 276649 (VOIP mobex)
mailto:andrewd at uk.ibm.com (use same ID for Windows Live Messenger)
Postal address: NH3W, PO Box 32, Normandy House, Bunnian Place, Basingstoke, RG21 7EJ

Unless stated otherwise above: IBM United Kingdom Limited - Registered in England and Wales with number 741598. Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/x-pkcs7-signature
Size: 9409 bytes
Desc: S/MIME Cryptographic Signature
URL:


From marting at storagebod.com  Wed Feb 22 18:25:24 2012
From: marting at storagebod.com (Martin Glassborow)
Date: Wed, 22 Feb 2012 18:25:24 +0000
Subject: [gpfsug-discuss] Introduction
Message-ID:

Hi,

I guess some people on the list might know who I am: my name is Martin Glassborow and I look after the Broadcast Storage Team at Sky. We have a variety of storage technologies, but the core of our file-based workflow (misnamed Tapeless) is GPFS and TSM, though we don't use the ILM functionality; the data movement is all controlled by the application. I'm also the storage blogger more commonly known as 'Storagebod' (http://www.storagebod.com).

I've had involvement with GPFS from before it was GPFS, and it's now interesting to see it begin to push into areas of use which could almost be called mainstream. Like many, I believe that GPFS is probably IBM's best-kept secret and it's about time it was shown some real love and respect; I constantly tell some of my peers out in the general IT market that they need GPFS, that it'll solve a huge number of their problems, and that it trashes pretty much every other cluster file-system out there.

I also organise a non-regular event known as #storagebeers; it tends to be in central London, although there are now #storagebeers all over the globe. It's a chance for storage folk to get together, drink beer and swap tall tales! It's vendor-neutral and a sales-free zone, although vendors are welcome and they are welcome to buy beers.

So that's me....

Martin

-------------- next part --------------
An HTML attachment was scrubbed...
URL:


From Jez.Tucker at rushes.co.uk  Fri Feb 24 17:14:01 2012
From: Jez.Tucker at rushes.co.uk (Jez Tucker)
Date: Fri, 24 Feb 2012 17:14:01 +0000
Subject: [gpfsug-discuss] GPFSUG Git Repo now available
In-Reply-To: <3147C311DEF9304ABFB764A6D5C1B6AE81403BB3@WARVWEXC2.uk.deluxe-eu.com>
References: <3147C311DEF9304ABFB764A6D5C1B6AE81403BB3@WARVWEXC2.uk.deluxe-eu.com>
Message-ID: <3147C311DEF9304ABFB764A6D5C1B6AE81403BD4@WARVWEXC2.uk.deluxe-eu.com>

Hello GPFSUG peeps,

I've created a git repo for gpfsug related tools.
You can access it here: https://github.com/tucks/gpfsug-tools
Or via a link from the gpfsug website.

I'm just doing final testing on a couple of things and will make the files public in the next week or so. Meanwhile, if you have any useful things to add (even simple scripts or example policies), please feel free to commit. Obviously - no NDA / copyright material. Open-sourced / OK'd-for-release material only, please.

Regards,

Jez
---
Jez Tucker
Senior Sysadmin
Rushes
GPFSUG Chairman (chair at gpfsug.org)

Rushes Postproduction Limited, 66 Old Compton Street, London W1D 4UH
tel: +44 (0)20 7437 8676
web: http://www.rushes.co.uk

-------------- next part --------------
An HTML attachment was scrubbed...
URL:


From Jez.Tucker at rushes.co.uk  Fri Feb 24 17:28:37 2012
From: Jez.Tucker at rushes.co.uk (Jez Tucker)
Date: Fri, 24 Feb 2012 17:28:37 +0000
Subject: [gpfsug-discuss] GPFSUG Git Repo now available - repo path change
Message-ID: <3147C311DEF9304ABFB764A6D5C1B6AE81403BFF@WARVWEXC2.uk.deluxe-eu.com>

Hello all,

It occurred to me that the repo should not be set up under my own account. Hence it is now at:

https://github.com/gpfsug/gpfsug-tools

Regards,

Jez
---
Jez Tucker
Senior Sysadmin
Rushes
GPFSUG Chairman (chair at gpfsug.org)

Rushes Postproduction Limited, 66 Old Compton Street, London W1D 4UH
tel: +44 (0)20 7437 8676
web: http://www.rushes.co.uk

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
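
For anyone following the solid-state-metadata thread above, or looking for a starting point for the kind of "example policies" Jez invites contributions of, below is a minimal, illustrative sketch of one common way to put GPFS metadata on SSD: dedicated metadataOnly NSDs in the system pool, with file data kept on separate dataOnly NSDs in another pool. This is an assumption-laden sketch, not anyone's production configuration: the device, NSD and server names are hypothetical, and the stanza syntax shown is the stanza-file form accepted by more recent GPFS releases (older releases use a colon-separated disk descriptor instead), so check the GPFS Administration guide for your release before relying on it.

    # Hypothetical NSD stanza file (e.g. for mmcrnsd -F meta_on_ssd.stanza)
    # Device, NSD and server names below are examples only.
    %nsd: device=/dev/mapper/ssd_lun0 nsd=meta_ssd_01 servers=nsdserv1,nsdserv2 usage=metadataOnly failureGroup=10 pool=system
    %nsd: device=/dev/mapper/ssd_lun1 nsd=meta_ssd_02 servers=nsdserv2,nsdserv1 usage=metadataOnly failureGroup=11 pool=system
    %nsd: device=/dev/mapper/sas_lun0 nsd=data_sas_01 servers=nsdserv1,nsdserv2 usage=dataOnly failureGroup=20 pool=data

With more than one storage pool in play, a default placement rule is needed so that file data lands in the data pool and stays off the metadata disks; a minimal policy file (installed with mmchpolicy against the filesystem) could be as small as:

    /* Send all new file data to the 'data' pool; metadata stays in 'system' */
    RULE 'default' SET POOL 'data'

The rationale for the split, as the thread above suggests, is that directory scans, file creates and policy scans hit metadata far harder than streaming I/O hits data, so a comparatively small amount of fast storage dedicated to the system pool can lift the whole filesystem without buying solid-state capacity for the data itself.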