From LRTarbox at uams.edu  Thu Oct 19 22:07:27 2023
From: LRTarbox at uams.edu (Tarbox, Lawrence)
Date: Thu, 19 Oct 2023 21:07:27 +0000
Subject: [gpfsug-discuss] Introduction
Message-ID:

The auto-generated welcome e-mail to this list asked me to introduce myself.

I am an Associate Professor in Biomedical Informatics at the University of
Arkansas for Medical Sciences (UAMS), Arkansas's primary medical school,
located in Little Rock, Arkansas, USA. My research focus is imaging
informatics, in particular large image archives. We currently operate The
Cancer Imaging Archive (TCIA - https://cancerimagingarchive.net) on behalf
of the National Cancer Institute, as well as several other image archives.

In addition to teaching and research, I am the Director of High Performance
Computing services at UAMS, where we run a small, circa 9,000-core Beowulf
cluster named Grace with a 2 PB Spectrum Scale storage system. We chose
Spectrum Scale for its small-file I/O performance: much of the load on
Grace is genome sequencing, which has a reputation for producing huge
numbers of smallish files, and Spectrum Scale has served us well in this
regard. The system started out under an OEM license for a DDN GRIDScaler,
which we have since converted to permanent regular Spectrum Scale licenses
(i.e., we can move the licenses to new hardware). We also operate a circa
6 PB Research Object Storage System named ROSS, which is based on other
technology (i.e., it is not Spectrum Scale).

--------
Lawrence Tarbox, Ph.D., Dept. of Biomedical Informatics, Univ. of Arkansas for Medical Sciences
Associate Professor and Director of the UAMS Center for High Performance Computing
Architect for The Cancer Imaging Archive (TCIA) [cancerimagingarchive.net] and PRISM [prismtools.dev]
Former User Co-Chair of the DICOM Standards Committee [dicomstandard.org]
mailto:LTarbox at uams.edu
+1.314.681-2752

----------------------------------------------------------------------
Confidentiality Notice: This e-mail message, including any attachments, is
for the sole use of the intended recipient(s) and may contain confidential
and privileged information. Any unauthorized review, use, disclosure or
distribution is prohibited. If you are not the intended recipient, please
contact the sender by reply e-mail and destroy all copies of the original
message.

From TROPPENS at de.ibm.com  Fri Oct 27 10:03:03 2023
From: TROPPENS at de.ibm.com (Ulf Troppens)
Date: Fri, 27 Oct 2023 09:03:03 +0000
Subject: [gpfsug-discuss] Storage Scale User Meeting @ SC23
Message-ID:

Greetings!

Registration for the Storage Scale User Group Meeting @ SC23 is now open.
We have an exciting agenda covering user stories, roadmap updates, and
insights into potential future product enhancements, plus access to IBM
experts and your peers.

Please note that the event location has changed:

Sunday, November 12th, 2023 - 12:30-18:00
The Grand Hyatt Denver

We look forward to welcoming you to this event. The user meeting is
followed by a Get Together to continue the discussion.

Please register here:
https://www.spectrumscaleug.org/event/storage-scale-user-meeting-sc23/

Best,
Ulf

12:30-12:40  Welcome
12:40-13:00  Storage Scale Strategy Update
13:00-13:20  Partner Talk - TBD
13:20-13:40  What is new in Storage Scale?
13:40-14:00  Lenovo - Eking out performance using MROT
14:00-14:15  What is new in Storage Scale System?
14:15-14:45  Break
14:45-15:25  Lightning talks with selected product updates by Starfish Storage and IBM
15:25-15:45  Sycomp - Storage Scale and Storage Scale System in a hybrid cloud world
15:45-16:15  Guardant Health - Experiences of Using Storage Scale to Create a Wide-area Single Namespace
             Guardant Health - Network Quality of Service and On-Demand Bandwidth Provisioning Using File System Events
16:15-16:35  Short Break
16:35-16:55  Storage Scale on IBM Cloud - Advanced features and performance update
16:55-17:15  University of Queensland - High performance S3 access with IBM Storage Scale, ECE and IBM Storage Fusion
17:15-17:45  Performance Update
17:45-18:00  Wrap-up
18:00-20:00  Get together

Ulf Troppens
Product Manager - IBM Storage for Data and AI, Data-Intensive Workflows

IBM Deutschland Research & Development GmbH
Chairman of the Supervisory Board: Gregor Pillen / Management: David Faller
Registered office: Böblingen / Registration court: Amtsgericht Stuttgart, HRB 243294

From shaof777 at gmail.com  Tue Oct 31 09:24:31 2023
From: shaof777 at gmail.com (shao feng)
Date: Tue, 31 Oct 2023 17:24:31 +0800
Subject: [gpfsug-discuss] management API gives stale data?
Message-ID:

Hello,

I have a three-node cluster with the GUI installed on all nodes. Sometimes
the management REST API returns stale data. For example, after successfully
creating a fileset through the REST API on node2, I can see the new fileset
in the response to a list-filesets request
(/scalemgmt/v2/filesystems/fs1/filesets) on node2. However, if I issue the
same request to node1, the response does not contain the new fileset.
Restarting gpfsgui on node1 seems to make the REST API on node1 return
correct data again. My question is: why is stale data returned, and is
there a configuration change that can avoid it?

Thank you!
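P.S. In case it helps anyone reproduce this, below is a minimal Python
sketch (using the requests library) that polls the filesets endpoint on
every GUI node and prints each node's answer, so a stale node stands out.
The node names, credentials, port, and the response field names
("filesets"/"filesetName") are placeholder assumptions; please adjust them
to your environment and the v2 API documentation.

    import requests

    # Assumptions: each GUI node serves the REST API on port 443 with
    # basic auth, and the response is a JSON object with a "filesets"
    # list whose entries carry a "filesetName" field (per the v2 API).
    # verify=False skips validation of the GUI's self-signed certificate;
    # point requests at a proper CA bundle in production.
    GUI_NODES = ["node1", "node2", "node3"]   # placeholder hostnames
    AUTH = ("admin", "changeme")              # placeholder credentials
    URL = "https://{host}:443/scalemgmt/v2/filesystems/fs1/filesets"

    for host in GUI_NODES:
        resp = requests.get(URL.format(host=host), auth=AUTH, verify=False)
        resp.raise_for_status()
        names = sorted(f["filesetName"] for f in resp.json().get("filesets", []))
        print(f"{host}: {len(names)} filesets -> {names}")

    # If one node reports fewer filesets right after a create on another
    # node, that node's GUI is returning stale data.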