


[NRP-2019-09] All the cloud GPUs for Astrophysics - Jan 27 - 1pm ET


  • From: Dana Brunson <>
  • To: multiple recipients (addresses redacted), VRDISCUSS-L <>
  • Subject: [NRP-2019-09] All the cloud GPUs for Astrophysics - Jan 27 - 1pm ET
  • Date: Sun, 26 Jan 2020 23:26:08 +0000

Reminder – see you all tomorrow (Monday):

 

Please join us for the NRP Engagement webinar featuring

Igor Sfiligoi and Frank Würthwein presenting:

“Running a 380 PFLOP32s GPU burst for Multi-Messenger Astrophysics with IceCube across all available GPUs in the Cloud”

Monday, January 27, 2020 at 1 pm ET / 12 pm CT / 11 am MT / 10 am PT

Zoom:  https://internet2.zoom.us/j/735965245 

Feel free to share this announcement with anyone who may be interested.  Calls will typically be on the fourth Monday of the month.  To hear about future activities, please join the NRP engagement email list via this link and visit the wiki.

 

Running a 380 PFLOP32s GPU burst for Multi-Messenger Astrophysics with IceCube across all available GPUs in the Cloud

Igor Sfiligoi and Frank Würthwein

The IceCube Neutrino Observatory is the National Science Foundation's (NSF) premier facility to detect neutrinos with energies above approximately 10 GeV and a pillar of NSF's Multi-Messenger Astrophysics (MMA) program, one of NSF's 10 Big Ideas. The detector is located at the geographic South Pole and is designed to detect interactions of neutrinos of astrophysical origin by instrumenting over a gigaton of polar ice with 5160 optical sensors. The sensors are buried between 1450 and 2450 meters below the surface of the South Pole ice sheet. To understand how the ice properties affect the detection of incoming neutrinos and the reconstruction of their origin, photon propagation simulations on GPUs are used. We report on a few-hour GPU burst across Amazon Web Services, Microsoft Azure, and Google Cloud Platform that harvested all GPUs available for sale across the three cloud providers the weekend before SC19, reaching over 51k GPUs in total and 380 PFLOP32s. The GPU types spanned the full range of generations, from the NVIDIA GRID K520 to the most modern NVIDIA T4 and V100. We report the scale and science performance achieved across the various GPU types, as well as the science motivation for doing so.
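
As a quick sanity check on those headline figures (my own arithmetic, not part of the abstract), 380 PFLOP32s spread over roughly 51k GPUs works out to about 7.5 TFLOPS of fp32 per GPU on average, which sits between an older GRID K520 and a modern T4 or V100, consistent with the mixed fleet described above. A minimal sketch of the calculation:

# Back-of-envelope check (illustrative only): average fp32 rate per GPU
total_pflop32s = 380           # aggregate fp32 performance quoted in the abstract, in PFLOPS
gpu_count = 51_000             # approximate peak GPU count quoted in the abstract

avg_tflop32s = total_pflop32s * 1_000 / gpu_count   # PFLOPS -> TFLOPS per GPU
print(f"~{avg_tflop32s:.1f} TFLOP32s per GPU on average")   # prints ~7.5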

Igor Sfiligoi is Lead Scientific Software Developer and Researcher at UCSD/SDSC.  He has been active in distributed computing for over 20 years. He started in real-time systems, moved to local clusters, and worked with leadership HPC systems, but has spent most of his career on computing that spans continents. For about 10 years he worked on one such worldwide system, glideinWMS, which he brought from the design table to being the de facto standard for many scientific communities. He has recently shifted his attention to supporting users on top of Kubernetes clusters and Cloud resources. He holds an M.S.-equivalent degree in Computer Science from the Università degli Studi di Udine, Italy. He has presented at many workshops and conferences over the years and has several published papers.

Frank Würthwein is the Executive Director of the Open Science Grid, a national cyberinfrastructure to advance the sharing of resources, software, and knowledge, and a physics professor at UC San Diego. He received his Ph.D. from Cornell in 1995. After holding appointments at Caltech and MIT, he joined the UC San Diego faculty in 2003. His research focuses on experimental particle physics and distributed high-throughput computing. His primary physics interests lie in searching for new phenomena at the high-energy frontier with the CMS detector at the Large Hadron Collider. His topics of interest include, but are not limited to, the search for dark matter, supersymmetry, and electroweak symmetry breaking. As an experimentalist, he is interested in instrumentation and data analysis. In recent years, this has meant developing, deploying, and now operating a worldwide distributed computing system for high-throughput computing with large data volumes. In 2010, "large" data volumes were measured in petabytes; by 2025, they are expected to grow to exabytes.

 

For info on this and past calls, please visit the wiki.

 

Dana

 

Dana Brunson

Executive Director for Research Engagement

Internet2

 



