
Advanced Computing
The IDSC Advanced Computing team designs and analyzes high-performance computing (HPC) systems for big-data-system users from a variety of application areas. The team has expertise in developing distributed computing, software, and databases. They are also exploring and developing new parallel-computing paradigms and architectures for researchers who need to process, store, retrieve, analyze, and understand massive data sets, where computation and storage breakthroughs are essential. The team strives for synergy and interaction with all classes of research.
Over the past decade, the University's Advanced Computing platform has grown the HPC cyberinfrastructure from the ground up into a regional advanced-computing environment supporting more than 500 users and a large, state-of-the-art high-performance storage system (over 10 PB).

The University's latest acquisition, Triton, a state-of-the-art IBM POWER9 system, was rated one of the Top 5 academic-institution supercomputers in the US for 2019 and is UM's first GPU-accelerated high-performance computing (HPC) system, representing a completely new approach to computational and data science. Built from IBM Power Systems AC922 servers, Triton was designed to maximize data movement between the IBM POWER9 CPU and attached accelerators such as GPUs, and it accommodates traditional HPC, interactive data science, big-data AI, and machine learning workloads. This represents a quantum leap in the University's computing infrastructure, designed to address the ever-expanding needs of data-driven research.

The University's first supercomputer, Pegasus, an IBM iDataPlex system, was ranked number 389 on the November 2012 Top 500 Supercomputer Sites list. Pegasus provided over 90 million CPU hours per year to research projects, with a utilization rate near 85%. Beginning in fall 2022, Pegasus offers researchers from the University of Miami Sylvester Comprehensive Cancer Center new computational and storage resources: the cluster has been augmented with an additional 1,024 cores, 6 TB of memory, and 1 PB of high-performance IBM ESS storage. Each additional compute node is equipped with two 32-core 3.0 GHz Intel Xeon Scalable processors and 256 GB of DDR4-3200 memory. These additional resources respond to the growing demand for computational and storage capacity at Sylvester and across the University.

AI and Data Science Capabilities
Advanced Computing Team
Ravi Vadapalli | Director
Research Associate Professor, Dept. of Electrical and Computer Engineering, UM College of Engineering

Dr. Vadapalli joined IDSC after serving as Program Director for the Center for Agile and Adaptive Additive Manufacturing and Senior Director for IT Support at the University of North Texas (UNT) in Denton. Prior to UNT, Dr. Vadapalli was a Senior Research Scientist at the High-Performance Computing Center at Texas Tech University (TTU) in Lubbock. In those roles, he helped secure nearly $3 million in external funding, more than $40 million in grant proposals, and more than $230 million in in-kind grants for skilled workforce training. One of Dr. Vadapalli's priorities is advancing cancer care and research through machine learning (ML), artificial intelligence (AI), and powerful computer models. At TTU, he partnered with researchers at M.D. Anderson Cancer Center and Rice University to accelerate the development of patient treatment plans. Dr. Vadapalli is also interested in applying advanced computing tools to develop sophisticated climate models involving ocean and atmospheric conditions. Dr. Vadapalli holds a Doctorate in Nuclear Physics from Andhra University in India and a Master's degree in Computational Engineering from Mississippi State. He and his wife have two sons. Read the IDSC Magazine article welcoming Dr. Vadapalli.
Warner Baringer | Assistant Director

Prior to joining the Frost Institute for Data Science and Computing (formerly the "Center for Computational Science"), Warner was a senior research associate in the Rosenstiel School of Marine and Atmospheric Science's Division of Meteorology and Physical Oceanography, where he managed the remote sensing computer lab and implemented distributed, high-performance systems capable of storing and processing decades of remotely sensed data. Warner supports all C, C++, and Perl programming for the core and is responsible for all parallel file systems (GPFS, GFS, GFFS XSEDE Pilot project). He has started porting several codes to Phi, including two satellite-mapping programs for NASA. He is a graduate of Tulane University in New Orleans, LA.
