Smile Politely

Checking out the new Delta supercomputing system

In December 2021, the Blue Waters supercomputing system at the National Center for Supercomputing Applications powered down after 10 years of service. Now, with an award from the National Science Foundation, a new system is taking its place.

When it launches, the Delta system will be “the most performant GPU computing resource in NSF’s portfolio, making it a prime destination for advanced scientific research.”

After seeing the system up close, at its home in the National Petascale Computing Facility at the corner of Oak Street and St. Mary’s Road, I reached out to Tim Boerner, Co-Principal Investigator and Deputy Project Director, for a little insight into the significance of this launch.

Smile Politely: Who are the leads on this project?

Tim Boerner: There are several lead roles on the Delta award:

William Gropp; Principal Investigator (PI) and Project Director; leads the overall project and is ultimately accountable to the NSF for its success. Gropp is also NCSA’s Director.

Greg Bauer; Co-Principal Investigator (Co-PI); lead for researcher (user) support for Delta.

Brett Bode; Co-PI; lead for system deployment and operations.

Tim Boerner; Co-PI and Deputy Project Director; lead for Delta project office (i.e., responsible for conduct of the project/award on a day-to-day basis).

Laura Herriott; no formal role on the NSF award itself; leads the allocations effort for researchers requesting time on Delta and leads/represents NCSA’s User Services directorate in support of Delta.

Amy Schuele; Co-PI; leads Delta’s efforts for Accessibility (i.e., how usable the system is by individuals with visual or other impairments) and leads/represents NCSA’s Integrated Cyberinfrastructure (ICI) directorate in support of Delta. Note: ICI is the part of NCSA that houses our technical teams that operate resources like Delta.


Photo by Jorge Murga.

SP: Can you talk about the inception of Delta…the plan to launch a new system and how long it took to get it up and running?

Boerner: The award of a system like Delta is part of a broader program at the National Science Foundation. In the case of Delta, that is the Advanced Computing Systems & Services program. The NSF will put out solicitations for proposals to deploy and operate a large computing resource. This prompts centers across the country (like NCSA) to respond with a proposal for the kind of computing and data resource that would best benefit the nation and its researchers.


Photo by Jorge Murga.

Winning a system like Delta is about more than just how “powerful” the computing system is. The NSF also looks at other ways you will bring value to the nation and how well-suited the organization is for operating a resource like the one being proposed.

When we started planning the proposal that eventually resulted in the Delta system, we identified three key pillars that we saw as important areas in which to focus: (1) advancing the continued adoption of graphics processors (“GPUs”) in scientific computing, (2) demonstrating the value of using modern file systems that will be more stable and performant for large computing workloads, and (3) advancing the usability and accessibility of resources like Delta. There has been work in each of these areas, but we wanted to continue to pursue and advance change, and we captured that vision and sense of change for the future in the name. When you see the name Delta, think of the Greek letter delta, which is often used to represent change in scientific equations. As the concept solidified, we started to design and budget for the resource itself. Then we wrote. And then we waited to hear back from the NSF. Sometimes that wait can be 4-6 months.

Photo by Jorge Murga.

That’s all in the planning. Once you get the award, the real work starts: revising and updating system designs, specifications, and costs to account for unforeseen changes between when the proposal was written and the current state of technology. As the hardware is finalized and ordered, the project effort that goes around that hardware gets ramped up. Teams from across NCSA come together to provide a variety of expertise that makes a system like Delta possible: research application specialists; user support specialists; systems, network, and storage engineers; project managers; allocations managers; technical and project leads. Everyone has a contribution to make, and it all gets coordinated and implemented. You can think of it like watching an orchestra preparing to play a symphony. Right now, with Delta, the curtain is about to go up for our opening night.


Photo by Jorge Murga.

SP: If you were speaking to someone (someone like me who is not familiar with language surrounding supercomputing) about Delta and how it will be used, how would you describe its purpose? Can you give a specific example of who is using it and for what?

Boerner: Often researchers will pose big questions that can only be answered with a lot of computing work. Delta will be used by researchers across the nation to help them answer these big questions that come up in their research. It will do this with its substantial computing power and ability to move large amounts of data around inside the system. A researcher might want to simulate what happens in certain situations down to the molecular level, say to better understand RNA.

One method of computational research that has seen explosive growth in recent years is artificial intelligence (AI). Specifically, there are some techniques within the field of AI that are called “machine learning” and “deep learning”. A system like Delta, with its large number of graphics processors, excels at this kind of work. Some examples of researcher use cases in this area that we identified early in our planning include: computational archaeology, where it can advance understanding of historical, climate-driven events; bioengineering, where it will make new types of imaging techniques possible; multi-messenger astrophysics, where it will be able to identify and characterize gravitational wave sources in real time; and natural language processing, where researchers are looking to improve healthcare by analyzing doctor-patient conversations to develop methods of summarizing the content of those conversations for the patient to take home from a doctor’s visit.

SP: How do its capabilities compare to those of the Blue Waters system?

Boerner: This is a tricky comparison to make, as the two systems were designed with different user communities in mind and with vastly different budgets. Where Blue Waters was designed to support a smaller number of very large and complex research workloads, Delta is designed to support a much larger number of more typical research workloads. That said, the year-over-year performance improvements in computing technology mean that the much smaller Delta packs a significant punch for its size. There are some types of work, like those using AI methods, where Delta will really shine compared to any system that the NSF has out there today.

Top photo by Jorge Murga.

Staff writer
