San Diego Supercomputer Center (photo courtesy of Alan Decker)
I recently had the opportunity to sit down with Richard Moore, deputy director of the San Diego Supercomputer Center (SDSC) at the University of California, San Diego, and Rick Wagner, SDSC’s high-performance computing systems manager. We discussed their views on high-performance and technical computing and chatted about Comet, a new petascale supercomputer built on 27 racks of PowerEdge C6320 servers (1,944 nodes and 46,656 cores), representing a whopping five-fold increase in compute capacity over SDSC’s previous system.
Ravi Pendekanti: Thank you for taking time with me today. To start things off, I’d be interested in understanding how you are seeing high-performance computing evolving.
Richard Moore: High-performance computing is becoming increasingly integral to research across a diverse set of disciplines. We believe this trend will continue, and we want to meet and stay ahead of researchers’ needs.
An example of this is the work we have done with “Science Gateways,” an idea initiated about eight years ago to develop portals that give people access to HPC systems through an easy-to-use interface. HPC codes can be very complicated, but our gateway users don’t need to know, or even understand, the complexities of the command-line interface because we were able to make it more user-friendly.
Rick Wagner: Another area of growth in computational science we’re seeing is in new communities, not just the traditional ones like physics, chemistry, or seismology. The growth is coming from across all fields, including sociology and finance. There is a continuum of job sizes from small to large, and the number of users and jobs is growing very quickly.
RP: With growth come additional pain points. Is that what you were trying to overcome with Comet?
RM: Exactly. With Comet, accessibility was top of mind. We like to say that Comet provides ‘HPC for the 99 percent’: it’s about giving access to a much larger research community and serving as a gateway to discovery. In order to provide that, we needed to start with a solid hardware foundation. We chose the Dell PowerEdge C6320s over competitive solutions because of Dell’s reputation in the HPC space, its leading hardware design and innovations, and its ease of deployment.
RP: How are things going with the project?
RM: I’m pleased to report that Comet is now in production. We were able to get friendly users onto the system less than two weeks after the hardware hit the floor, and we’re now fully operational.
RP: So, let’s go back, if you will, to address how the decision was made to engage with Dell on this project.
RW: We hadn’t had a large deployment with Dell in the past, and with this project we knew that reliability was a huge factor. Working with Dell resonated with us. Last summer, we flew out to the Tennessee integration facility to see the system being assembled. During that trip, we met with and got to know Dell engineers. We had even more confidence after that trip that we were working with the right partner.
RM: To say that we were delighted with the ease of deployment and ongoing support would be an understatement. The system went up quickly, and we’ve been impressed with the design, reliability, and compute performance. We’re thrilled to be working with Dell as we expand access to researchers.
RP: What a positive way to end the conversation. Thanks to you both for meeting with me today; it’s been a pleasure.