One of the perks of my job is that it lands me at some really interesting conferences related to computing infrastructure. Most recently, I attended Advancing Research Computing on Campuses (ARCC): Best Practices Workshop – where I took notes, so you didn’t have to!
Held on the University of Illinois campus at the National Center for Supercomputing Applications (NCSA), the workshop covered best practices for operating and supporting shared research-computing infrastructure, grid and cluster computing, storage and data management, and infrastructure collaboration between campuses at local, regional, national – and even international – scales. The collaboration sessions in particular offered valuable insight into how distributed infrastructure can support the increasing demand for research-computing resources.
On the technical side, I picked up a few tips on parallel R, DDN's WOS storage solution (including its NFS and ownCloud gateways), and configuration management with Salt – not to mention the architecture, design and production considerations of supercomputing clusters.
The workshop was also a great networking experience, as just under 100 dedicated research-computing and other IT professionals from all over the country shared ideas, experience and suggestions, waxing poetic over several meals and a lot of coffee on everything from current computer privacy issues to the tectonic shift coming with the touchscreen generation.
As a participant in the Campus Champions program, I was pleased to see my member organization, XSEDE, strongly represented. In addition, I was introduced to the Coalition for Academic Scientific Computation (CASC), which I'd never heard of but now plan to get more involved in.
Among the organizations represented by workshop attendees, Wharton Research Computing stood out as a small-yet-nimble, high-quality research-computing shop. The discussions on how to scale code beyond the desktop or departmental server made it clear to me that we are ahead of the game, primarily thanks to our centrally funded research-computing cluster. But when I take my Wharton hat off and put my Penn hat on, I can see that we definitely have some work ahead of us when it comes to connecting the various islands of HPC/HTC on campus.
It also became clear that significant effort is going into identifying “big fish” and bringing them onto the national cyberinfrastructure. Our Wharton researchers don’t necessarily have a pressing need to run large parallel codes, but we can offer that capability by helping them apply for XSEDE allocations. By and large, our focus has been on getting people to the cloud instead. And while cloud computing was notably absent from the overall conversation, I didn’t let that stop me from blowing away a few common preconceptions by elaborating on scripting clusters on Amazon AWS – especially the cost benefits of spot pricing for bursty workloads. What can I say? We’re just way ahead of the pack on this, too – and I like that the cloud supports existing Beowulf-style clustering and queue workflows, in addition to providing a huge portfolio of new PaaS APIs. (This is of particular interest to me, as I see it as a key component of staying ahead of the pack and remaining relevant.)
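The spot-pricing argument is easy to make concrete with a little back-of-the-envelope arithmetic. The sketch below compares a bursty workload run on-demand, on spot, and on an always-on cluster – every rate and workload size in it is a hypothetical placeholder I've picked for illustration, not an actual AWS price:

```python
# Back-of-the-envelope cost comparison for a bursty cluster workload.
# All rates and workload numbers are hypothetical, not real AWS pricing.

ON_DEMAND_RATE = 0.40   # $/instance-hour (hypothetical)
SPOT_RATE = 0.12        # $/instance-hour (hypothetical; spot is typically a deep discount)

def cluster_cost(rate_per_hour, nodes, hours):
    """Cost of running `nodes` instances for `hours` at a flat hourly rate."""
    return rate_per_hour * nodes * hours

# A bursty workload: 50 nodes for 6 hours, four times a month,
# versus keeping an equivalent cluster running 24/7.
burst_nodes, burst_hours, bursts_per_month = 50, 6, 4

on_demand = cluster_cost(ON_DEMAND_RATE, burst_nodes, burst_hours) * bursts_per_month
spot = cluster_cost(SPOT_RATE, burst_nodes, burst_hours) * bursts_per_month
standing = cluster_cost(ON_DEMAND_RATE, burst_nodes, 24 * 30)  # always-on equivalent

print(f"On-demand bursts: ${on_demand:,.2f}/month")
print(f"Spot bursts:      ${spot:,.2f}/month")
print(f"Standing cluster: ${standing:,.2f}/month")
```

The point isn't the specific numbers – it's that when the work arrives in bursts, you pay only for the hours you actually burn, and spot pricing discounts those hours further still.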
In addition, I made it a point to find out how some of our peers fund research assistants and/or staff programmers tasked with research coding and optimization. The answers ranged from “we don’t do that” to “we have six full-time programmers just for research coding.” Funding-wise, many seem to follow the soft-money route, spending a significant amount of time chasing and securing grant dollars; others go directly to their VP of Research and petition for a paid position with fixed terms. One interesting idea I came across, though, was cross-school pollination with interested faculty – e.g., a computer-science student coding alongside a biologist, giving the student “real-world” experience.
In any event, I came away with the realization that the fiscal solution comes down to making a compelling case for central support of research computing, which strengthens our ability to draw in the best talent and bolsters grant applications with working examples of why lots of money should be thrown our way.
And that seems as good a note as any to end things on.
We should get on the CASC meetings. Good stuff!