By Bryan Hay

Information Technology Services has been working to expand research and high-performance computing (HPC) at Lafayette and recently completed a hardware expansion whose resources are now available campus-wide to support research.

Hardware investments have opened up vast resources for students and faculty to conduct research across all disciplines and have significantly increased the computing power available to the college community.

“It’s been our goal to not only enhance research resources at Lafayette through high-performance computing but to transform them,” says Peter Goode, research computing systems administrator.

Goode and Jason Simms, instructor for foreign languages and literature and manager of research and high-performance computing, recently discussed the upgrades to Lafayette’s research computing system and what lies ahead.

Jason Simms in the High Performance Computing server room

What is high-performance computing (HPC) and how does it assist faculty and students?

PG: HPC essentially takes multiple interconnected computers (nodes) and applies them to solving complex calculations or analyzing large amounts of data. Working together in parallel, the nodes give researchers far more computational resources than a conventional computer can provide. Lafayette’s HPC resources enable faculty and students to complete processing tasks in hours or days that might otherwise take weeks or months.

JS: Fundamentally, HPC facilitates research that would otherwise be challenging or impossible to complete using a standard personal computer. Many analyses require the CPU and memory resources of several personal computers running continuously for many days, or perhaps they require far greater amounts of storage—hundreds of gigabytes or even multiple terabytes—than are available on a single computer. The HPC environment also enables multiple such jobs to run concurrently, rather than having to wait for one to complete before starting the next, which can make research much more efficient. Our primary goal is to encourage users to reconsider what is even possible in their research and courses, and then to support those efforts successfully.
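
To make that idea concrete, here is a minimal Python sketch of the same pattern on a single machine: independent analyses run concurrently across worker processes rather than one after another. The dataset names and the analyze function are hypothetical stand-ins, not part of Lafayette’s environment; on an actual cluster, a job scheduler spreads this kind of work across whole nodes rather than just cores.

```python
# Minimal sketch of running independent analyses in parallel.
# The dataset names and analysis function are hypothetical.
from multiprocessing import Pool
import time

def analyze(dataset):
    """Stand-in for a long-running analysis of one dataset."""
    time.sleep(1)  # simulate compute-heavy work
    return f"{dataset}: done"

datasets = [f"sample_{i}" for i in range(8)]

if __name__ == "__main__":
    start = time.time()
    with Pool(processes=4) as pool:      # four workers run concurrently
        results = pool.map(analyze, datasets)
    print(results)
    # ~2 seconds with 4 workers, versus ~8 seconds run serially
    print(f"finished in {time.time() - start:.1f}s")
```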

What are some of the recent HPC investments?

PG: The Research and HPC team, with the assistance of Pete Hoernle, manager of IT Infrastructure, recently completed an expansion of the College’s HPC resources, collectively known as the HPC cluster. Originally installed in September 2019, the cluster gained in recent weeks two ‘compute’ nodes to complement the four already in service, a storage node providing some 340TB (terabytes) of capacity on top of the existing 40TB, and a Graphics Processing Unit (GPU) node housing two NVIDIA Quadro RTX 8000 GPUs. The GPU node both adds a capability that did not previously exist at the College and, for specialized applications, delivers performance an order of magnitude greater than the standard compute nodes can offer.
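
As a rough, hypothetical illustration of the kind of specialized workload a GPU node accelerates, the sketch below times a large matrix multiplication on the CPU and then on a GPU. It assumes a framework such as PyTorch is installed on the node; the article does not specify the cluster’s software stack, so treat this as illustrative only.

```python
# Hypothetical illustration of GPU acceleration: large matrix math,
# common in machine learning and simulation, runs far faster on a GPU.
# Assumes PyTorch is available; the matrix sizes are arbitrary.
import time
import torch

a = torch.rand(4096, 4096)
b = torch.rand(4096, 4096)

start = time.time()
c_cpu = a @ b                                # runs on the CPU
print(f"CPU: {time.time() - start:.2f}s")

if torch.cuda.is_available():                # true on a GPU node
    a_gpu, b_gpu = a.cuda(), b.cuda()        # move data to GPU memory
    start = time.time()
    c_gpu = a_gpu @ b_gpu
    torch.cuda.synchronize()                 # wait for the GPU to finish
    print(f"GPU: {time.time() - start:.2f}s")
```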

How do these improvements enhance research?

PG: These improvements broaden Lafayette College’s capacity to accommodate faculty- and student-driven research projects that require complex calculations or analyses of very large datasets. Research does not always appear as though it might benefit from an HPC facility, but in just several months, we’ve already been able to accommodate the research objectives of a broad array of disciplines from across the College, clocking in excess of 20 years’ worth of CPU time. We were also limited by the existing 40TB of available storage. That may seem large, but in the world of research computing, the associated datasets and the results of iterative complex calculations can quickly make it look rather small. Hence, a priority of the expansion project was to significantly expand our storage capacity.

JS: Adding these new resources provides three main benefits. First, the additional compute nodes allow more jobs to run simultaneously, making research more efficient. Second, the expanded storage capacity means we can host more, and larger, datasets, opening up new research opportunities. And third, the new GPU node offers optimized computation for specialized applications that we previously could not support.

How might a faculty member or student in engineering or the sciences use the system as compared to counterparts in the arts and humanities?

JS: When most people think about HPC, STEM and other quantitative applications typically come to mind. We have, for example, facilitated projects in molecular dynamics, genomics, mathematics, and physics. Beyond this, however, HPC is increasingly being deployed within the social sciences and humanities, as well as for qualitative projects. At Lafayette, we have supported analysis of campaign finance data spanning multiple decades, as well as ongoing analysis of huge amounts of Twitter data related to COVID-19. For both STEM and non-STEM users, a typical benefit of HPC is the ability to work with larger amounts of data at once, whether genomic datasets, entire corpora of classical literature, or hundreds of thousands of images. We hope our HPC users, regardless of their discipline, will dream big when thinking about what kinds of research might be within reach.

What is a “research computing cluster” and what kinds of research does that support?

PG: A computing cluster in the context of research computing is a collection of individual computers, or nodes, all linked together by an extremely fast network known as InfiniBand. Each node houses multiple processors, each of which has multiple cores (as many as 26 per processor), all of which can leverage large volumes of memory and storage. Effectively, they are the same components as any standard computer, but on serious steroids.

Thus, a research computing cluster is specifically designed to run processor-, memory-, and data-intensive tasks that a conventional desktop or laptop either could not accomplish at all or would take an impractical amount of time to complete. This concept is known as parallel computing, the secret to delivering true high performance.

There are really no limits to the applications of HPC for research. Any discipline may benefit from such a facility when either calculations or datasets are beyond the reasonable capacity of a personal computer. While that might sound like the realm of STEM, the arts and humanities can benefit too, for example, in analyzing vast amounts of text, language, or images with the relevant software application.
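
As a small, hypothetical example of what that can look like for a text-heavy project, the Python sketch below counts word frequencies across a directory of plain-text files in parallel. The corpus path and file layout are illustrative, not a description of any actual Lafayette project.

```python
# Hypothetical sketch: parallel word-frequency counting over a large
# collection of plain-text files, a common humanities-style workload.
# The corpus path is illustrative; any directory of .txt files works.
from collections import Counter
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path

def count_words(path):
    """Count word occurrences in a single file."""
    text = Path(path).read_text(errors="ignore").lower()
    return Counter(text.split())

if __name__ == "__main__":
    files = list(Path("corpus/").glob("*.txt"))  # illustrative path
    totals = Counter()
    # Each file is processed on its own core; counts are merged as they finish.
    with ProcessPoolExecutor() as pool:
        for counts in pool.map(count_words, files):
            totals.update(counts)
    print(totals.most_common(10))
```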

How do faculty and students access the College’s HPC resources and learn how to use them?

PG: Any Lafayette faculty member or student can request access to the HPC cluster by sending an email with the details of their request to help@lafayette.edu.

JS: We’ll want to discuss your intended use cases with you to ensure that we can support them and that you have access to the resources you need. We also offer training and documentation on various aspects of the cluster, but the nature of most HPC research means that one user’s workflow may be rather different from another’s, so we provide customized support when possible.

Is there any cost to use the HPC resources?

PG: The HPC cluster is available to the Lafayette community for research purposes through the ITS Division at no cost to the researcher or their department. Researchers with sources of funding, such as grant monies, are able to procure additional resources to which they receive priority access. When those resources lie unused, however, they are made available to the broader community as a shared resource.

What are your plans with respect to our peer institutions?

JS: We are just beginning to embark on multiple collaborative projects with peer and non-peer institutions, such as Lehigh, Villanova, Swarthmore, and Rutgers, that will facilitate enhanced collaboration and research opportunities. One project may result in a regional science network, enabling us to share data and research with other institutions more easily, including institutions historically without access to HPC resources, such as community colleges and even high schools. Lafayette is already in a leadership position on this and other projects and is well positioned with respect to our peers in terms of HPC capacity and usage. We’re excited to see how these efforts unfold over the coming months!
