
Explore Temple’s High Performance Computing team and resources.

COMPUTE

Interactive High-Performance Computing Servers


MACHINE LEARNING

Dedicated GPU servers for machine learning

The HPC team operates two dedicated GPU servers for computationally intensive GPU workloads. These servers provide optimized software stacks for neural networks and deep learning through Singularity containers. Users can work interactively and use containers for popular software packages such as TensorFlow, Caffe2, PyTorch, and many more.
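
As an illustration, a minimal Python check along the following lines can be run inside one of these containers to confirm that the GPUs are visible. It assumes a container image with PyTorch installed; the exact image names and launch commands are site-specific.

    # Minimal sketch (assumes a PyTorch container started with GPU support,
    # e.g. via Singularity's --nv option); image names are site-specific.
    import torch

    def report_gpus() -> None:
        if not torch.cuda.is_available():
            print("No CUDA-capable GPU visible inside this container.")
            return
        for idx in range(torch.cuda.device_count()):
            props = torch.cuda.get_device_properties(idx)
            print(f"GPU {idx}: {props.name}, "
                  f"{props.total_memory / 1024**3:.1f} GB of memory")

    report_gpus()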


OWL'S NEST

High-Performance Computing cluster

Owl’s Nest is the latest Linux cluster for high-performance computing. It was procured and assembled in 2017.

It features 180x new dual-socket compute nodes with 28 cores and 128GB of RAM each. Research with larger memory requirements will benefit from 6x 512GB, 6x 1.5TB, and 2x 3TB RAM machines. In addition, each 512GB node also hosts two NVIDIA P100 GPUs. An EDR InfiniBand (100Gb/s) fabric is used as the interconnect.

In November 2018, the cluster was extended with 48 additional dual-socket compute nodes with 16 cores and 96GB of RAM each.

In total, Owl’s Nest currently hosts 6,464 CPU cores, providing about 57 million service units (CPU core hours) per year (6,464 cores × 8,760 hours per year ≈ 56.6 million core hours).

All of this is backed by a 1.5PB parallel storage system that hosts all user data and is shared across the entire cluster. An additional 0.5PB of storage is available for large public (read-only) data sets.

The High-Performance Computing (HPC) Team consists of two full-time staff members and research faculty at the College of Science and Technology. We operate more than 400 servers providing shared HPC resources on campus.

Michael L. Klein

Dean and Laura H. Carnell Professor of Science

Axel Kohlmeyer

Associate Dean

Vincenzo Carnevale

Associate Professor