
lipwig (formerly emil)

The cluster btrzx1 (lipwig) went into operation in August 2020 and was rebuilt in December 2025 / January 2026. Besides two login nodes, it consists of 345 compute nodes, which are connected by an InfiniBand network, and a new Panasas file system. Each compute node provides two AMD Epyc processors (2nd generation) with 16 cores each and 128 GB of main memory. Unlike the previous clusters, btrzx1 uses Slurm as its resource manager instead of PBS/Torque. Moreover, the ITS file server (e.g., the ITS home directory) is not mounted on the cluster for performance reasons; instead, every user has a separate home directory located on the Panasas file system.

Acknowledging lipwig / Publications

As with other DFG-funded projects, results must be made available to the general public in an appropriate manner. Publications must contain a reference to the DFG funding (the so-called “Funding Acknowledgement”) in the language of the publication, stating the project number.

Whenever lipwig has been used to produce results that appear in a publication or poster, we kindly request that the service be cited in the acknowledgements:

Calculations were performed using the lipwig-cluster of the 
Bayreuth Centre for High Performance Computing (https://www.bzhpc.uni-bayreuth.de),
funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - project number 422127126.

The funding acknowledgement is mandatory.

Login node

  • lipwig.hpc.uni-bayreuth.de
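
Access is via SSH. A minimal example, where bt123456 is a placeholder for your own user ID:

# log in to the cluster (replace bt123456 with your own user ID)
ssh bt123456@lipwig.hpc.uni-bayreuth.de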

Compute nodes / partitions

  • 357 nodes, each:
    • 2x 16-core AMD Epyc 7302 @ 2 GHz
    • 2x 64 GB RAM
    • local /tmp, ~500 GB
  • 24 h maximum walltime (see the example job script below)
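
A minimal Slurm job script that stays within these limits, as a sketch; the job name, output file, and ./my_program are placeholders for your own choices, and the default partition is assumed since no partition names are listed here:

#!/bin/bash
#SBATCH --job-name=example     # placeholder job name
#SBATCH --nodes=1              # one compute node (2x 16 cores)
#SBATCH --ntasks-per-node=32   # use all 32 cores of the node
#SBATCH --time=24:00:00        # request the full 24 h walltime limit
#SBATCH --output=job.%j.out    # %j is replaced by the job ID

srun ./my_program              # placeholder for your own executable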

Network

  • InfiniBand (56 Gbit/s)
  • 2-level fat tree (blocking factor 2)

File Systems

  • Panasas file system (/scratch & /home)
  • Local disk (/tmp, ~500 GB)

Each user has a home directory on the cluster which is different from the ITS myfiles home(!) and is limited to 128 GB per user. /scratch has no such per-user limit, but in return it has neither backup nor snapshots.
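
Because /scratch has neither backup nor snapshots, important results should be copied off it once a job has finished. A minimal sketch using rsync; the path /scratch/$USER/results is an assumption about your own directory layout:

# copy results from the unprotected /scratch into the home directory
# (both paths are placeholders for your own directories)
rsync -av /scratch/$USER/results/ ~/results/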

Resource Manager

Slurm 25.05
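
The everyday Slurm commands work as usual; these are standard Slurm tools, not cluster-specific additions:

sbatch job.sh   # submit a batch script
squeue --me     # show your own pending and running jobs
scancel 12345   # cancel a job by its job ID (12345 is a placeholder)
sinfo           # list partitions and node states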

OS

Rocky Linux 9.6