
Use of the codes installed on the CNF cluster is on a self-support basis.

For training on navigating UNIX/LINUX and on using the batch system for the cluster, please contact CNF Computing staff.

Overview:

The Nanolab cluster gives CNF users access to a wide range of modeling software tailored for nanoscale systems. Several classes of compute nodes, all linked via Gigabit Ethernet, are available on the cluster. The cluster runs Scientific Linux 7 with OpenHPC, and Slurm is the batch job queuing system.

Potential users of the cluster apply for access through the normal CNF user and project application process.

Cluster charges are based on access and job activity: a single monthly "all you can eat" fee is charged for each month during which the cluster is used.

Head Node:

The head node is a Dell PowerEdge R740xd with approximately 9 TB of shared disk space available to users. User home directories are shared out to all of the compute nodes via NFS.

dn Compute Nodes:

This class comprises two Dell PowerEdge R640 systems. Each node has 256 GB of memory and two Intel Xeon Gold 6136 processors; each processor has 12 cores with 2 threads per core, for a total of 48 virtual CPUs per node.

rb2u Compute Nodes:

At present, three of the twelve nodes in this class are online; we are working to provision the rest. Each node has 32 GB of memory and two Intel Xeon E5-2620 processors; each processor has 6 cores with 2 threads per core, for a total of 24 virtual CPUs per node.

rb1u Compute Nodes:

None of these nodes is available yet; we will provision them once a new network switch is installed.

Scientific Codes on the Cluster:

MPI Families

OpenMPI (http://www.open-mpi.org): The Open MPI Project is an open source Message Passing Interface implementation that is developed and maintained by a consortium of academic, research, and industry partners.
MPICH (http://www.mpich.org): MPICH is a high-performance and widely portable implementation of the Message Passing Interface (MPI) standard (MPI-1, MPI-2, and MPI-3).
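
Both MPI families provide compiler wrappers such as mpicc and are selected through the cluster's module environment (the exact module names depend on the local OpenHPC configuration). As a minimal sketch of an MPI program that can be used to verify the stack, the following C code prints one line per rank:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank, size;

        MPI_Init(&argc, &argv);               /* start the MPI runtime         */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* rank (ID) of this process     */
        MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of MPI processes */
        printf("Hello from rank %d of %d\n", rank, size);
        MPI_Finalize();                       /* shut the runtime down cleanly */
        return 0;
    }

After compiling with mpicc, the binary is typically launched across the allocated cores with srun (under Slurm) or mpirun.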

Compiler Families

GNU (http://gcc.gnu.org): GNU C compiler and support files.
LLVM (http://llvm.org): LLVM compiler infrastructure.

Python

SciPy (http://www.scipy.org): SciPy is a Python-based ecosystem of open-source software for mathematics, science, and engineering.
NumPy (http://www.numpy.org): Base N-dimensional array package.

IO Libraries

ADIOS (http://www.olcf.ornl.gov/center-projects/adios): The Adaptable IO System (ADIOS).
HDF5/PHDF5 (http://www.hdfgroup.org/HDF5): A general purpose library and file format for storing scientific data (see the example after this list).
NetCDF (http://www.unidata.ucar.edu/software/netcdf): Fortran, C, and C++ libraries for the Unidata network Common Data Form.
PnetCDF (http://cucis.ece.northwestern.edu/projects/PnetCDF): A parallel NetCDF library.
SIONlib (http://www.fz-juelich.de/ias/jsc/EN/Expertise/Support/Software/SIONlib/_node.html): Scalable I/O library for parallel access to task-local files.
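
As a rough illustration of the HDF5 C interface (a minimal sketch; the file name, dataset name, and array contents below are arbitrary), the following program writes a small two-dimensional array of doubles to an HDF5 file:

    #include <hdf5.h>

    int main(void)
    {
        hsize_t dims[2] = {4, 6};
        double  data[4][6];

        /* fill a small array with sample values */
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 6; j++)
                data[i][j] = i * 6 + j;

        /* create a file, a 2-D dataspace, and a dataset of doubles */
        hid_t file  = H5Fcreate("example.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
        hid_t space = H5Screate_simple(2, dims, NULL);
        hid_t dset  = H5Dcreate(file, "/values", H5T_NATIVE_DOUBLE, space,
                                H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);

        /* write the array and close everything */
        H5Dwrite(dset, H5T_NATIVE_DOUBLE, H5S_ALL, H5S_ALL, H5P_DEFAULT, data);
        H5Dclose(dset);
        H5Sclose(space);
        H5Fclose(file);
        return 0;
    }

Such a program is normally built with the h5cc compiler wrapper (h5pcc for parallel HDF5) so that the correct include and library paths are used.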

Serial/Threaded Libraries

R (http://www.r-project.org): R is a language and environment for statistical computing and graphics.
GSL (http://www.gnu.org/software/gsl): The GNU Scientific Library provides a wide range of mathematical routines such as random number generators, special functions, and least-squares fitting; there are over 1000 functions in total (see the example after this list).
Metis (http://glaros.dtc.umn.edu/gkhome/metis/metis/overview): Serial graph partitioning and fill-reducing matrix ordering.
OpenBLAS (http://www.openblas.net): An optimized BLAS library based on GotoBLAS2.
Plasma (https://bitbucket.org/icl/plasma): Parallel Linear Algebra Software for Multicore Architectures.
Scotch (http://www.labri.fr/perso/pelegrin/scotch): Graph, mesh, and hypergraph partitioning library.
SuperLU (http://crd.lbl.gov/~xiaoye/SuperLU): A general purpose library for the direct solution of linear equations.
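
As a small example of the kind of routines GSL provides (a minimal sketch; the particular special function and random number generator chosen here are arbitrary), the following C program evaluates the Bessel function J0 and draws a few uniform random numbers:

    #include <stdio.h>
    #include <gsl/gsl_sf_bessel.h>
    #include <gsl/gsl_rng.h>

    int main(void)
    {
        /* special function: Bessel function of the first kind, J0(5) */
        double x = 5.0;
        printf("J0(%g) = %.12f\n", x, gsl_sf_bessel_J0(x));

        /* random numbers: three uniform variates in [0, 1) from a Mersenne Twister */
        gsl_rng *r = gsl_rng_alloc(gsl_rng_mt19937);
        for (int i = 0; i < 3; i++)
            printf("u[%d] = %f\n", i, gsl_rng_uniform(r));
        gsl_rng_free(r);

        return 0;
    }

The program links against GSL and its CBLAS interface, e.g. with -lgsl -lgslcblas -lm.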

 

Cluster HowTos:

  • Login to the Cluster
  • Loading Environments for Using Codes
  • Submitting Jobs to the Cluster