Use of the codes installed on the CNF cluster is on a self-support basis. For training on navigating UNIX/LINUX and on using the batch system for the cluster, please contact CNF Computing staff.
Overview:
The Nanolab cluster provides users of the CNF the opportunity to use a wide range of modeling software tailored for nanoscale systems. Several classes of nodes, all linked via Gigabit Ethernet, are available on the cluster. The cluster runs Scientific Linux 7 with OpenHPC. Slurm is the batch job queuing system.
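Since Slurm manages job scheduling, the usual workflow is to describe a job in a batch script and submit it with `sbatch`. A minimal sketch follows; the resource values are illustrative, and any partition or account settings should be checked against `sinfo` on the cluster itself:

```shell
#!/bin/bash
#SBATCH --job-name=hello          # job name shown in the queue
#SBATCH --nodes=1                 # number of nodes to allocate
#SBATCH --ntasks=4                # total number of tasks (processes)
#SBATCH --time=00:10:00           # wall-clock limit (HH:MM:SS)
#SBATCH --output=hello_%j.out     # stdout file; %j expands to the job ID

# Lines beginning with #SBATCH are directives read by sbatch;
# everything below runs as an ordinary bash script on the allocated node.
echo "Job started on $(hostname)"
```

Submit the script with `sbatch hello.sh`, monitor it with `squeue -u $USER`, and cancel it if needed with `scancel <jobid>`.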
Potential users of the cluster will apply for access via the normal CNF user and project application process.
Cluster charges are based on access and use: a single monthly "all you can eat" fee is charged for each month during which the cluster is used.
Head Node:
The head node is a Dell PowerEdge R740xd with approximately 9 terabytes (TB) of shared disk space available to users. User home directories are shared out to all of the compute nodes via NFS.
dn Compute Nodes:
This class comprises two Dell PowerEdge R640 systems. Each node has 256GB of memory and two Intel Xeon Gold 6136 processors. Each processor has 12 cores with 2 threads per core, for a total of 48 virtual CPUs per node (2 × 12 × 2).
rb2u Compute Nodes:
At present, three of the twelve nodes in this class are online. We are working to provision the rest. Each node has 32GB of memory and two Intel Xeon E5-2620 processors. Each processor has 6 cores with 2 threads per core, for a total of 24 virtual CPUs per node (2 × 6 × 2).
rb1u Compute Nodes:
None of these nodes are yet available. We are working to provision them once a new network switch is installed.
Scientific Codes on the Cluster:
MPI Families
OpenMPI | http://www.open-mpi.org | The Open MPI Project is an open source Message Passing Interface implementation that is developed and maintained by a consortium of academic, research, and industry partners. |
MPICH | http://www.mpich.org | MPICH is a high-performance and widely portable implementation of the Message Passing Interface (MPI) standard (MPI-1, MPI-2 and MPI-3). |
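With OpenHPC, compiler and MPI stacks are exposed through environment modules, so a typical MPI build-and-run cycle looks like the sketch below. The module names `gnu8` and `openmpi3` are assumptions; run `module avail` on the cluster to see what is actually installed:

```shell
# Load a compiler family and an MPI implementation (names are examples).
module load gnu8 openmpi3

# Compile an MPI program with the MPI compiler wrapper.
mpicc -O2 -o hello_mpi hello_mpi.c

# Launch 4 ranks; inside a Slurm allocation, srun is the usual launcher.
mpirun -np 4 ./hello_mpi
```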
Compiler Families
GNU | http://gcc.gnu.org | GNU C compiler and support files. |
LLVM | http://llvm.org | LLVM compiler infrastructure. |
Python
SciPy | http://www.scipy.org | SciPy is a Python-based ecosystem of open-source software for mathematics, science, and engineering. |
NumPy | http://www.numpy.org | Base N-dimensional array package |
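As a quick check that the scientific Python stack is working, NumPy's linear-algebra routines can be exercised directly. This is a generic NumPy sketch, not cluster-specific code:

```python
import numpy as np

# Solve the 2x2 linear system A @ x = b.
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])
x = np.linalg.solve(A, b)

print(x)  # the solution is [2. 3.], since 3*2 + 3 = 9 and 2 + 2*3 = 8
```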
IO Libraries
Adios | http://www.olcf.ornl.gov/center-projects/adios | The Adaptable IO System (ADIOS). |
HDF5/PHDF5 | http://www.hdfgroup.org/HDF5 | A general purpose library and file format for storing scientific data. |
NetCDF | http://www.unidata.ucar.edu/software/netcdf | Fortran, C, and C++ libraries for the Unidata network Common Data Form. |
PnetCDF | http://cucis.ece.northwestern.edu/projects/PnetCDF | A Parallel NetCDF library |
SIONlib | http://www.fz-juelich.de/ias/jsc/EN/Expertise/Support/Software/SIONlib/_node.html | Scalable I/O Library for Parallel Access to Task-Local Files. |
Serial/Threaded Libraries
R | http://www.r-project.org | R is a language and environment for statistical computing and graphics. |
GSL | http://www.gnu.org/software/gsl | The library provides a wide range of mathematical routines such as random number generators, special functions and least-squares fitting. There are over 1000 functions in total. |
Metis | http://glaros.dtc.umn.edu/gkhome/metis/metis/overview | Serial Graph Partitioning and Fill-reducing Matrix Ordering. |
OpenBLAS | http://www.openblas.net | An optimized BLAS library based on GotoBLAS2. |
Plasma | https://bitbucket.org/icl/plasma | Parallel Linear Algebra Software for Multicore Architectures. |
Scotch | http://www.labri.fr/perso/pelegrin/scotch | Graph, mesh and hypergraph partitioning library. |
SuperLU | http://crd.lbl.gov/~xiaoye/SuperLU | A general purpose library for the direct solution of linear equations. |
Parallel Libraries
Boost | http://www.boost.org | Free peer-reviewed portable C++ source libraries. |
FFTW | http://www.fftw.org | Fast Fourier Transform library. |
Hypre | http://www.llnl.gov/casc/hypre | Scalable algorithms for solving linear systems of equations. |
MFEM | http://mfem.org | Lightweight, general, scalable C++ library for finite element methods. |
MUMPS | http://mumps.enseeiht.fr | A MUltifrontal Massively Parallel Sparse direct Solver. |
PETSc | http://www.mcs.anl.gov/petsc | Portable Extensible Toolkit for Scientific Computation. |
PT-SCOTCH | http://www.labri.fr/perso/pelegrin/scotch | Graph, mesh and hypergraph partitioning library using MPI. |
SLEPc | http://slepc.upv.es | A library for solving large scale sparse eigenvalue problems. |
SuperLU | http://crd.lbl.gov/~xiaoye/SuperLU | A general purpose library for the direct solution of linear equations. |
Trilinos | https://trilinos.org/ | An object-oriented software framework for the solution of large-scale, complex multi-physics engineering and scientific problems. |
Cluster HowTos:
- Login to the Cluster
- Loading Environments for Using Codes
- Submitting Jobs to the Cluster
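Software on OpenHPC systems is typically made available through the Lmod environment-module system, so "Loading Environments for Using Codes" usually amounts to a few `module` commands. A hedged sketch, with an illustrative module name:

```shell
module avail            # list all installed modules
module load gnu8        # load the GNU compiler family (name is an example)
module list             # show the currently loaded modules
module unload gnu8      # unload a module when finished with it
```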