As of July 2024, the CNF Cluster has been taken offline for upgrades.

Use of the codes installed on the CNF cluster is on a self-support basis.

For training on navigating UNIX/LINUX and on using the batch system for the cluster, please contact CNF Computing staff.


The Nanolab cluster provides users of the CNF the opportunity to use a wide range of modeling software tailored for nanoscale systems. Several classes of nodes, all linked via Gigabit Ethernet, are available on the cluster. The cluster runs Scientific Linux 7 with OpenHPC. Slurm is the batch job queuing system.

Potential users of the cluster apply for access via the normal CNF user and project application process.

Cluster charges are based on access and use: a single monthly "all you can eat" fee is charged for each month in which the cluster is used.

All user processes must be run as cluster jobs in the cluster job queuing system.
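Since all work must go through Slurm, a minimal batch script is the usual starting point. The sketch below is illustrative only: the job name and resource requests are placeholders, and the module names follow OpenHPC conventions, so adjust them to what `module avail` actually shows on the cluster.

```shell
#!/bin/bash
#SBATCH --job-name=demo        # placeholder job name
#SBATCH --nodes=1              # one node
#SBATCH --ntasks=4             # four tasks
#SBATCH --time=00:10:00        # ten-minute wall-clock limit

# Load the toolchain the job needs (module names are assumptions;
# check `module avail` for the real ones)
module load gnu openmpi

# Launch under srun so all requested tasks are started
srun hostname
```

Submit with `sbatch job.sh` and monitor with `squeue -u $USER`.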

Head Node:

The head node is a Dell PowerEdge R740xd with approximately 9 TB of shared disk space available to users. User home directories are shared out to all of the compute nodes via NFS.

dn Compute Nodes:

This class comprises two Dell PowerEdge R640 systems. Each node has 256GB of memory and 2 Intel Xeon Gold 6136 processors. Each individual processor has 12 cores and 2 threads per core. This equals 48 virtual CPUs per node.

rb2u Compute Nodes:

Eleven of the twelve nodes in this class are online (one is out of service with a hardware failure). Each node has 32GB of memory and 2 Intel Xeon E5-2620 processors. Each individual processor has 6 cores and 2 threads per core. This equals 24 virtual CPUs per node.

rb1u Compute Nodes:

Six of these nodes are available; we are working to provision the rest once a new network switch is installed. Each node has 24GB of memory and 2 Intel Xeon E5620 processors. Each individual processor has 4 cores and 2 threads per core. This equals 16 virtual CPUs per node.

Network Switches:

Two Dell S3148 network switches are installed to connect all the nodes. Each node is connected to the switch via Gigabit Ethernet.

Scientific Codes on the Cluster:

If you don't see a code listed below, please let us know. We will do our best to install new codes upon request, and we will also install commercial codes on a bring-your-own-license basis.

MPI Families

Open MPI (http://www.open-mpi.org): An open source Message Passing Interface implementation that is developed and maintained by a consortium of academic, research, and industry partners.
MPICH (http://www.mpich.org): A high-performance and widely portable implementation of the Message Passing Interface (MPI) standard (MPI-1, MPI-2, and MPI-3).
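Either MPI family can be exercised with a minimal hello-world. This sketch assumes the OpenHPC-style module names `gnu` and `openmpi`, which may differ locally:

```shell
# Load a compiler and an MPI stack (module names are assumptions)
module load gnu openmpi

# Write a minimal MPI program
cat > hello.c <<'EOF'
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this task's rank */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total task count */
    printf("rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}
EOF

# Compile with the MPI wrapper and run as a 4-task cluster job
mpicc hello.c -o hello
srun -n 4 ./hello
```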

Compiler Families

GNU (http://gcc.gnu.org): GNU C compiler and support files.
LLVM (http://llvm.org): LLVM compiler infrastructure.


Python Packages

SciPy (http://www.scipy.org): A Python-based ecosystem of open-source software for mathematics, science, and engineering.
NumPy (http://www.numpy.org): Base N-dimensional array package.
pyspread: A non-traditional spreadsheet application that is based on and written in the programming language Python.
module load py3.6-numpy
module load pyspread
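After loading the module, a quick interactive check confirms the NumPy stack works (the array values here are just an illustration):

```shell
# On the cluster, load the NumPy module first:
#   module load py3.6-numpy
python3 - <<'EOF'
import numpy as np

a = np.arange(6).reshape(2, 3)  # 2x3 array: [[0 1 2], [3 4 5]]
print(a.sum())                  # 15
print(a.mean(axis=0))           # column means: [1.5 2.5 3.5]
EOF
```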

IO Libraries

ADIOS: The Adaptable IO System.
HDF5: A general purpose library and file format for storing scientific data.
NetCDF: C and C++ libraries for the Unidata network Common Data Form.
PnetCDF: A parallel NetCDF library.
SIONlib: Scalable I/O library for parallel access to task-local files.

Serial/Threaded Libraries

R: A language and environment for statistical computing and graphics.
GSL: The GNU Scientific Library provides a wide range of mathematical routines such as random number generators, special functions, and least-squares fitting; there are over 1000 functions in total.
METIS: Serial graph partitioning and fill-reducing matrix ordering.
OpenBLAS: An optimized BLAS library based on GotoBLAS2.
PLASMA: Parallel Linear Algebra Software for Multicore Architectures.
Scotch: Graph, mesh, and hypergraph partitioning library.
SuperLU: A general purpose library for the direct solution of linear equations.

Parallel Libraries

Boost: Free peer-reviewed portable C++ source libraries.
FFTW: A fast Fourier transform library.
Hypre: Scalable algorithms for solving linear systems of equations.
MFEM: A lightweight, general, scalable C++ library for finite element methods.
MUMPS: A MUltifrontal Massively Parallel sparse direct Solver.
PETSc: The Portable, Extensible Toolkit for Scientific Computation.
PT-Scotch: Graph, mesh, and hypergraph partitioning library using MPI.
SLEPc (http://slepc.upv.es): A library for solving large scale sparse eigenvalue problems.
SuperLU: A general purpose library for the direct solution of linear equations.
Trilinos: An object-oriented software framework for the solution of large-scale, complex multi-physics engineering and scientific problems.

Capacitance Extraction/Field Solving

FasterCap: A powerful three- and two-dimensional capacitance extraction program.
/usr/local/fastercap/FasterCap_6.0.7/FasterCap [-b]

Density Functional Approaches


ABINIT is a software suite to calculate the optical, mechanical, vibrational, and other observable properties of materials. Starting from the quantum equations of density functional theory, you can build up to advanced applications with perturbation theories based on DFT, and many-body Green's functions (GW and DMFT).
ABINIT can calculate molecules, nanostructures and solids with any chemical composition, and comes with several complete and robust tables of atomic potentials.

module load lapack
module load blas
module load abinit

Device, Process, Particle, and Finite Element Method Simulation

Archimedes belongs to the well-known family of TCAD software, i.e. tools utilized to assist the development of technologically relevant products. In particular, this package assists engineers in designing and simulating submicron and mesoscopic semiconductor devices.
Rappture is the GUI for Archimedes.
module load rappture

DEVSIM is semiconductor device simulation software which uses the finite volume method. It solves partial differential equations on a mesh. The Python interface allows users to specify their own equations.

module load devsim
source /usr/local/intel/mkl/bin/mklvars.sh intel64

Elmer is an open source multiphysical simulation software mainly developed by CSC - IT Center for Science (CSC). Elmer development was started in 1995 in collaboration with Finnish universities, research institutes, and industry. Since its open source publication in 2005, the use and development of Elmer has become international.

Elmer includes physical models of fluid dynamics, structural mechanics, electromagnetics, heat transfer and acoustics, for example. These are described by partial differential equations which Elmer solves by the Finite Element Method (FEM).

module load elmer

HOOMD-blue is a general-purpose particle simulation toolkit. It scales from a single CPU core to thousands of GPUs.
module load singularity
# Use this package for non-mpi jobs
singularity exec /usr/local/glotzerlab/glotzerlab-software-master-latest.simg [command-in-glotzerlab-package]
# Use this package for MPI jobs
singularity exec /usr/local/glotzerlab/glotzerlab-software-mpi-flux.simg [command-in-glotzerlab-package]
OpenFOAM v6 and v8 (https://openfoam.com): A Computational Fluid Dynamics (CFD) package. OpenFOAM has an extensive range of features to solve anything from complex fluid flows involving chemical reactions, turbulence and heat transfer, to solid dynamics and electromagnetics.
source /usr/local/openFOAM/OpenFOAM-6/etc/bashrc # for version 6
source /usr/local/openFOAM/OpenFOAM-8/etc/bashrc # for version 8
mkdir -p $FOAM_RUN
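A common first test is to copy one of the bundled tutorial cases into your run directory. `pitzDaily` is a standard simpleFoam tutorial shipped with OpenFOAM, though the exact tutorial layout can vary between versions:

```shell
# Assumes one of the bashrc files above has already been sourced;
# that script sets $FOAM_TUTORIALS and $FOAM_RUN.
cp -r $FOAM_TUTORIALS/incompressible/simpleFoam/pitzDaily $FOAM_RUN/
cd $FOAM_RUN/pitzDaily
blockMesh      # generate the mesh from system/blockMeshDict
simpleFoam     # run the steady-state incompressible solver
```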

Linear Algebra Operations

LAPACK is written in Fortran 90 and provides routines for solving systems of simultaneous linear equations, least-squares solutions of linear systems of equations, eigenvalue problems, and singular value problems. 
module load lapack

ScaLAPACK is a library of high-performance linear algebra routines for parallel distributed memory machines. ScaLAPACK solves dense and banded linear systems, least squares problems, eigenvalue problems, and singular value problems.

module load scalapack

Math Libraries

Intel Math Kernel Library (MKL)

Intel Math Kernel Library (Intel MKL) is a library of optimized math routines for science, engineering, and financial applications. Core math functions include BLAS, LAPACK, ScaLAPACK, sparse solvers, fast Fourier transforms, and vector math. The routines in MKL are hand-optimized specifically for Intel processors.

source /usr/local/intel/mkl/bin/mklvars.sh intel64
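Once the MKL environment script has been sourced, `$MKLROOT` is set and programs can be linked against MKL. The single dynamic library `mkl_rt` keeps the link line simple; `myprog.c` here is a placeholder for your own source file:

```shell
# Sketch: link a C program against MKL via the single dynamic runtime
# library (mkl_rt). Assumes $MKLROOT was set by the MKL environment script.
gcc myprog.c -I"$MKLROOT/include" \
    -L"$MKLROOT/lib/intel64" -lmkl_rt -lpthread -lm -ldl -o myprog
```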

Matrix and Vector Operations

BLAS (Basic Linear Algebra Subprograms): Routines that provide standard building blocks for performing basic vector and matrix operations.
(OpenBLAS is also available; see above.)
module load blas

Tomographic Data Processing and Image Reconstruction


TomoPy is an open-source Python package for tomographic data processing and image reconstruction. Tomographic reconstruction creates three-dimensional views of an object by combining two-dimensional images taken from multiple directions; this is how a CAT (computer-aided tomography) scanner generates 3D views of the heart or brain.

Included is the tomopy-cli command-line interface.

module load miniconda
conda activate tomopy
source /usr/local/miniconda3/envs/tomopy/bin/
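With the environment activated, a reconstruction can be sketched against TomoPy's built-in Shepp-Logan phantom. This follows the TomoPy API but is untested here, so treat it as an outline:

```shell
module load miniconda
conda activate tomopy

python3 - <<'EOF'
import tomopy

obj = tomopy.shepp3d()                  # built-in 3D Shepp-Logan phantom
ang = tomopy.angles(180)                # 180 projection angles
sim = tomopy.project(obj, ang)          # simulate projection data
rec = tomopy.recon(sim, ang, algorithm='gridrec')
print(rec.shape)                        # reconstructed volume dimensions
EOF
```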

Cluster HowTos:
