
Use of the codes installed on the CNF cluster is on a self-support basis.

For training on navigating UNIX/Linux and on using the cluster's batch system, please contact CNF Computing staff.

Overview:

The Nanolab cluster gives CNF users access to a wide range of modeling software tailored for nanoscale systems. Several classes of compute nodes, all linked via Gigabit Ethernet, make up the cluster. The cluster runs Scientific Linux 7 with OpenHPC, and Slurm is the batch job queuing system.
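
Once logged in, the standard Slurm commands can be used to inspect the queue and the node classes described below; a minimal sketch:

    # List Slurm partitions and the state of their nodes
    sinfo

    # Show jobs currently queued or running
    squeue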

Potential users of the cluster apply for access via the normal CNF user and project application process.

Cluster charges are based on accessing the cluster and running jobs on it: a single flat "all you can eat" fee is charged for each month during which the cluster is used.

Head Node:

The head node is a Dell PowerEdge R740xd with approximately 9 terabytes of shared disk space available to users. User home directories are shared out to all of the compute nodes via NFS.

dn Compute Nodes:

This class comprises two Dell PowerEdge R640 systems. Each node has 256 GB of memory and two Intel Xeon Gold 6136 processors; each processor has 12 cores with 2 threads per core, for 48 virtual CPUs per node.
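
A Slurm batch script requesting a full dn node might begin with directives like the sketch below. The partition name "dn" is an assumption used for illustration, not a confirmed partition on this cluster; for the rb2u nodes described next, the CPU count would be 24 rather than 48.

    #!/bin/bash
    #SBATCH --job-name=full-node-test   # name shown in squeue
    #SBATCH --partition=dn              # assumed partition name for this node class
    #SBATCH --nodes=1                   # one dn node
    #SBATCH --ntasks=1                  # a single task...
    #SBATCH --cpus-per-task=48          # ...using all 48 virtual CPUs on the node
    #SBATCH --time=01:00:00             # one-hour wall-clock limit

    srun ./my_threaded_code             # placeholder for an actual executable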

rb2u Compute Nodes:

At present, three of the twelve nodes in this class are online; we are working to provision the rest. Each node has 32 GB of memory and two Intel Xeon E5-2620 processors; each processor has 6 cores with 2 threads per core, for 24 virtual CPUs per node.

rb1u Compute Nodes:

None of these nodes are yet available. We are working to provision them once a new network switch is installed.

Scientific Codes on the Cluster:

MPI Families

  • OpenMPI (http://www.open-mpi.org): The Open MPI Project is an open source Message Passing Interface implementation that is developed and maintained by a consortium of academic, research, and industry partners.
  • MPICH (http://www.mpich.org): MPICH is a high-performance and widely portable implementation of the Message Passing Interface (MPI) standard (MPI-1, MPI-2, and MPI-3).
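
On an OpenHPC system, these MPI stacks are typically exposed through environment modules. The sketch below shows the general pattern; the module names gnu8 and openmpi3 are common OpenHPC names used here as assumptions, not confirmed names on this cluster.

    # See which modules are actually available on the cluster
    module avail

    # Load a compiler toolchain and an MPI stack (assumed module names)
    module load gnu8 openmpi3

    # Compile an MPI program with the MPI compiler wrapper
    mpicc -o hello_mpi hello_mpi.c

    # Launch it with 4 MPI ranks under Slurm
    srun -n 4 ./hello_mpi
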
Cluster HowTos:

  • Login to the Cluster
  • Loading Environments for Using Codes
  • Submitting Jobs to the Cluster
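
A typical end-to-end session combining the three steps above might look like the following sketch. The host name and script name are placeholders, not confirmed values; see the HowTo pages above for the actual procedures, and the module-name caveat from the MPI section applies here as well.

    # 1. Log in to the cluster head node (placeholder host name)
    ssh your_cnf_username@nanolab-headnode.example.edu

    # 2. Load the environment for the code you plan to run (assumed module names)
    module load gnu8 openmpi3

    # 3. Submit a batch script, then check your place in the queue
    sbatch job.sh        # job.sh is a placeholder Slurm batch script
    squeue -u $USER      # list only your own jobs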