
Use of the codes installed on the CNF cluster is on a self-support basis.

For training on navigating UNIX/Linux and on using the cluster's batch system, please contact CNF Computing staff.

Overview:

The Nanolab cluster gives CNF users access to a wide range of modeling software tailored for nanoscale systems. Several classes of compute nodes, all linked via Gigabit Ethernet, are available on the cluster. The cluster runs Scientific Linux 7 with OpenHPC, and Slurm is the batch job queuing system.
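
As a starting point, a minimal Slurm batch script might look like the sketch below. The module name and executable (example_code, input.in) are placeholders, and the resource limits are arbitrary; the actual software names on the cluster will differ.

    #!/bin/bash
    #SBATCH --job-name=example        # name shown in the queue
    #SBATCH --nodes=1                 # run on a single node
    #SBATCH --ntasks=1                # a single task
    #SBATCH --time=01:00:00           # wall-clock limit of 1 hour
    #SBATCH --output=example_%j.out   # output file (%j expands to the job ID)

    # Load the software environment; "example_code" is a placeholder
    # for one of the codes actually installed on the cluster.
    module load example_code

    # Launch the program under Slurm
    srun example_code input.in

The script would be submitted with sbatch, and the queue checked with squeue:

    sbatch example.sh
    squeue -u $USER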

Head Node:

The head node is a Dell PowerEdge R740xd with approximately 9 terabytes of shared disk space available to users. User home directories are exported via NFS to all of the compute nodes.
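
Because home directories are NFS-mounted, files written on the head node are visible from every compute node. From any node, the standard df command shows which server the home directory is mounted from:

    df -h $HOME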

dn Compute Nodes:

This class comprises two Dell PowerEdge R640 systems. Each node has 256 GB of memory and two Intel Xeon Gold 6136 processors; each processor has 12 cores with 2 threads per core, giving 2 × 12 × 2 = 48 virtual CPUs per node.
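
For a whole-node job on this class, a batch script could request all 48 virtual CPUs. The partition name dn below is an assumption based on the class name and may not match the cluster's actual Slurm partitions:

    #SBATCH --partition=dn    # assumed partition name for this node class
    #SBATCH --nodes=1         # one dn node
    #SBATCH --ntasks=48       # all 48 virtual CPUs on the node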

rb2u Compute Nodes:

At present, three of the twelve nodes in this class are online; we are working to provision the rest. Each node has 32 GB of memory and two Intel Xeon E5-2620 processors; each processor has 6 cores with 2 threads per core, giving 2 × 6 × 2 = 24 virtual CPUs per node.

rb1u Compute Nodes:

None of these nodes are yet available. We are working to provision them once a new network switch is installed.
