Specific directions on accessing and using the Collum-Loring-Abruna-Widom (CLAW) cluster
By default, each group can use only its own compute nodes.
The following groups share nodes:
Widom's nodes are available to the Collum group.
Widom and Loring share each other's nodes. To clarify:
Group | Group's nodes and licensed software | Access to other groups' nodes (access to own group's nodes assumed) | WebMO data (GB) | Current users' data (GB) | Inactive users' data (GB) | Backed up? (EZ-Backup)
---|---|---|---|---|---|---
Collum | 8 compute nodes; maybe 1 more (former head node). Has paid for WebMO. | Widom (the Collum group paid to add hard drives to all 7 of Widom's compute nodes; 12 TB (2 × 6 TB) of drive space is now available on each of Widom's nodes) | (To get from Lulu) | (To get from Lulu) | (To get from Lulu) | Yes
Abruna | 10 compute nodes; maybe 1 more (former head node) | n/a | n/a | ~500 (11/11/16) | (To get from Lulu) | No
Loring | 5 compute nodes; the oldest nodes on the cluster | Widom | n/a | (To get from Lulu) | (To get from Lulu) | No
Widom | 8 compute nodes; Widom is contributing the current head node | Loring; special queue* | n/a | (To get from Lulu) | (To get from Lulu) | No
SLin | Currently no compute nodes | Widom | n/a | (To get from Lulu) | (To get from Lulu) | No
*Special queue: created for use by a single member of the Widom group, to enable their research without consuming 100% of Widom's nodes.
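The scheduler running on CLAW is not named in these notes; purely as an illustration, a restricted queue of this kind can be expressed under a SLURM-style scheduler as a partition confined to a subset of nodes and accounts (all node and account names below are invented):

```
# slurm.conf fragment (illustrative only; names are hypothetical)
# A partition limited to half of Widom's nodes and a single account,
# so jobs submitted to it can never occupy all of Widom's hardware.
PartitionName=widom-special Nodes=widom[1-4] AllowAccounts=widom_special MaxTime=7-00:00:00 State=UP
```

Other schedulers (Torque/Maui, SGE) express the same idea with per-queue host lists and user ACLs.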
Q: Should WebMO be enabled for groups other than Collum? If so, add a column listing the nodes on which it is enabled.
Q: Should Back-in-Time be enabled for versioning? If so, how is it provisioned and who pays for what?
Q: Where does storage occur? How is it provisioned, and who pays for what?
Q: Should user accounts and other research data be backed up? If this extends beyond the Collum group, how is it provisioned and who pays for what?
Q: How do we prevent an individual from bringing the shared head node to its knees by running an application directly on the head node instead of through the queues? (Does CAC have suggestions?)
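Pending CAC's advice, one common way to protect a shared head node is to cap resource use for interactive logins via PAM limits, so stray compute jobs started on the head node are killed before starving everyone else; a minimal sketch (the values are placeholders, not a recommendation):

```
# /etc/security/limits.conf fragment on the head node (illustrative values)
# Hard-cap CPU time (in minutes) and process count for all ordinary logins;
# real computation must be submitted through the batch queues instead.
*        hard    cpu     30
*        hard    nproc   64
```

Scheduler-side alternatives (e.g. cgroup-based login limits, or a watchdog that kills long-running user processes on the head node) achieve the same goal.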