...
| Group | Group's nodes and software licensed | Access to other groups' nodes (access to own group's nodes assumed) | WebMO data (GB) | Current users' data (GB) | Inactive users' data (GB) | Back-up? (EZ-Backup) | Back-in-Time versioning |
|---|---|---|---|---|---|---|---|
| Collum | 8 compute nodes; maybe 1 more (the former head node). Has paid for WebMO. | Widom (the Collum group paid to add hard drives to all 7 of Widom's compute nodes; 12 TB (= 6 TB × 2) of hard drive space is now available on each of Widom's nodes.) | | | | Yes | No |
| Abruna | 10 compute nodes (expected to be the first tested at CAC) | n/a | n/a | ~500 (11/11/16) | | No | No |
| Loring | 5 compute nodes; the oldest nodes on the cluster. | Widom | n/a | | | No | No |
| Widom | 8 compute nodes; contributing the current head node. | Loring (special queue*) | n/a | | | No | No |
| SLin | (SLin currently has no compute nodes) | Widom | n/a | | | No | No |
...
Q: How do we prevent an individual from bringing the shared head node to its knees by running an application directly on the head node instead of submitting it through the queues?
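One common answer on Linux head nodes (a sketch only, not a policy this cluster has adopted; the file name and the specific limits below are examples) is to cap each login user's resources with a systemd drop-in for user slices, so a misplaced compute job cannot starve the scheduler or other users:

```shell
# Hedged sketch: limit every interactive user's share of the head node.
# Applies to all user-*.slice units; values are illustrative, not recommendations.
sudo mkdir -p /etc/systemd/system/user-.slice.d
sudo tee /etc/systemd/system/user-.slice.d/50-headnode-limits.conf <<'EOF'
[Slice]
# At most two CPUs' worth of time per user on the head node.
CPUQuota=200%
# At most 4 GiB of RAM per user; processes beyond this are killed.
MemoryMax=4G
EOF
sudo systemctl daemon-reload
```

This still lets users edit files and submit jobs, but a long-running computation started directly on the head node is throttled rather than taking the machine down. Batch jobs dispatched to compute nodes through the queues are unaffected.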
- Widom contributing the current head node
Future addition, hopefully:
- Abruna (n nodes, as some of its 10 nodes might be used for testing the CAC setup)
Guest groups using the cluster under the auspices of one or more sponsoring groups:
...
...