Specific directions on accessing and using the Collum-Loring-Abruna-Widom (CLAW) cluster.
Nodes on the shared cluster are NOT shared between groups, by default

By default, each group may use only its own compute nodes:

- The Collum group may not use Loring's nodes.
- The Loring group may not use Collum's nodes.
Table of information, by group
Group | Group's nodes and software licensed | Access to other groups' nodes (assume access to own group's nodes, naturally) | WebMO data (GB) | Current users' data (GB) | Inactive users' data (GB) | Back-up? (EZ-Backup; Back-in-Time versioning)
---|---|---|---|---|---|---
Collum | 8 compute nodes; maybe 1 more (former head node). Has paid for WebMO. | Widom (Collum group paid to add hard drives to all 7 of Widom's compute nodes; 12 TB (= 6 TB × 2) of hard-drive space is now available on each of Widom's nodes.) | (To get from Lulu) | (To get from Lulu) | (To get from Lulu) | Yes
Abruna | 10 compute nodes; maybe 1 more (former head node). | n/a | n/a | ~500 (11/11/16) | (To get from Lulu) | No
Loring | 5 compute nodes; oldest nodes on the cluster. | Widom | n/a | (To get from Lulu) | (To get from Lulu) | No
Widom | 8 compute nodes. Widom is contributing the current head node. | Loring (special queue*) | n/a | (To get from Lulu) | (To get from Lulu) | No
SLin | (SLin currently has no compute nodes) | Widom | n/a | (To get from Lulu) | (To get from Lulu) | No
*Special queue: this queue was created for use by a single member of Widom's group, to enable their research without consuming 100% of Widom's nodes.
Q: Enable WebMO for groups other than Collum? If so, add a column listing the nodes enabled.
Q: Enable Back-in-Time for versioning? If so, how is it provisioned and who pays for what?
- Does CAC have better options if the cluster is hosted there?
Q: Where does storage occur? How is it provisioned and who pays for what?
- Does CAC have better options if the cluster is hosted there?
Q: Backing up user accounts and other research data? If extended beyond Collum's group, how is it provisioned and who pays for what?
- Does CAC have better options if the cluster is hosted there?
Known issues:
Q: How do we prevent an individual from bringing the shared head node to its knees by running an application directly on the head node, instead of using the queues?
- Widom is contributing the current head node.
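The queue-based workflow referred to above can be sketched as a batch job script. This page does not name the cluster's scheduler, so the sketch assumes a PBS/Torque-style scheduler (`qsub`); the queue name `collum`, the job name, and the program `my_calculation` are all illustrative, not actual CLAW settings:

```shell
#!/bin/bash
# Hypothetical batch script: run work on a compute node via the
# scheduler, rather than directly on the shared head node.
# Assumes a PBS/Torque-style scheduler; queue name is illustrative.
#PBS -N example_job
#PBS -q collum              # submit to the group's own queue
#PBS -l nodes=1:ppn=8       # request one compute node, 8 cores
#PBS -l walltime=01:00:00   # one-hour limit

cd "$PBS_O_WORKDIR"         # start in the directory qsub was run from
./my_calculation input.dat > output.log
```

Submitted with `qsub job.sh`, the scheduler places the job on one of the group's compute nodes, leaving the head node free for logins and job management.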
Future addition, hopefully:
- Abruna (n nodes, as some of his 10 nodes might be used for testing the CAC setup)
Guest groups using the cluster under the auspices of a sponsoring group (or groups):
- Song Lin (0 nodes)
(Does CAC have suggestions?)