Seastar Configuration & Documentation
- Using Seastar: PDF version (handy for printing)
- Using Seastar: HTML version (handy for online reading)
I wrote "Using Seastar" in LaTeX, which is not a language made for the web, so I would suggest using the PDF version even if you read it online.
- Backup server: 2*AMD Opteron dual-core 2.4 GHz, 8 GB RAM, 12*2 TB SATA-II disks, 2*Gb Ethernet (24 TB storage);
- 2 storage servers: 2*quad-core Xeon 2.13 GHz, 6 GB RAM, 12*600 GB SAS disks (14.4 TB storage);
- 5 compute nodes: 2*dual-core AMD Opteron 280, 2.4 GHz, 16 GB RAM, 70 GB SATA disks, Gb Ethernet (20 CPU cores, 350 GB storage, 80 GB RAM);
- 8 compute nodes: 2*dual-core AMD Opteron 2220 SE, 2.8 GHz, 16 GB RAM, 70 GB SATA disks, Gb Ethernet (32 CPU cores, 560 GB storage, 128 GB RAM);
- 1 compute node: 2*dual-core AMD Opteron 2220 SE, 2.8 GHz, 64 GB RAM, 70 GB SATA disks, Gb Ethernet (4 CPU cores, 70 GB storage, 64 GB RAM);
- 7 server blades: 4*quad-core AMD Opteron 8378, 2.4 GHz CPU, 64 GB RAM, 70 GB SATA disks, 111.3 GFLOPS, Infiniband (112 CPU cores, 490 GB storage, 448 GB RAM, 779.1 GFLOPS);
- 3 server blades: 4*six-core AMD Opteron 8431, 2.4 GHz CPU, 64 GB RAM, 160 GB SATA disks, 143.3 GFLOPS, Infiniband (72 CPU cores, 480 GB storage, 192 GB RAM, 429.9 GFLOPS);
- HP ProCurve 24-port Gb Ethernet switch;
- D-Link 24-port Gb Ethernet switch;
- Infiniband 24-port 20 Gb switch;
- 1 GPU node: TYAN FT77-B7015, 2*quad-core Xeon E5620, 2.4 GHz CPU, 24 GB RAM, 7 x NVIDIA GeForce GTX 580 cards (3 GB, 1581 GFLOPS SP, 198 GFLOPS DP), Infiniband;
- 1 GPU node: TYAN FT77-B7015, 2*quad-core Xeon E5620, 2.4 GHz CPU, 24 GB RAM, 6 x NVIDIA GeForce GTX 580 cards (1.5 GB, 1581 GFLOPS SP, 198 GFLOPS DP), Infiniband;
- 1 GPU node: TYAN FT77-B7015, 2*quad-core Xeon E5620, 2.4 GHz CPU, 56 GB RAM, 6 x NVIDIA Tesla M2070 cards (5 GB, 1288 GFLOPS SP, 515 GFLOPS DP), Infiniband.
Tips & tricks
Please note that to compile your code you may need to load a compiler module, e.g. using the command module load ifort. To get the list of available modules, type module avail. A typical session is shown below.
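For example, a compile session might look like this minimal sketch (the exact module names, such as ifort, and the source file myprog.f90 are assumptions; check module avail on the cluster):

```bash
module avail                      # list all modules available on the cluster
module load ifort                 # load the Intel Fortran compiler module
module list                       # show which modules are currently loaded
ifort -O2 -o myprog myprog.f90    # compile a (hypothetical) Fortran source file
```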
You submit a job using the command qsub (e.g. qsub submit.sh). You can check the status of your current jobs using qstat, and the command qstat -q lists a summary of all currently running jobs on all queues. To delete a job, use qdel followed by the job number. An example submit script is located above; a minimal sketch is also given below.
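Since qsub, qstat and qdel are used, I assume a PBS/Torque-style scheduler here; the job name, resource requests and program name in this sketch are illustrative assumptions, not the cluster's actual settings:

```bash
#!/bin/bash
#PBS -N myjob                  # job name (hypothetical)
#PBS -l nodes=1:ppn=4          # request 1 node with 4 cores (illustrative values)
#PBS -l walltime=01:00:00      # maximum run time of 1 hour

cd $PBS_O_WORKDIR              # start in the directory qsub was called from
./myprog                       # hypothetical executable
```

The job is then managed like this:

```bash
qsub submit.sh    # submit; prints a job id such as 12345
qstat             # status of your current jobs
qstat -q          # summary over all queues
qdel 12345        # delete job 12345
```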
The default quota in your home directory is only 2.5 GB. You are therefore strongly advised to run jobs from your data directory ($VSC_DATA) and not from your home directory.
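A typical pattern looks like the following (the directory name myjob is just an example):

```bash
mkdir -p $VSC_DATA/myjob         # create a working directory under the data directory
cp ~/submit.sh $VSC_DATA/myjob   # copy the submit script there
cd $VSC_DATA/myjob
qsub submit.sh                   # submit from the data directory, not from $HOME
```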
To set up custom command shortcuts (such as making the command '..' equivalent to 'cd ..'), edit the .bashrc file located in your home directory. For example, adding the line alias ..='cd ..' and restarting the connection to Turing will make the '..' command work, as shown below.
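For example (the source command reloads .bashrc in the current session, so you do not even have to reconnect):

```bash
echo "alias ..='cd ..'" >> ~/.bashrc   # append the alias to .bashrc
source ~/.bashrc                       # apply it without restarting the connection
..                                     # now moves one directory up, like 'cd ..'
```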