Nodes in Discovery

Discovery consists of a login node, a head node, and compute nodes. Some useful information about these nodes is given below:

Login Node

Whenever you log in to Discovery, you are directed to the login node. You can use the login node to install software, but do not run any computations on it. In Discovery, discovery-l2 is the login node.

Remember not to run any programs on the login node: a poorly behaved program can overload the node and disrupt access for every user.

Head Node

A head node is a simply configured system that acts as an intermediary between the login node and the compute nodes. You run jobs on the compute nodes by using the Slurm scheduling tools on the head node. In Discovery, discovery-h2 is the head node.
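For example, a minimal batch script submitted from the head node might look like the following sketch. The job name, output file, and resource values are illustrative assumptions, not site defaults:

```bash
#!/bin/bash
#SBATCH --job-name=demo        # illustrative job name (assumption)
#SBATCH --output=demo-%j.out   # %j expands to the Slurm job ID
#SBATCH --ntasks=1             # a single task
#SBATCH --cpus-per-task=4      # four CPU threads (see the node tables below)
#SBATCH --mem=8G               # 8 GB of RAM
#SBATCH --time=00:10:00        # ten-minute wall-time limit

# Everything below this line runs on a compute node, not on the head node.
hostname
```

Submitting it with `sbatch job.sh` from the head node hands it to Slurm, which dispatches it to a suitable compute node.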

Compute Nodes

Discovery has 54 compute nodes, categorized into 38 CPU nodes and 16 GPU nodes. The compute nodes are where your programs and jobs should be executed.

CPU Nodes

There are 38 CPU nodes available in Discovery, and the tables below show the Slurm and physical hardware configurations of the CPU nodes.

SLURM Resources

| Compute Nodes | CPUs Per Node (Threads) | RAM (GB) | Available Features |
| --- | --- | --- | --- |
| discovery-c[1-6] | 28 | 49 | intel, ht, haswell, E5-2640V3 |
| discovery-c[7-13] | 44 | 112 | intel, ht, broadwell, E5-2650V4 |
| discovery-c[14-15] | 44 | 238 | intel, ht, broadwell, E5-2650V4 |
| discovery-c[16-25] | 52 | 175 | intel, ht, skylake, xeon-gold-5117 |
| discovery-c[26-35] | 60 | 364 | intel, ht, cascade-lake, xeon-gold-6226r |
| discovery-c36 | 60 | 364 | intel, ht, cascade-lake, xeon-gold-5218t |
| discovery-c[37-38] | 124 | 2963 | intel, ht, cascade-lake, xeon-gold-5218, optane, optane-mem |

Physical Resources

| Compute Nodes | CPU | CPUs Per Node | Cores Per CPU | Cores/Threads Per Node | RAM |
| --- | --- | --- | --- | --- | --- |
| discovery-c[1-6] | Intel E5-2640 v3 2.6G | 2 | 8 | 16/32 | 64 GB |
| discovery-c[7-13] | Intel E5-2650 v4 2.2G | 2 | 12 | 24/48 | 128 GB |
| discovery-c[14-15] | Intel E5-2650 v4 2.2G | 2 | 12 | 24/48 | 256 GB |
| discovery-c[16-25] | Intel Xeon Gold 5117 2.0G | 2 | 14 | 28/56 | 192 GB |
| discovery-c[26-35] | Intel Xeon Gold 6226R 2.9G | 2 | 16 | 32/64 | 384 GB |
| discovery-c36 | Intel Xeon Gold 5218T 2.1G | 2 | 16 | 32/64 | 384 GB |
| discovery-c[37-38] | Intel Xeon Gold 5218 2.3G | 4 | 16 | 64/128 | 3 TB |
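As an example of how the SLURM Resources numbers translate into a job request, the sketch below asks for most of a broadwell node from discovery-c[7-13]. The values are assumptions read off the table above, and the partition is left at the site default:

```bash
#!/bin/bash
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=44     # all 44 Slurm-visible CPUs on a broadwell node
#SBATCH --mem=100G             # stays under the 112 GB shown in the table
#SBATCH --constraint=broadwell # feature tag from the Available Features column
#SBATCH --time=01:00:00

srun ./my_program              # my_program is a placeholder for your executable
```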

Compute nodes are generally not accessed directly by HPC users. Sometimes, however, you may need to inspect the data a program generates on a compute node, which may not be visible from the head node. In that case, you can submit an interactive job through Slurm and examine the results on the compute node itself.

Please refer to the page → Interactive jobs in Slurm for more details.
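As a quick sketch, an interactive session can be requested with srun; the resource values here are illustrative assumptions:

```bash
# Request an interactive shell on a compute node (values are examples).
srun --ntasks=1 --cpus-per-task=4 --mem=4G --time=00:30:00 --pty bash

# Once the shell starts, you are on a compute node; verify with:
hostname
```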

GPU Nodes

If your job needs a GPU, Discovery has 16 GPU nodes, and you are welcome to use them as well. The configuration of the GPU nodes is listed in the following tables, and a sketch of a GPU job request follows them:

SLURM Resources

| GPU Nodes | CPUs Per Node (Threads) | RAM (GB) | Available Features | GPUs Per Node | GPU Memory |
| --- | --- | --- | --- | --- | --- |
| discovery-g1 | 28 | 49 | intel, ht, haswell, E5-2640V3, gpu, k40m, k40m-11g | 2 | 11 GB |
| discovery-g[2-6] | 52 | 175 | intel, ht, skylake, xeon-gold-5117, gpu, p100, p100-16g | 2 | 16 GB |
| discovery-g7 | 52 | 175 | intel, ht, skylake, xeon-gold-5120, gpu, v100, v100-16g | 2 | 16 GB |
| discovery-g[8-11] | 60 | 175 | intel, ht, cascade-lake, xeon-gold-5218, gpu, v100, v100-32g | 2 | 32 GB |
| discovery-g[12-13] | 60 | 491 | amd, ht, rome, epyc-7282, gpu, a100, a100-40g | 2 | 40 GB |
| discovery-g[14-15] | 60 | 491 | amd, ht, rome, epyc-7282, gpu, mig, a100_1g.5gb | 14 | 5 GB |
| discovery-g16 | 44 | 175 | intel, ht, skylake, xeon-gold-5118, gpu, t4, t4-16g | 2 | 16 GB |

Physical Resources

| GPU Nodes | CPU | CPUs Per Node | Cores Per CPU | Cores/Threads Per Node | RAM (GB) | GPU | GPUs Per Node |
| --- | --- | --- | --- | --- | --- | --- | --- |
| discovery-g1 | Intel E5-2640 v3 2.6G | 2 | 8 | 16/32 | 64 | Nvidia Tesla K40 | 2 |
| discovery-g[2-6] | Intel Xeon Gold 5117 2.0G | 2 | 14 | 28/56 | 192 | Nvidia Tesla P100 | 2 |
| discovery-g7 | Intel Xeon Gold 5120 2.2G | 2 | 14 | 28/56 | 192 | Nvidia Tesla V100 | 2 |
| discovery-g[8-11] | Intel Xeon Gold 5218 2.3G | 2 | 16 | 32/64 | 192 | Nvidia Tesla V100 | 2 |
| discovery-g[12-15] | AMD EPYC 7282 2.8G | 2 | 16 | 32/64 | 512 | Nvidia Tesla A100 | 2 |
| discovery-g16 | Intel Xeon Gold 5118 2.3G | 2 | 12 | 24/48 | 192 | Nvidia Tesla T4 | 2 |
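The sketch below requests GPUs with Slurm's generic-resource syntax. The GRES name `gpu` and the typed form (e.g. `gpu:v100:2`) follow common Slurm conventions and the feature tags above, but the exact GRES configuration on Discovery is an assumption; check `scontrol show node` or the site documentation if a request is rejected:

```bash
#!/bin/bash
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=8
#SBATCH --mem=32G
#SBATCH --gres=gpu:2           # two GPUs on one node; a typed request such as
                               # gpu:v100:2 may also work if typed GRES is defined
#SBATCH --time=02:00:00

nvidia-smi                     # list the GPUs visible to the job
```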

Each node in Discovery has feature tags assigned to it. You can select the nodes your job runs on by passing these tags to the --constraint flag of sbatch or srun. For more information, visit the page → Slurm Node Features.
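For instance, the following sketch pins a job to the skylake CPU nodes. The tag is taken from the Available Features column in the tables above; the other resource values are illustrative:

```bash
# Interactive: request a skylake node by feature tag.
srun --constraint=skylake --ntasks=1 --cpus-per-task=2 --pty bash

# Batch: the same constraint as a directive inside a job script:
#   #SBATCH --constraint=skylake
# Tags can be combined with & (AND) or | (OR), e.g. --constraint="intel&ht".
```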