The NIG Supercomputer

Grid Engine Queue Type

This is an old document

This document was written for the former NIG supercomputer (2019) and is kept for reference purposes.

Please note that the current NIG supercomputer (2025) does not work in the same way.

The nodes managed by Grid Engine are broadly divided into interactive nodes and compute nodes.

Compute requests, called jobs, submitted to the interactive and compute nodes are managed by Grid Engine through queues. When the requested resources exceed what is currently free, jobs wait in a queue, and Grid Engine runs them automatically as soon as a computer becomes available.
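As a minimal sketch of this submit-and-wait cycle (the script name, job name, and directives below are illustrative examples, not site defaults), a batch job script carries its Grid Engine options in `#$` comment lines and is submitted with `qsub`:

```shell
#!/bin/bash
# example.sh -- minimal Grid Engine job script (illustrative; names are examples)
#$ -S /bin/bash      # interpret the job script with bash
#$ -cwd              # run in the directory the job was submitted from
#$ -N example_job    # job name shown in qstat output

echo "running on $(hostname)"
```

After `qsub example.sh`, `qstat` shows the job in state `qw` while it waits in a queue and `r` once Grid Engine has dispatched it to a free node.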

In the general analysis division of the NIG supercomputer, a Grid Engine queue is provided for each type of compute node:

| Node type | Grid Engine queue name | Hardware type | Number of computers / total cores |
|---|---|---|---|
| Interactive nodes | login.q | Thin node Type1b (AMD EPYC7702, 128 CPU cores/node, 4GB memory/CPU core) | 3 computers, 384 cores |
| Interactive nodes | login_gpu.q | Thin node Type2b (Intel Xeon Gold 6136, 24 CPU cores/node, 16GB memory/CPU core) | 1 computer, 24 cores |
| Compute nodes | epyc.q | Thin node Type1b (AMD EPYC7702, 128 CPU cores/node, 4GB memory/CPU core) | 25 computers, 3200 cores |
| Compute nodes | intel.q | Thin node Type2a (Intel Xeon Gold 6130, 32 CPU cores/node, 12GB memory/CPU core) | 32 computers, 1024 cores |
| Compute nodes | gpu.q | Thin node Type2b (Intel Xeon Gold 6136, 24 CPU cores/node, 16GB memory/CPU core) | 7 computers, 168 cores |
| Compute nodes | short.q | Thin node Type1a (AMD EPYC7501, 64 CPU cores/node, 8GB memory/CPU core) | 2 computers, 128 cores |
| Compute nodes | medium.q | Medium node (Intel Xeon Gold 6148, 80 CPU cores/node, 38.4GB memory/CPU core) | 10 computers, 800 cores |
| Compute nodes | medium-ubuntu.q | | |
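Queue names from the table above can be given explicitly at submission time. A hedged sketch, assuming the standard Grid Engine `-q` option and a hypothetical `batch_job.sh` script (exact resource-request options are site-specific):

```shell
qsub -q epyc.q   batch_job.sh   # target the AMD EPYC thin nodes
qsub -q medium.q batch_job.sh   # target the large-memory Medium nodes
qstat -g c                      # cluster-wide summary of queue load and free slots
```

These commands require a live Grid Engine installation, so they are shown here only as a usage illustration.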