Hardware
Overview
Classification | | Specifications | |
---|---|---|---|
Compute nodes (15,424 CPU cores in total) | Thin nodes | Type 1a | 136 nodes |
| | Type 1b | 28 nodes |
| | Type 2a | 52 nodes |
| | Type 2b | 16 nodes |
| Medium node | | 10 nodes |
| Fat node | | 1 node |
Storage (total capacity: 57.6PB) | Analysis storage (*1) | Lustre file system | |
| Database storage | Lustre file system | |
Inter-node interconnect network | InfiniBand 4×EDR 100Gbps fat tree | | |
Compute nodes
Thin compute node Type 1a (HPE ProLiant DL385 Gen10; 136 nodes)
Compute nodes with AMD EPYC 7501 processors.
HPE ProLiant DL385 Gen10 (host name: at001 -- at136)
Component | Model | Quantity | Total per node, etc. |
---|---|---|---|
CPU | AMD EPYC 7501 (32 cores) Base 2.0GHz, Max 3.0GHz | 2 | Total 64 cores |
Memory | 32GB DDR4-2666 | 16 | Total 512GB (8GB per CPU core) |
Storage | 1.6TB NVMe SSD x1, 3.2TB NVMe SSD x1 | | |
Network | InfiniBand 4xEDR | 1 | 100Gbps |
Thin compute node Type 1b (DELL PowerEdge R6525; 28 nodes)
Compute nodes with AMD EPYC 7702 processors.
DELL PowerEdge R6525 (host name: at137 -- at164)
Component | Model | Quantity | Total per node, etc. |
---|---|---|---|
CPU | AMD EPYC 7702 (64 cores) Base 2.0GHz, Max 3.35GHz | 2 | Total 128 cores |
Memory | 32GB DDR4-2666 | 16 | Total 512GB (4GB per CPU core) |
Storage | 1.6TB NVMe SSD x1, 900GB SAS HDD x1 | | |
Network | InfiniBand 4xEDR | 1 | 100Gbps |
Thin compute node Type 2a (HPE Apollo 2000 Gen10; 52 nodes)
Compute nodes with Intel Xeon processors.
HPE Apollo 2000 Gen10 (host name: it001 -- it052)
Component | Model | Quantity | Total per node, etc. |
---|---|---|---|
CPU | Intel Xeon Gold 6130 (16 cores) Base 2.1GHz, Max 3.7GHz | 2 | Total 32 cores |
Memory | 32GB DDR4-2666 | 12 | Total 384GB (12GB per CPU core) |
Storage | 1.6TB NVMe SSD x1, 3.2TB NVMe SSD x1 | | |
Network | InfiniBand 4xEDR | 1 | 100Gbps |
Thin compute node Type 2b (HPE Apollo 6500 Gen10; 16 nodes)
Compute nodes with four GPUs each.
HPE Apollo 6500 Gen10 (host name: igt001 -- igt016)
Component | Model | Quantity | Total per node, etc. |
---|---|---|---|
CPU | Intel Xeon Gold 6136 (12 cores) Base 3.0GHz, Max 3.7GHz | 2 | Total 24 cores |
Memory | 32GB DDR4-2666 | 12 | Total 384GB (16GB per CPU core) |
GPU | NVIDIA Tesla V100 SXM2 | 4 | |
Storage | 1.6TB NVMe SSD x1, 3.2TB NVMe SSD x1 | | |
Network | InfiniBand 4xEDR | 1 | 100Gbps |
(Reference) GPU Specifications
Properties | Value |
---|---|
name | NVIDIA Tesla V100 SXM2 |
number of cores | 640 (Tensor Cores; 5,120 CUDA cores) |
clock speed | 1,455MHz |
peak single-precision floating-point performance | 15TFLOPS |
peak double-precision floating-point performance | 7.5TFLOPS |
single-core theoretical performance | 1.3GFLOPS |
memory size | 16GB (HBM2) |
memory bandwidth | 900GB/sec |
memory bandwidth per 1GFLOPS | 266GB/sec |
connection bandwidth | 8GB/sec (PCIe2.0 x16) |
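If you have access to a Type 2b node, the figures above can be cross-checked with `nvidia-smi`. A small sketch using only the standard NVIDIA driver tool (no site-specific options assumed):

```
# List the GPUs visible on an igt node; a Type 2b node should show
# four Tesla V100-SXM2-16GB devices
nvidia-smi -L

# Query the name, memory size, and maximum SM clock of each GPU
nvidia-smi --query-gpu=name,memory.total,clocks.max.sm --format=csv
```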
Medium compute node (HPE ProLiant DL560 Gen10; 10 nodes)
Each of these nodes has 80 CPU cores and 3TB of physical memory, making it suitable for memory-intensive workloads such as de novo assembly. You can use them by submitting jobs through Grid Engine (see the example job script after the table below).
HPE ProLiant DL560 Gen10 (host name: m01 -- m10)
Component | Model | Quantity | Total per node, etc. |
---|---|---|---|
CPU | Intel Xeon Gold 6148 (20 cores) Base 2.4GHz, Max 3.7GHz | 4 | Total 80 cores |
Memory | 64GB DDR4-2666 | 48 | Total 3,072GB (38.4GB per CPU core) |
Storage | 1TB SATA HDD | 2 | 1TB (RAID1) |
Network | InfiniBand 4xEDR | 1 | 100Gbps |
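A minimal sketch of a Grid Engine job script that requests a medium node. The resource names and values here (`medium`, `s_vmem`, `mem_req`, `def_slot`) are illustrative assumptions; check the site's current queue configuration (for example with `qconf`) before use.

```
#!/bin/bash
#$ -S /bin/bash      # run the job under bash
#$ -cwd              # execute in the submission directory
#$ -l medium         # request a medium node (assumed resource name)
#$ -l s_vmem=64G     # virtual memory limit per slot (assumed value)
#$ -l mem_req=64G    # memory reservation per slot (assumed value)
#$ -pe def_slot 16   # 16 slots (assumed parallel environment name)

# Example payload: a memory-intensive de novo assembly step
# (hypothetical command; replace with your actual program)
./assembler --threads 16 --input reads.fastq
```

Submit with `qsub medium_job.sh` and check status with `qstat`.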
Fat compute node (HPE Superdome Flex; 1 node)
This is a shared-memory compute node with a total of 12TB of memory, configured by connecting two HPE Superdome Flex chassis with the Superdome Flex grid interconnect.
The Fat node is available by application only.
HPE Superdome Flex (host name: fat1)
Component | Model | Quantity | Total per node, etc. |
---|---|---|---|
CPU | Intel Xeon Gold 6154 (18 cores) Base 3.0GHz, Max 3.7GHz | 16 | Total 288 cores |
Memory | 64GB DDR4-2666 | 192 | Total 12,288GB (42.7GB per CPU core) |
Storage | 1.2TB SAS HDD | 4 | 2.4TB (RAID1) |
Network | InfiniBand 4xEDR | 1 | 100Gbps |
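Because the two chassis present a single system image, all 288 cores and 12TB of memory appear as one machine on fat1. Once access has been granted, this can be confirmed with standard Linux tools:

```
# Core count and NUMA layout of the single system image
lscpu | grep -E '^CPU\(s\)|NUMA'

# Total physical memory (should report roughly 12TB)
free -h | head -2
```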
Storage
Analysis storage
Access path | Effective capacity | Usage | Peak performance | Configuration |
---|---|---|---|---|
/lustre7 | 8.0PB | home directories in the general analysis division | 35GB/sec or more | DDN SFA14KXE+SS9012, DDN 1U server, DDN SFA7700X |
/lustre8 | 5.3PB | home directories in the personal genome analysis division | 35GB/sec or more | DDN SFA14KXE+SS9012, DDN 1U server, DDN SFA7700X |
Database storage
Access path | Effective capacity | Usage | Peak performance | Configuration |
---|---|---|---|---|
/lustre9 | 40.5PB | DDBJ work | 150GB/sec | DDN ES400NVX2 + DDN SS9024 |
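All of the file systems above are Lustre, so the standard Lustre client tools work on them. A brief sketch for checking capacity and your own usage (the paths come from the tables above; `USERNAME` is a placeholder):

```
# Capacity and usage of the analysis storage, per OST/MDT and in total
lfs df -h /lustre7

# Your own usage and quota on the same file system
# (replace USERNAME with your account name)
lfs quota -u USERNAME /lustre7
```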