
Hardware

Overview


Compute Nodes





15,424 CPU cores

933.560 TFLOPS
(CPU: 434.360 TFLOPS, GPU: 499.2 TFLOPS)

153.088 TB total memory

Thin nodes
Total: 232 units

Total number of CPU cores: 14,336
Total computing performance: 844.472 TFLOPS
(CPU: 345.272 TFLOPS, GPU: 499.2 TFLOPS)
Total memory capacity: 110.080 TB

Type 1a
AMD EPYC 7501 CPU.

136 nodes
8,704 CPU cores
139.264 TFLOPS
69.632 TB total memory (8GB memory/CPU core)

Type 1b
AMD EPYC 7702 CPU (added in April 2020)

28 nodes
3,584 CPU cores
57.344 TFLOPS
14.336 TB total memory (4GB memory/CPU core)

Type 2a
Intel Xeon Gold 6130 CPU

52 nodes
1,664 CPU cores
111.800 TFLOPS
19.968 TB total memory (12GB memory/CPU core)

Type 2b
GPU installed

16 nodes
384 CPU cores
64GPUs (4 GPU/node)
536.064 TFLOPS
(CPU: 36.864 TFLOPS, GPU: 499.2 TFLOPS)
6.144 TB total memory (16GB memory/CPU core)

Medium node
3TB of shared memory installed

10 nodes
800 CPU cores
61.440 TFLOPS
30.72 TB total memory (38.4GB memory/CPU core)

Fat node
12TB of shared memory

1 node
288 CPU cores
27.648 TFLOPS
12.288 TB total memory (42.7GB memory/CPU core)
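
As a cross-check, the thin, medium, and fat node figures above sum to the system-wide totals listed at the top of this section:

```latex
% Cross-check of the compute-node totals (all figures taken from the breakdown above)
\begin{align*}
\text{CPU cores:}        &\quad 14{,}336 + 800 + 288 = 15{,}424 \\
\text{Peak performance:} &\quad 844.472 + 61.440 + 27.648 = 933.560\ \text{TFLOPS} \\
\text{Memory:}           &\quad 110.080 + 30.72 + 12.288 = 153.088\ \text{TB}
\end{align*}
```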

Storage



Total storage capacity: 57.6PB

Analysis storage
For user home directories in the general analysis division and personal genome analysis division.

Lustre file system
13.3PB

Database storage
For DDBJ database including DRA

Lustre file system
40.5PB

Inter-node interconnect network

InfiniBand 4×EDR 100Gbps fat tree
(For storage, full bisection bandwidth; for compute nodes, connection bandwidth to upstream SW : connection bandwidth to downstream SW = 1:4)
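
As an illustration of the 1:4 ratio on the compute-node side (the 32-port edge switch below is a hypothetical example, not the actual switch configuration):

```latex
% Hypothetical edge switch with 32 node-facing 100Gbps ports under 1:4 oversubscription
\text{downstream} = 32 \times 100\ \text{Gbps} = 3{,}200\ \text{Gbps},
\qquad
\text{upstream} = \frac{3{,}200\ \text{Gbps}}{4} = 800\ \text{Gbps}
```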

Compute nodes

Thin compute node Type 1a (HPE ProLiant DL385 Gen10; 136 nodes)

Compute nodes with AMD EPYC 7501 processors.


HPE ProLiant DL385 Gen10 (host name: at001 -- at136)

component | model number | number | computation performance per node, etc.
CPU | AMD EPYC 7501 (32 cores), Base 2.0GHz, Max 3.0GHz | 2 | Total 64 cores
Memory | 32GB DDR4-2666 | 16 | Total 512GB (8GB per CPU core)
Storage | 1.6TB NVMe SSD x1, 3.2TB NVMe SSD x1 | |
Network | InfiniBand 4xEDR | 1 | 100Gbps

Thin compute node Type 1b (DELL PowerEdge R6525; 28 nodes)

Compute nodes with AMD EPYC 7702 processors.

DELL PowerEdge R6525 (host name: at137 -- at164)

component | model number | number | computation performance per node, etc.
CPU | AMD EPYC 7702 (64 cores), Base 2.0GHz, Max 3.35GHz | 2 | Total 128 cores
Memory | 32GB DDR4-2666 | 16 | Total 512GB (4GB per CPU core)
Storage | 1.6TB NVMe SSD x1, 900GB SAS HDD x1 | |
Network | InfiniBand 4xEDR | 1 | 100Gbps

Thin compute node Type 2a (HPE Apollo 2000 Gen10; 52 nodes)

Compute nodes with Intel Xeon processors.

HPE Apollo 2000 Gen10 (host name: it001 -- it052)

component | model number | number | computation performance per node, etc.
CPU | Intel Xeon Gold 6130 (16 cores), Base 2.1GHz, Max 3.7GHz | 2 | Total 32 cores
Memory | 32GB DDR4-2666 | 12 | Total 384GB (12GB per CPU core)
Storage | 1.6TB NVMe SSD x1, 3.2TB NVMe SSD x1 | |
Network | InfiniBand 4xEDR | 1 | 100Gbps

Thin compute node Type 2b (HPE Apollo 6500 Gen10; 16 nodes)

Compute nodes with four GPUs on each node.

HPE Apollo 6500 Gen10 (host name: igt001 -- igt016)

component | model number | number | computation performance per node, etc.
CPU | Intel Xeon Gold 6136 (12 cores), Base 3.0GHz, Max 3.7GHz | 2 | Total 24 cores
Memory | 32GB DDR4-2666 | 12 | Total 384GB (16GB per CPU core)
GPU | NVIDIA Tesla V100 SXM2 | 4 |
Storage | 1.6TB NVMe SSD x1, 3.2TB NVMe SSD x1 | |
Network | InfiniBand 4xEDR | 1 | 100Gbps

(Reference) GPU Specifications

Property | Value
name | NVIDIA Tesla V100 SXM2
number of cores | 640
clock speed | 1,455MHz
peak single-precision floating-point performance | 15 TFLOPS
peak double-precision floating-point performance | 7.5 TFLOPS
single-core theoretical performance | 1.3 GFLOPS
memory size | 16GB (HBM2)
memory bandwidth | 900GB/sec
memory bandwidth per 1 GFLOPS | 266GB/sec
connection bandwidth | 8GB/sec (PCIe2.0 x16)
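
For reference, the 499.2 TFLOPS system-wide GPU figure corresponds to the 64 V100 GPUs of the Type 2b nodes; dividing it out gives the per-GPU double-precision value used in the totals, slightly above the 7.5 TFLOPS listed in the table (presumably the SXM2 boost-clock figure):

```latex
% Relation between the per-GPU and system-wide GPU performance figures
16\ \text{nodes} \times 4\ \text{GPUs/node} = 64\ \text{GPUs},
\qquad
\frac{499.2\ \text{TFLOPS}}{64\ \text{GPUs}} = 7.8\ \text{TFLOPS per GPU (double precision)}
```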

Medium compute node (HPE ProLiant DL560 Gen10; 10 nodes)

The medium node hardware (HPE ProLiant DL560 Gen10), together with the fat node hardware (HPE Superdome Flex), does not support Ubuntu Linux and therefore could not be migrated from CentOS 7.9 to Ubuntu Linux 22.04 during the scheduled maintenance in November 2023.

Each medium node provides 80 CPU cores and 3 TB of physical memory, making it suitable for memory-intensive workloads such as de novo assembly. You can use these nodes by submitting jobs through Grid Engine; a submission sketch follows the hardware table below.

HPE ProLiant DL560 Gen10 (host name: m01 -- m10)

component | model number | number | computation performance per node, etc.
CPU | Intel Xeon Gold 6148 (20 cores), Base 2.4GHz, Max 3.7GHz | 4 | Total 80 cores
Memory | 64GB DDR4-2666 | 48 | Total 3,072GB (38.4GB per CPU core)
Storage | 1TB SATA HDD | 2 | 1TB (RAID1)
Network | InfiniBand 4xEDR | 1 | 100Gbps
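
The sketch below shows one way to submit a memory-intensive job to a medium node from Python by wrapping qsub. The directives used (medium, def_slot, s_vmem, mem_req) and the script name run_assembly.sh are assumed names based on common Grid Engine setups, not confirmed values for this system; check the local user guide before relying on them.

```python
# Minimal sketch: submitting a memory-intensive job to a medium node via Grid Engine.
# The resource and parallel-environment names in the directives below are assumptions,
# not confirmed values for this cluster.
import subprocess
import tempfile

JOB_SCRIPT = """#!/bin/bash
#$ -S /bin/bash
#$ -cwd
#$ -l medium            # request a medium node (assumed resource name)
#$ -pe def_slot 8       # 8 slots (assumed parallel environment name)
#$ -l s_vmem=64G        # per-slot virtual memory limit (assumed)
#$ -l mem_req=64G       # per-slot memory reservation (assumed)
./run_assembly.sh       # hypothetical memory-intensive command
"""

def submit(script_text: str) -> str:
    """Write the job script to a file and hand it to qsub."""
    with tempfile.NamedTemporaryFile("w", suffix=".sh", delete=False) as f:
        f.write(script_text)
        path = f.name
    result = subprocess.run(["qsub", path], check=True,
                            capture_output=True, text=True)
    return result.stdout.strip()  # qsub prints the assigned job ID

if __name__ == "__main__":
    print(submit(JOB_SCRIPT))
```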

Fat compute node (HPE Superdome Flex; 1 node)

The fat node hardware (HPE Superdome Flex), like the medium node hardware (HPE ProLiant DL560 Gen10), does not support Ubuntu Linux and therefore could not be migrated from CentOS 7.9 to Ubuntu Linux 22.04 during the scheduled maintenance in November 2023.

The fat node provides a total of 12 TB of shared memory and is configured by connecting two HPE Superdome Flex chassis with Superdome Flex grid interconnects.

The fat node can be used by application only.

HPE Superdome Flex (host name: fat1)

component | model number | number | computation performance per node, etc.
CPU | Intel Xeon Gold 6154 (18 cores), Base 3.0GHz, Max 3.7GHz | 16 | Total 288 cores
Memory | 64GB DDR4-2666 | 192 | Total 12,288GB (42.7GB per CPU core)
Storage | 1.2TB SAS HDD | 4 | 2.4TB (RAID1)
Network | InfiniBand 4xEDR | 1 | 100Gbps
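
The 27.648 TFLOPS figure for the fat node follows from the usual Skylake-SP peak formula, assuming 32 double-precision FLOPs per core per cycle (two AVX-512 FMA units) at the base clock:

```latex
% Fat node theoretical peak, assuming 32 DP FLOPs per core per cycle
288\ \text{cores} \times 3.0\ \text{GHz} \times 32\ \tfrac{\text{FLOP}}{\text{core}\cdot\text{cycle}}
= 27{,}648\ \text{GFLOPS} = 27.648\ \text{TFLOPS}
```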

Storage

Analysis storage

access path | Effective Capacity | Usage | Peak Performance | Configuration
/lustre7 | 8.0PB | home directories in the general analysis division | 35GB/sec or more | DDN SFA14KXE + SS9012, DDN 1U server, DDN SFA7700X
/lustre8 | 5.3PB | home directories in the personal genome analysis division | 35GB/sec or more | DDN SFA14KXE + SS9012, DDN 1U server, DDN SFA7700X

Database storage

access path | Effective Capacity | Usage | Peak Performance | Configuration
/lustre9 | 40.5PB | DDBJ work | 150GB/sec | DDN ES400NVX2 + DDN SS9024
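
As a small usage sketch (assuming the access paths listed above are mounted on the host you are logged in to), the following reports total and free space for each file system; it relies only on the Python standard library, and the path list is taken from the tables above.

```python
# Sketch: report capacity and free space for the Lustre file systems listed above.
# Assumes /lustre7, /lustre8, and /lustre9 are mounted on the current host.
import shutil

LUSTRE_PATHS = ["/lustre7", "/lustre8", "/lustre9"]

for path in LUSTRE_PATHS:
    try:
        usage = shutil.disk_usage(path)
    except FileNotFoundError:
        print(f"{path}: not mounted on this host")
        continue
    tib = 1024 ** 4
    print(f"{path}: total {usage.total / tib:.1f} TiB, free {usage.free / tib:.1f} TiB")
```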