Hardware

Overview

Classification: Compute Nodes

Total
  15,424 CPU cores
  933.560 TFLOPS (CPU: 434.360 TFLOPS, GPU: 499.2 TFLOPS)
  153.088 TB total memory

Thin nodes, Type 1a
  AMD EPYC 7501 CPU
  136 nodes, 8,704 CPU cores
  139.264 TFLOPS
  69.632 TB total memory (8GB memory/CPU core)

Thin nodes, Type 1b
  AMD EPYC 7702 CPU (expansion in April 2020)
  28 nodes, 3,584 CPU cores
  57.344 TFLOPS
  14.336 TB total memory (4GB memory/CPU core)

Thin nodes, Type 2a
  Intel Xeon Gold 6130 CPU
  52 nodes, 1,664 CPU cores
  111.800 TFLOPS
  19.968 TB total memory (12GB memory/CPU core)

Thin nodes, Type 2b
  GPGPU installed
  16 nodes, 384 CPU cores, 64 GPUs (4 GPUs/node)
  536.064 TFLOPS (CPU: 36.864 TFLOPS, GPU: 499.2 TFLOPS)
  6.144 TB total memory (16GB memory/CPU core)

Medium nodes
  3TB of shared memory per node
  10 nodes, 800 CPU cores
  61.440 TFLOPS
  30.72 TB total memory (38.4GB memory/CPU core)

Fat node
  Two nodes connected to form 12TB of shared memory
  2 nodes, 288 CPU cores
  27.648 TFLOPS
  12.288 TB total memory (42.7GB memory/CPU core)
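The headline figures above can be cross-checked from the per-type counts. A minimal sketch, where the per-cycle FLOP rates (8 for the AMD EPYC CPUs with AVX2 FMA, 32 for the Intel Xeon Gold CPUs with AVX-512), the 7.8 DP TFLOPS per GPU (the value implied by 499.2 TFLOPS / 64 GPUs), and the 3.0GHz base clock for the Fat node are assumptions, not stated on this page:

```python
# Cross-check of the headline figures in the overview above.
# Assumed (not stated on this page): 8 DP FLOPs/cycle for the AMD EPYC CPUs
# (AVX2 FMA), 32 for the Intel Xeon Gold CPUs (AVX-512), and 7.8 DP TFLOPS
# per V100 GPU (the value implied by 499.2 TFLOPS / 64 GPUs). A 3.0GHz base
# clock is assumed for the Fat node so the listed 27.648 TFLOPS is reproduced.
node_types = {
    # name: (nodes, cores per node, base clock GHz, DP FLOPs per cycle)
    "Thin 1a": (136, 64, 2.0, 8),
    "Thin 1b": (28, 128, 2.0, 8),
    "Thin 2a": (52, 32, 2.1, 32),
    "Thin 2b": (16, 24, 3.0, 32),
    "Medium":  (10, 80, 2.4, 32),
    "Fat":     (2, 144, 3.0, 32),
}

total_cores = sum(n * c for n, c, _, _ in node_types.values())
cpu_tflops = sum(n * c * ghz * fpc for n, c, ghz, fpc in node_types.values()) / 1000
gpu_tflops = 16 * 4 * 7.8  # 16 Type 2b nodes x 4 GPUs x 7.8 DP TFLOPS

print(total_cores)                         # 15424, as listed above
print(round(cpu_tflops + gpu_tflops, 1))   # ~933.6, matching 933.560 TFLOPS
```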
Storage

Total storage capacity: 47.1PB
Large capacity high-speed storage
  Storage area for analysis (※1)
  Lustre file system
  17.1PB

Large capacity archive storage
  Storage area for databases (※2)
  Spectrum Scale file system + tapes
  30PB (disk capacity 15PB, hierarchical tape storage 15PB)

Inter-node interconnect network
  InfiniBand 4×EDR 100Gbps fat tree
  (Full bisection bandwidth for storage; for compute nodes, connection bandwidth to the upstream switch : connection bandwidth to the downstream switch = 1:4)

  • ※1. Storage area for analysis: contains the user home areas of the general analysis area and the personal genome analysis area.
  • ※2. Storage area for databases: contains DDBJ databases such as DRA; these can be accessed from the general analysis area.
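As a worked example of what the 1:4 oversubscription means for compute-node traffic (only the 100Gbps link rate and the 1:4 ratio come from the description above; the fully congested all-to-all scenario is an assumption):

```python
# Per-node bandwidth on the 100Gbps EDR fat tree described above.
link_gbps = 100   # InfiniBand 4xEDR link speed per compute node
oversub = 4       # downstream : upstream switch bandwidth = 4:1 for compute nodes

# Traffic that stays under one leaf switch can use the full link rate;
# in the worst case, when every node's traffic must cross the upstream
# links at once, each node's share drops by the oversubscription factor.
local_gbps = link_gbps
worst_case_gbps = link_gbps / oversub
print(local_gbps, worst_case_gbps)   # 100 25.0
```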

Compute nodes

Thin compute node Type 1a (HPE ProLiant DL385 Gen10; 136 computers)

Compute nodes with AMD EPYC 7501 processors.


HPE ProLiant DL385 Gen10 (host name: at001 -- at136)

CPU: 2 x AMD EPYC 7501 (32 cores, base 2.0GHz, max 3.0GHz), 64 cores total
Memory: 16 x 32GB DDR4-2666, 512GB total (8GB per CPU core)
Storage: 1 x 1.6TB NVMe SSD, 1 x 3.2TB NVMe SSD
Network: 1 x InfiniBand 4xEDR, 100Gbps

Thin compute node Type 1b (DELL PowerEdge R6525; 28 computers)

Compute nodes with AMD EPYC 7702 processors.

DELL PowerEdge R6525 (host name: at137 -- at164)

CPU: 2 x AMD EPYC 7702 (64 cores, base 2.0GHz, max 3.35GHz), 128 cores total
Memory: 16 x 32GB DDR4-2666, 512GB total (4GB per CPU core)
Storage: 1 x 1.6TB NVMe SSD, 1 x 900GB SAS HDD
Network: 1 x InfiniBand 4xEDR, 100Gbps

Thin compute node Type 2a (HPE Apollo 2000 Gen10; 52 computers)

Compute nodes with Intel Xeon processors.

HPE Apollo 2000 Gen10 (host name: it001 -- it052)

CPU: 2 x Intel Xeon Gold 6130 (16 cores, base 2.1GHz, max 3.7GHz), 32 cores total
Memory: 12 x 32GB DDR4-2666, 384GB total (12GB per CPU core)
Storage: 1 x 1.6TB NVMe SSD, 1 x 3.2TB NVMe SSD
Network: 1 x InfiniBand 4xEDR, 100Gbps

Thin compute node Type 2b (HPE Apollo 6500 Gen10; 16 computers)

Compute nodes with four GPUs on each node.

HPE Apollo 6500 Gen10 (host name: igt001 -- igt016)

CPU: 2 x Intel Xeon Gold 6136 (12 cores, base 3.0GHz, max 3.7GHz), 24 cores total
Memory: 12 x 32GB DDR4-2666, 384GB total (16GB per CPU core)
GPU: 4 x NVIDIA Tesla V100 SXM2
Storage: 1 x 1.6TB NVMe SSD, 1 x 3.2TB NVMe SSD
Network: 1 x InfiniBand 4xEDR, 100Gbps

(Reference) GPU Specifications

Name: NVIDIA Tesla V100 SXM2
Number of cores: 5,120 CUDA cores (640 Tensor Cores)
Clock speed: 1,455MHz
Peak single-precision floating-point performance: 15 TFLOPS
Peak double-precision floating-point performance: 7.5 TFLOPS
Single-core theoretical performance: 1.3 GFLOPS
Memory size: 16GB (HBM2)
Memory bandwidth: 900GB/sec
Connection bandwidth: 8GB/sec (PCIe 2.0 x16)
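The bandwidth and peak figures above determine when a GPU kernel is memory-bound. A small sketch of the standard roofline arithmetic (not from the original page), using the table's double-precision peak and memory bandwidth:

```python
# Roofline balance point for the V100 figures in the table above:
# a kernel needs at least this many double-precision FLOPs per byte
# of memory traffic to be compute-bound rather than bandwidth-bound.
peak_dp_flops = 7.5e12   # 7.5 TFLOPS double precision (table above)
mem_bw_bytes = 900e9     # 900 GB/sec memory bandwidth (table above)

balance_flops_per_byte = peak_dp_flops / mem_bw_bytes
print(round(balance_flops_per_byte, 1))   # 8.3
```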

Medium compute node (HPE ProLiant DL560 Gen10; 10 computers)

These are compute nodes with 80 CPU cores and 3TB of physical memory each, suitable for running large memory-intensive programs such as de novo assemblers. They can be used by submitting jobs under UGE (Univa Grid Engine).
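For illustration, a UGE job script along these lines could request a medium node; the resource and parallel-environment names (`medium`, `s_vmem`, `def_slot`) are assumptions, to be checked against the site's own submission guide:

```shell
#!/bin/sh
# Hypothetical UGE job script for a medium (3TB shared-memory) node.
# The queue/resource names below are assumptions, not taken from this page.
#$ -S /bin/sh
#$ -cwd
#$ -l medium            # request the medium-node queue (assumed name)
#$ -l s_vmem=64G        # memory per slot (assumed resource name)
#$ -pe def_slot 16      # 16 slots on one node (assumed PE name)

# Illustrative payload: a memory-hungry de novo assembly step.
./run_assembly.sh
```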

HPE ProLiant DL560 Gen10 (host name: m01 -- m10)

CPU: 4 x Intel Xeon Gold 6148 (20 cores, base 2.4GHz, max 3.7GHz), 80 cores total
Memory: 48 x 64GB DDR4-2666, 3,072GB total (38.4GB per CPU core)
Storage: 2 x 1TB SATA HDD, 1TB (RAID1)
Network: 1 x InfiniBand 4xEDR, 100Gbps

Fat compute node (HPE Superdome Flex; one computer)

This compute node uses the NUMA (Non-Uniform Memory Access) architecture, which connects multiple chassis to build a large shared-memory compute system.

You can use FAT nodes by application only.

HPE Superdome Flex (host name: fat1)

CPU: 16 x Intel Xeon Gold 6154 (18 cores, base 3.0GHz, max 3.7GHz), 288 cores total
Memory: 192 x 64GB DDR4-2666, 12,288GB total (42.7GB per CPU core)
Storage: 2 x 1.2TB SAS HDD, 1.2TB (RAID1)
Network: 1 x InfiniBand 4xEDR, 100Gbps

Storage

High-speed storage: Lustre file systems

Access path: /lustre6
  Effective capacity: 3.8PB
  Usage: DDBJ work
  Peak performance: 35GB/sec
  Configuration: DDN SFA14KXE + SS8462, DDN 1U server, DDN SFA7700X

Access path: /lustre7
  Effective capacity: 8.0PB
  Usage: home area of the general analysis area
  Peak performance: 35GB/sec or more
  Configuration: DDN SFA14KXE + SS9012, DDN 1U server, DDN SFA7700X

Access path: /lustre
  Effective capacity: 5.3PB
  Usage: home area of the personal genome analysis area
  Peak performance: 35GB/sec or more
  Configuration: DDN SFA14KXE + SS9012, DDN 1U server, DDN SFA7700X
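Since these areas are Lustre file systems, the standard Lustre client commands apply. Illustrative invocations (the `/lustre7` path comes from the table above; availability of `lfs` on the login nodes, and the example file path, are assumptions):

```shell
# Show your usage and quota on the general-analysis home area.
lfs quota -u "$USER" /lustre7

# Show the stripe layout of a file (wide striping helps large sequential I/O).
lfs getstripe /lustre7/path/to/your/data.fastq
```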

Large archive storage

This storage is used for DDBJ operations, such as storing DDBJ databases including DRA, and is not open to general users. To increase capacity, it is a hierarchical storage system that combines a high-speed disk system with a tape system that has a low per-capacity cost.

Large storage disk system: IBM Elastic Storage Server GL6S, 12.9PB effective, read 36.6GB/s, write 29.0GB/s
Large storage tape system: IBM TS4500 Tape Library, 15PB (uncompressed)
Tape cartridge: IBM 3592JD cartridges
Tape drive: 8 x IBM TS1155, 360MB/s per drive (read/write)
Hierarchical storage management system: IBM Spectrum Scale server