August 17, 2017

System configuration diagram

As of 2012, the system is configured as shown below. A system enhancement that will double the CPU and disk resources is planned for 2014.

Compute node configuration

As of March 2012, compute nodes of the following specifications and types are available.

Node type | CPU model | No. of CPUs | No. of cores (per CPU) | No. of cores (total per node) | Memory capacity | GPGPU | SSD | Network (per node) | Host name | No. of nodes | No. of cores (total) | Remarks
Fat compute node | Intel Xeon E7-8837 | 96 | 8 | 768 | 10TB | N/A | N/A | InfiniBand QDR×6 | fat | 1 | 768 | HyperThreading: OFF, TurboBoost: ON
Medium compute node | Intel Xeon E7-4870 | 8 | 10 | 80 | 2TB | N/A | N/A | InfiniBand QDR×1 | m1, m2 | 2 | 160 | HyperThreading: OFF, TurboBoost: ON
Thin compute node | Intel Xeon E5-2670 | 2 | 8 | 16 | 64GB | N/A | N/A | InfiniBand QDR×1 | t141-t352 | 212 | 3392 | HyperThreading: OFF, TurboBoost: ON
Thin compute node (equipped with GPGPU) | Intel Xeon E5-2670 | 2 | 8 | 16 | 64GB | Tesla M2090×1 | SSD (400GB)×1 per node | InfiniBand QDR×1 | t077-t140 | 64 | 1024 | HyperThreading: OFF, TurboBoost: ON
Thin compute node (equipped with SSD) | Intel Xeon E5-2670 | 2 | 8 | 16 | 64GB | N/A | SSD (400GB)×1 per node | InfiniBand QDR×1 | t001-t076 | 76 | 1216 | HyperThreading: OFF, TurboBoost: ON

Please note that the CPU model differs depending on whether the node is a Fat, Medium, or Thin compute node. In addition, some Thin nodes may be removed from the pool above and used for other purposes, so the number of available nodes may change without prior notice. Please see the system operation status page for the number of nodes currently available.

Specifications for each CPU

The basic specifications of each CPU are as follows (cited from Intel's website):

Processor name | Xeon E5-2670 | Xeon E7-4870 | Xeon E7-8837
Codename | Sandy Bridge-EP | Westmere-EX | Westmere-EX
Release timing | First quarter of 2012 | Second quarter of 2011 | Second quarter of 2011
Number of cores | 8 | 10 | 8
Number of threads | 16 | 20 | 8
Clock speed | 2.6GHz | 2.4GHz | 2.66GHz
Theoretical peak performance (per CPU) | 166.4GFLOPS | 76.8GFLOPS | 85.12GFLOPS
Maximum Turbo Boost frequency | 3.3GHz | 2.8GHz | 2.8GHz
Cache | 20MB | 30MB Intel Smart Cache | 24MB Intel Smart Cache
Bus/Core ratio | 33 | 18 | 20
Bus type | QPI | QPI | QPI
System bus | 8GT/s | 6.4GT/s | 6.4GT/s
No. of QPI links | 2 | 1 | 1
Instruction set extensions | AVX | SSE4.1/4.2 | SSE4.1/4.2

A characteristic of the E5-2670 (development codename Sandy Bridge) is its support for Intel AVX, a new instruction set extension. With AVX the operation width is double that of SSE, which can dramatically improve floating-point performance. Software that supports AVX should preferably be run on the Thin compute nodes.
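
As a rough illustration of how the peak figures in the table above follow from the clock speed and core count, the sketch below recomputes two of them. The per-cycle FLOP counts used here (8 double-precision FLOPs per cycle with AVX, 4 with SSE) are the usual values for these microarchitectures and are an assumption, not figures taken from this page.

```python
# Theoretical peak (double precision) = clock [GHz] x cores x FLOPs per cycle per core.
def peak_gflops(clock_ghz, cores, flops_per_cycle):
    return clock_ghz * cores * flops_per_cycle

# Xeon E5-2670: AVX allows 8 double-precision FLOPs per core per cycle.
print(peak_gflops(2.6, 8, 8))    # 166.4 GFLOPS, matching the table
# Xeon E7-8837: SSE allows 4 double-precision FLOPs per core per cycle.
print(peak_gflops(2.66, 8, 4))   # 85.12 GFLOPS, matching the table
```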

The specifications of the GPGPU installed in the GPU-equipped Thin compute nodes are as follows:

GPU name | Tesla M2090
Double-precision floating-point peak performance | 665GFLOPS
Single-precision floating-point peak performance | 1331GFLOPS
Number of CUDA cores | 512
Memory size | 6GB (GDDR5)
Memory bandwidth (ECC OFF) | 177GB/sec
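
The single- and double-precision peaks above can be derived in the same way. The 1.3 GHz processor clock and the Fermi rates (2 FLOPs per CUDA core per cycle in single precision, half that rate in double precision) are standard published figures for the M2090 and are assumptions here, not values listed in the table.

```python
cuda_cores = 512        # from the table above
clock_ghz = 1.3         # M2090 processor clock (assumed; not listed in the table)

sp_peak_gflops = cuda_cores * 2 * clock_ghz   # fused multiply-add: 2 FLOPs per core per cycle
dp_peak_gflops = sp_peak_gflops / 2           # Fermi runs double precision at half the SP rate

print(sp_peak_gflops, dp_peak_gflops)         # roughly 1331 and 665 GFLOPS, matching the table
```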

Each compute node is connected to a full-bisection InfiniBand switch fabric, so nodes can communicate without affecting the bandwidth available to other nodes.

Recommended purposes for each compute node

Fat compute node

A Fat compute node is a server with a Non-Uniform Memory Access (NUMA) architecture equipped with 10 TB of physical memory. It is a large server in which a single process can use a single memory address space of up to 10 TB, and it is therefore suited to multi-threaded programs that require a large memory address space within one process (such as de novo assemblers for large-scale assembly, e.g. Velvet and AllpathsLG). However, its processor is one generation older than that of the Thin compute nodes. There is only one Fat node, and it is shared by all users, so please examine in advance the program to be used, the required memory size, the calculation algorithm to be tried, and so forth.

Medium compute node

This compute node has 80 cores and 2 TB of physical memory. It is suitable for running programs that require a large amount of memory, though not as much as the Fat compute node provides.

Thin compute node

This compute node is equipped with two Intel Xeon E5-2670 CPUs, the latest server CPU as of April 2012. Since the Thin compute nodes offer the highest per-CPU performance in this configuration, please use them for MPI-parallel applications, embarrassingly parallel jobs with no dependencies among tasks, and jobs that perform a large amount of parallel IO from multiple nodes.
In addition, some of the nodes are equipped with a GPGPU (Tesla M2090) and/or an SSD.

In principle, these compute nodes need to be used via the job management system. For specific procedures, please see How to use the system.

Internal network configuration

The compute nodes are connected in full bisection with InfiniBand QDR × 1. In addition, all compute nodes are connected to the InfiniBand core switch group, and the core switches are connected to the firewall for the Supercomputer with 10 GbE × 4.

Storage configuration

The NIG Cluster provides the following disk domains classified largely by performance and purpose:

Type of storage | Mount directory | Mount protocol | Local/remote | Available compute nodes | Access speed | Main purpose or remarks
High-speed domain | /lustre1, /lustre2 | Lustre | Remote | Accessible from all types of compute nodes | High; supports highly parallel writing from multiple nodes | Home directory and scratch area for job output
Power-saving domain | Normally none on research-purpose nodes | NFS, rsync | Remote | - | Low; not suited for highly parallel access | Backup of business data and home directories
SSD domain | /ssd | Direct mount | Local | Available on SSD-equipped nodes (xx_ssd.q) | Extremely high | Scratch location for job data (deleted after a certain period); cannot be shared among nodes

High-speed domain

This domain consists of the Lustre file system (Lustre), a high-performance file system designed for large-capacity parallel IO from multiple nodes. NIG Super uses it for the user home directories and as the output destination for jobs. Note, however, that Lustre does not give high performance in every case; for example, accessing a very large number of small files (tens of thousands or more) is slow. The stripe settings listed below can be tuned for large parallel IO; a brief example follows the table.

Item name | Setting value
File system capacity | 1 PB (2 file systems)
Stripe count (system default) | 1
Stripe size | 1048576 bytes (1 MiB)
Quota size per user | 1 TB (expansion possible by application)
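
Because Lustre spreads a file over storage targets according to its stripe settings, jobs that read or write a single large file in parallel from many nodes may benefit from raising the stripe count above the default of 1. The sketch below assumes the standard `lfs` client command is available on the node; the directory path is a placeholder under the user's home area.

```python
import subprocess

# Hypothetical directory under the user's Lustre home area.
target_dir = "/lustre1/home/username/large_io_dir"

# Show the current stripe settings of the directory.
subprocess.run(["lfs", "getstripe", target_dir], check=True)

# Have new files created in this directory striped over 4 storage targets.
subprocess.run(["lfs", "setstripe", "-c", "4", target_dir], check=True)
```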

By submitting a computer resource expansion application, you can have the quota limit raised to the value you need, so please apply if necessary. While our policy is to assign capacity that best suits each user's request, please note that we may have to refuse an exceptional request, such as 100 TB for several years. Please also note that we review usage records every fiscal year and may reduce the assigned capacity.

Power-saving domain

This domain is mainly used for home directory backups and business-related purposes, and it is not currently open as a work area that general users' jobs can write to directly. Details of its configuration are therefore omitted here.

SSD domain

The SSD installed in the SSD-equipped nodes described in the hardware configuration section is mounted at /ssd on those nodes and can be used from jobs running there. It is extremely effective for jobs that read or write a large number of small files. However, /ssd is not shared with the login node, so to use it for this purpose the job script must copy the input data from the home directory to /ssd before the calculation step and copy any results written to /ssd back to the home directory before the job completes, as sketched below.
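
A minimal sketch of this staging pattern, written here in Python. The JOB_ID environment variable, the ./analyse command, and the ~/project paths are placeholders and assumptions for illustration, not names defined by the system.

```python
import os
import shutil
import subprocess

# Per-job scratch directory on the node-local SSD (JOB_ID is assumed to be
# exported by the job management system; fall back to the process ID).
job_id = os.environ.get("JOB_ID", str(os.getpid()))
work_dir = os.path.join("/ssd", os.environ["USER"], job_id)

# 1. Stage the input data from the shared home directory onto the local SSD.
shutil.copytree(os.path.expanduser("~/project/input"), work_dir)

# 2. Run the analysis against the fast local copy (placeholder command).
subprocess.run(["./analyse", "--input", work_dir,
                "--output", os.path.join(work_dir, "out")],
               cwd=os.path.expanduser("~/project"), check=True)

# 3. Copy the results back to the home directory before the job ends,
#    since /ssd is local to the node and is cleaned up after a certain period.
shutil.copytree(os.path.join(work_dir, "out"),
                os.path.expanduser("~/project/results/" + job_id))

# 4. Remove the scratch copy from the SSD.
shutil.rmtree(work_dir)
```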