UA HPC/HTC/Storage Resources Available for Buy-In Expansion

SGI ALTIX UV

Shared-memory processor (SMP) system, optimized for highly parallel, high-memory applications, typically programmed with OpenMP.  Specialized NUMAlink 5 processor and memory interconnects allow every processor in the system to directly access all other processors and all of the memory, regardless of the CPU to which the memory is physically attached, so the system appears to applications as a single set of CPUs and a single memory space.  Because of this special interconnect and memory-access capability, the SMP has the highest cost per core of the systems available.  The numbers of nodes, processors, cores per processor, and memory per core for the centrally funded system are listed in the table below.  Buy-in cost estimates are also provided in the table, based on the initial, centrally funded configuration and costs.
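As a rough illustration of this shared-memory programming style, the following is a minimal OpenMP sketch in C (illustrative only, not code specific to the UA systems): a single parallel loop spreads work across many cores that all read and write one common array.

```c
/* Minimal OpenMP sketch: one large array shared by all threads.
   Compile with an OpenMP-capable compiler, e.g. gcc -fopenmp sum.c */
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

int main(void)
{
    const long n = 100000000;               /* large, memory-resident array */
    double *a = malloc(n * sizeof *a);
    if (a == NULL) return 1;

    double sum = 0.0;

    /* Every thread sees the same array; the reduction combines partial sums. */
    #pragma omp parallel for reduction(+:sum)
    for (long i = 0; i < n; i++) {
        a[i] = (double)i;
        sum += a[i];
    }

    printf("max OpenMP threads: %d, sum = %g\n", omp_get_max_threads(), sum);
    free(a);
    return 0;
}
```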

SGI ALTIX ICE 8400

Distributed-memory system (cluster), optimized for highly parallel applications, typically programmed with MPI.  QDR InfiniBand interconnects allow all processors in the system to communicate with one another.  The cluster has a lower cost per core than the SMP and is the most common type of High Performance Computing (HPC) system used for scientific and engineering parallel computing applications.  The numbers of nodes, processors, cores per processor, and memory per core for the centrally funded system are listed in the table below.  Buy-in cost estimates are also provided in the table, based on the initial, centrally funded configuration and costs.
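By contrast with the SMP, a distributed-memory code runs one process per core, each with its own private memory, and exchanges data explicitly over the interconnect. A minimal MPI sketch in C (illustrative only, not code specific to the UA systems):

```c
/* Minimal MPI sketch: each rank computes a partial sum in its own memory,
   then MPI_Reduce combines the results on rank 0.
   Build and run with an MPI toolchain, e.g. mpicc sum.c && mpirun -np 12 ./a.out */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each process works on its own slice of the problem. */
    long local = 0;
    for (long i = rank; i < 1000000; i += size)
        local += i;

    long total = 0;
    MPI_Reduce(&local, &total, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("processes: %d, total = %ld\n", size, total);

    MPI_Finalize();
    return 0;
}
```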

IBM IDATAPLEX

High Throughput Computing (HTC) system, optimized for serial (single-core) and small parallel applications.  The nodes are independent and are connected to the network by 1 Gb Ethernet.  The HTC systems have the lowest cost per core and are the most economical choice for applications that are not programmed to be highly parallel.  Each node contains 12 cores (dual-processor, 6 cores per processor) and can run one parallel application using up to 12 cores, or up to 12 serial applications.  This system is designed for projects that may require hundreds, thousands, or tens of thousands of small, independent jobs for data processing and analysis, as is typical of image processing and many life science applications.  The numbers of nodes, processors, cores per processor, and memory per core for the centrally funded system are listed in the table below.  Buy-in cost estimates are also provided in the table, based on the initial, centrally funded configuration and costs.
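A typical HTC workload is a small serial program launched many times, once per input file, with no communication between the jobs. A minimal sketch in C of such a single-core job (the file-size "analysis" is purely a placeholder, not a UA-provided tool):

```c
/* Minimal serial (single-core) job: process one input file named on the
   command line.  Many independent copies of this program, each given a
   different file, make up a typical HTC workload. */
#include <stdio.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s input-file\n", argv[0]);
        return 1;
    }

    FILE *f = fopen(argv[1], "rb");
    if (f == NULL) {
        perror(argv[1]);
        return 1;
    }

    /* Placeholder "analysis": count the bytes in the file. */
    long bytes = 0;
    while (fgetc(f) != EOF)
        bytes++;
    fclose(f);

    printf("%s: %ld bytes\n", argv[1], bytes);
    return 0;
}
```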

Initial Centrally Funded Systems | Cores | Sockets | Nodes | Description | Cores Per Node | $ Per Core | $ Per GB | $ Per Node
SGI Altix UV 1000 | 1776 | 116 | 58 | 2.66 GHz, dual-processor, 8-core Xeon Westmere EX, NUMAlink 5 | 16 | $678 | $85 | $10,851
SGI Altix ICE 8400 | 2748 | 458 | 229 | 2.66 GHz, dual-processor, 6-core Xeon Westmere EP, QDR InfiniBand | 12 | $369 | $62 | $4,433
IBM iDataPlex | 1248 | 208 | 104 | 2.66 GHz, dual-processor, 6-core Xeon Westmere EP, 1 Gb Ethernet | 12 | $259 | $43 | $3,106
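As a rough consistency check on the buy-in figures (the listed prices are rounded): an ICE 8400 node has 12 cores, and 12 × $369 ≈ $4,428, close to the listed $4,433 per node; likewise a UV node has 16 cores, and 16 × $678 ≈ $10,848, close to the listed $10,851 per node.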

DDN SFA10000 STORAGE

High-capacity, high-performance parallel storage array with NFS and GPFS connections to the UA HPC and HTC systems via 10 Gb Ethernet and QDR InfiniBand.  The initial, centrally funded array contains 350 TB of storage (350 TB raw, 280 TB formatted with redundancy).  The costs for adding storage capacity to the DDN array are listed in the table below.

DDN Storage SFA 10000 | HDD Qty | TB per Drive | $ Cost | $/TB Raw | $/TB Formatted
2 TB SATA | 10 | 2.0 | $3,550 | $178 | $222
600 GB SAS | 10 | 0.6 | $5,833 | $972 | $1,215
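The raw and formatted prices differ by the redundancy overhead noted above (280 TB usable out of 350 TB raw, roughly 80%).  For example, ten 2 TB SATA drives provide 20 TB raw, so $3,550 / 20 TB ≈ $178 per raw TB, and the same $3,550 spread over the 16 TB of formatted capacity (20 TB × 0.8) comes to about $222 per formatted TB.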
