
RA.MINES.EDU - Overview

The Colorado School of Mines (CSM) Golden Energy Computing Organization acquired the high-performance computing (HPC) cluster RA.Mines.Edu to offer a new dimension of capability to research in the energy sciences. This facility is a national hub for computational inquiries aimed at the discovery of new ways to meet the energy needs of our society. As the performance of leadership computing facilities continues to advance, it will become increasingly vital to invest in such discipline-specific nodes to bridge between the top-tier platforms and smaller clusters.

Acknowledgment

This research was supported in part by the Golden Energy Computing Organization at the Colorado School of Mines using resources acquired with financial assistance from the National Science Foundation and the National Renewable Energy Laboratory.

Hardware

  • 2,144 processing cores in 268 nodes (see the breakdown below the list)
    • 256 nodes with 512 Clovertown E5355 processors (2.67 GHz)
      • (dual-socket, quad-core)
      • 184 with 16 Gbytes & 72 with 32 Gbytes
    • 12 nodes with 48 Xeon 7140M processors (3.4 GHz)
      • (quad-socket, dual-core)
      • 32 Gbytes each
  • Memory
    • 5,632 Gbytes RAM (5.6 terabytes)
    • 300 terabytes of disk
    • 16 or 32 gigabytes of RAM per node
  • Performance
    • 17 teraflop sustained performance
    • 23 teraflop peak performance
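
The totals above follow directly from the node counts: 256 nodes × 8 cores + 12 nodes × 8 cores = 2,144 cores, and (184 × 16 Gbytes) + (72 × 32 Gbytes) + (12 × 32 Gbytes) = 5,632 Gbytes of RAM.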

Admin Nodes: 2 total

  • PE2950 servers
  • E5335 quad-core Xeons

Management Server: 1 total

  • PE1950 server
  • 5130 dual-core Xeons

Nodes/Footprint: Dual- and quad-socket nodes housed in 12 cabinets

Interconnect: InfiniBand (IB), DDR. The fabric is built on a Cisco SFS 7024 IB Server Switch (48-port standard configuration, expandable with additional 12-port cards to support up to 288 ports of 4X DDR IB host channel adapters). Each compute node hosts one DDR 1-port HCA (4X IB, PCIe x8, 0 MB, tall bracket) [LINUX]. The InfiniBand-to-Fibre Channel gateway is provided by Data Direct Networks. Measured performance for processes running on different nodes, using an MPI "ping pong" program (ppong.c), is 1.3 Gbytes/second with a latency of 3.46 microseconds; measurements were taken with the machine fully loaded. The administrative network is provisioned by Gigabit Ethernet resident on each motherboard and Dell PowerConnect 6248 managed GigE switches.
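
The figures above come from a simple two-rank ping-pong measurement: two processes on different nodes bounce a message back and forth, and the averaged round-trip time gives both latency and bandwidth. The sketch below illustrates the technique only; it is not the site's ppong.c, and the message size, repetition count, and output format are assumptions chosen for illustration.

    /* Minimal MPI ping-pong sketch (illustrative; not the site's ppong.c).
     * Rank 0 and rank 1 exchange a buffer repeatedly; the averaged one-way
     * time yields latency (small messages) and bandwidth (large messages). */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        const int reps   = 100;        /* assumed repetition count */
        const int nbytes = 1 << 20;    /* assumed 1 MiB message; vary to probe latency vs. bandwidth */
        char *buf = malloc(nbytes);
        int rank, size;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        if (size < 2) {
            if (rank == 0) fprintf(stderr, "run with at least 2 ranks\n");
            MPI_Finalize();
            return 1;
        }

        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();
        for (int i = 0; i < reps; i++) {
            if (rank == 0) {
                MPI_Send(buf, nbytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, nbytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(buf, nbytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                MPI_Send(buf, nbytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        double t1 = MPI_Wtime();

        if (rank == 0) {
            double one_way = (t1 - t0) / (2.0 * reps);  /* seconds per one-way transfer */
            printf("one-way time %g s, bandwidth %g Gbytes/s\n",
                   one_way, nbytes / one_way / 1.0e9);
        }

        free(buf);
        MPI_Finalize();
        return 0;
    }

Compiled with an MPI C compiler wrapper (e.g., mpicc) and launched with the two ranks placed on different compute nodes, the reported one-way time approximates the interconnect latency for small messages and approaches the quoted 1.3 Gbytes/second for large ones.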

Disk Storage: Data Direct Networks

  • Scratch ~200 TB raw, primary ~100 TB raw, tape ~300 TB raw
  • Two S2A9550s with ten 48-bay JBODs and 400 750 GB SATA drives
  • 300 TB raw, 240 TB usable
  • Bandwidth is between 3 and 4 GB/sec, depending on I/O
  • The Lustre file system is implemented and supported by DDN

Tape Archive:

  • 300 TB
  • PowerVault ML6030
  • The tape archive is used to backup HOME directories.
  • Scratch directories are not backed up.

Questions

Contact Dr. Timothy Kaiser at tkaiser@mines.edu.