RAID Inc. was recently awarded a contract to provide Lawrence Livermore National Laboratory (LLNL) with a custom parallel file system solution for its unclassified computing environment.
RAID will deliver a 17 PB file system able to sustain up to 180 GB/s. This high-performance, cost-effective solution is designed to meet LLNL's current and future demands for parallel-access data storage.
Complete Press Release
Lawrence Livermore National Laboratory selects RAID Inc. for a new parallel file system in its high-performance computing center.
1. The parallel file system will run Lustre 2.8 with ZFS OSDs and multiple metadata servers.
2. The Lustre file system comprises 36 object storage server (OSS) nodes, each capable of 5 GB/s of sustained data throughput, and 16 metadata servers with 25 TB of SSD storage capacity.
3. The solution is anchored by enterprise-class 4U 84-bay 12G SAS JBODs, LSI/Avago 12G SAS adapters, Mellanox EDR InfiniBand, HGST 12G enterprise SAS disk drives, and Intel server technologies.
4. The file system incorporates six scalable storage units, each containing six Lustre OSS nodes and six 4U 84-bay JBODs holding 480 8 TB SAS drives. The solution employs ZFS on Linux with raidz2 data parity protection. Resiliency is provided by multipath and high-availability failover connectivity, intended to eliminate single points of failure.
5. Additional software manipulates tunable disk settings in the same way RAID controller manufacturers fine-tune disk firmware for their enclosures. It not only squeezes every bit of performance out of the drives but also provides extensive diagnostic reporting, catching and potentially fixing problems long before they affect data flow and integrity.
6. LLNL’s HPC facility consists of numerous computer platforms and file systems spanning multiple buildings and operating at multiple security levels.
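The headline figures above are internally consistent, and a short arithmetic sketch shows how they fit together. The raidz2 vdev width below is an assumption for illustration (the release does not state it), so the usable-capacity figure is only an estimate:

```python
# Back-of-the-envelope check of the figures quoted in the release.
# Assumption: 10-disk raidz2 vdevs (8 data + 2 parity); the release
# does not specify vdev width, so usable capacity is an estimate.

OSS_NODES = 36          # Lustre object storage servers
GBPS_PER_OSS = 5        # sustained GB/s per OSS (from the release)
SSUS = 6                # scalable storage units
DRIVES_PER_SSU = 480    # drives spread across six 84-bay JBODs per SSU
DRIVE_TB = 8            # 8 TB HGST SAS drives

aggregate_gbps = OSS_NODES * GBPS_PER_OSS
raw_pb = SSUS * DRIVES_PER_SSU * DRIVE_TB / 1000

# A 10-wide raidz2 vdev keeps 8/10 of raw capacity, before ZFS
# metadata, spares, and reserved space are subtracted - which is
# in line with the 17 PB usable figure quoted above.
usable_pb = raw_pb * 8 / 10

print(f"aggregate bandwidth: {aggregate_gbps} GB/s")  # 180 GB/s
print(f"raw capacity: {raw_pb:.1f} PB")               # 23.0 PB
print(f"usable (raidz2 est.): {usable_pb:.1f} PB")    # 18.4 PB
```

The 36 OSS nodes at 5 GB/s each account exactly for the 180 GB/s aggregate; the gap between the ~18.4 PB parity-adjusted estimate and the 17 PB delivered is the usual ZFS reservation and formatting overhead.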