GPFS Storage Server and Elastic Storage Server

Software Defined Storage Building Block Architecture for GPFS

Designed to meet the most demanding Big Data and HPC applications, Perseus is a software-defined storage solution that uses standard commodity hardware and implements key storage and management functions, such as information lifecycle management (ILM), disk caching, snapshots, replication, striping, and clustering, through intelligent software.

Key Features
  • Single, integrated storage solution
  • Built on the proven, widely deployed General Parallel File System (GPFS) software
  • High-capacity, scalable building-block approach: performance and capacity increase as you add multiple building blocks
  • Unique, innovative GPFS Native RAID capability provides extreme data integrity, reduced latency, faster rebuild times, and enhanced data protection
  • Cost competitive

Perseus combines the performance of System x (x86) or POWER8 servers with the GPFS file system to offer a high-performance, scalable building-block approach to modern storage needs. The solution is factory-integrated and comes with a comprehensive warranty covering all components. The core of the GSS/ESS is the GPFS file system, which provides unmatched performance and reliability, with scalable access to critical file data, storage management, ILM tools, centralized administration, and shared access.

Perseus GSS combines GPFS Native RAID technology with commodity JBODs and x86 or POWER8 servers into a storage building block that scales in both capacity and performance, all at an affordable price.

Key Features of GPFS Native RAID

  • Software RAID: GPFS Native RAID runs under Linux on standard disks in a dual-ported JBOD array and requires no external RAID storage controllers or other custom hardware RAID acceleration.
  • Declustering: GPFS Native RAID distributes client data, redundancy information, and spare space uniformly across all disks of a JBOD. This distribution reduces the overhead of rebuild (the disk-failure recovery process) compared to conventional RAID; see the first sketch after this list.
  • Checksum: An end-to-end data integrity check, using checksums and version numbers, is maintained between the disk surface and NSD clients. The checksum algorithm uses version numbers to detect silent data corruption and lost disk writes; see the second sketch after this list.
  • Data redundancy: GPFS Native RAID supports highly reliable 2-fault-tolerant and 3-fault-tolerant Reed-Solomon parity codes as well as 3-way and 4-way replication; see the capacity comparison after this list.
  • Large cache: A large cache improves read and write performance, particularly for small I/O operations.
  • Arbitrarily sized disk arrays: The number of disks is not restricted to a multiple of the RAID redundancy code width, which allows flexibility in the number of disks in the RAID array.
  • Multiple redundancy schemes: One disk array can support vdisks with different redundancy schemes, for example Reed-Solomon and replication codes.
  • Disk hospital: A disk hospital asynchronously diagnoses faulty disks and paths and, drawing on past health records, requests disk replacement.
  • Automatic recovery: Seamlessly and automatically recovers from primary server failure.
  • Disk scrubbing: A disk scrubber automatically detects and repairs latent sector errors in the background.
  • Familiar interface: Standard GPFS command syntax is used for all configuration commands, including those for maintaining and replacing failed disks.
  • Configuration and data logging: Internal configuration and small-write data are automatically logged to solid-state disks for improved performance.
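
A small simulation makes the declustering item concrete. The Python sketch below is a simplified, hypothetical model: the disk count, stripe width, and pseudo-random placement are assumptions for illustration, not the actual GPFS Native RAID layout algorithm. It spreads stripes across a whole JBOD and then measures how evenly rebuild work is shared when one disk fails.

    import itertools
    import random
    from collections import Counter

    NUM_DISKS = 58       # assumed JBOD enclosure size
    STRIPE_WIDTH = 10    # assumed 8 data strips + 2 parity strips
    NUM_STRIPES = 10_000

    # Each stripe draws its disks pseudo-randomly from the whole JBOD,
    # so no pair of disks is "special" the way it is in a fixed RAID group.
    rng = random.Random(42)
    stripes = [rng.sample(range(NUM_DISKS), STRIPE_WIDTH)
               for _ in range(NUM_STRIPES)]

    # If disk 0 fails, every stripe that touched it must be rebuilt by
    # reading the surviving strips. Count that read load per disk.
    load = Counter(itertools.chain.from_iterable(
        set(s) - {0} for s in stripes if 0 in s))

    print(f"stripes touching the failed disk: "
          f"{sum(1 for s in stripes if 0 in s)}")
    print(f"rebuild reads per surviving disk: "
          f"min={min(load.values())}, max={max(load.values())}")
    # Every surviving disk contributes a small, nearly equal share of the
    # rebuild, instead of a few disks streaming their entire contents.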
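The checksum item rests on a subtlety worth spelling out: a lost disk write leaves behind old data whose checksum is still internally valid, so a checksum alone cannot catch it. The hypothetical sketch below shows why the version number closes that gap; the block layout, field names, and CRC32 choice are illustrative, not the actual on-disk format.

    import zlib
    from dataclasses import dataclass

    @dataclass
    class Block:
        data: bytes
        version: int
        checksum: int

    def write_block(data: bytes, version: int) -> Block:
        # The checksum covers the payload AND its version, so stale data
        # left by a dropped write cannot pass as the current generation.
        return Block(data, version,
                     zlib.crc32(data + version.to_bytes(8, "big")))

    def verify(block: Block, expected_version: int) -> bool:
        ok = block.checksum == zlib.crc32(
            block.data + block.version.to_bytes(8, "big"))
        return ok and block.version == expected_version

    old = write_block(b"generation 1", version=1)
    # A version-2 overwrite is silently dropped by the drive; the reader
    # still finds internally consistent version-1 data on the platter.
    print(verify(old, expected_version=1))  # True:  old data self-consistent
    print(verify(old, expected_version=2))  # False: lost write detected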
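The capacity trade-off behind the data redundancy item reduces to simple arithmetic, sketched below. The 8+2p and 8+3p stripe geometries match how GPFS Native RAID's Reed-Solomon codes are commonly described, but are taken here as assumptions.

    def usable_fraction(data_strips: int, parity_strips: int) -> float:
        return data_strips / (data_strips + parity_strips)

    schemes = {
        "8+2p Reed-Solomon (2-fault-tolerant)": usable_fraction(8, 2),
        "8+3p Reed-Solomon (3-fault-tolerant)": usable_fraction(8, 3),
        "3-way replication (2-fault-tolerant)": usable_fraction(1, 2),
        "4-way replication (3-fault-tolerant)": usable_fraction(1, 3),
    }
    for name, frac in schemes.items():
        print(f"{name}: {frac:.0%} of raw capacity holds data")
    # 80% / ~73% usable for the parity codes versus 33% / 25% for
    # replication, at the same fault tolerance.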