What is GPFS? Part 2 – High Performance I/O & NSDs

As a follow-up to our “What is GPFS?” Part 1 post, which discussed GPFS file management tools and its ability to connect different operating systems, Part 2 outlines how GPFS achieves high-performance I/O for unstructured data.

GPFS Achieves High Performance I/O Via

  • Striping data across multiple disks attached to multiple nodes (see the sketch after this list)
  • High performance metadata (inode) scans
  • Supporting a wide range of file system block sizes to match I/O requirements
  • Utilizing advanced algorithms to improve read-ahead and write-behind I/O operations
  • Using block-level locking based on a scalable token management system to provide data consistency while allowing multiple application nodes concurrent access to the same files
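
To make the striping idea concrete, here is a toy sketch in Python. It is purely illustrative and is not GPFS code: the 4 MiB block size, the NSD names, and the simple round-robin placement are assumptions chosen for the example, but they show how a large file's blocks can be spread across several disks so that sequential I/O is serviced by all of them in parallel.

    # Toy round-robin striping sketch (illustrative only, not GPFS internals).
    BLOCK_SIZE = 4 * 1024 * 1024                        # assumed 4 MiB file system block size
    DISKS = ["nsd001", "nsd002", "nsd003", "nsd004"]    # hypothetical NSD names

    def stripe_plan(file_size_bytes):
        """Return (block_index, disk) pairs showing where each block would land."""
        num_blocks = (file_size_bytes + BLOCK_SIZE - 1) // BLOCK_SIZE
        return [(block, DISKS[block % len(DISKS)]) for block in range(num_blocks)]

    # A 40 MiB file becomes 10 blocks spread evenly across the 4 disks.
    for block, disk in stripe_plan(40 * 1024 * 1024):
        print(f"block {block:2d} -> {disk}")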

GPFS NSDs

When creating a GPFS file system, you provide a list of raw devices, which are assigned to GPFS as Network Shared Disks (NSDs). Once an NSD is defined, every node in the GPFS cluster can access the disk, either through a local disk connection or through the GPFS NSD network protocol, which ships data over a TCP/IP or InfiniBand connection.
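
As a rough sketch of what that looks like in practice, the stanza file and commands below define two NSDs and create a small file system on them. The device paths, NSD and node names, failure groups, and the 4 MiB block size are hypothetical, so check the GPFS documentation for the exact options supported by your release.

    # nsd.stanza -- hypothetical devices, NSD names, and server nodes
    %nsd: device=/dev/sdb nsd=nsd001 servers=node01,node02 usage=dataAndMetadata failureGroup=1
    %nsd: device=/dev/sdc nsd=nsd002 servers=node02,node01 usage=dataAndMetadata failureGroup=2

    # Turn the raw devices into NSDs, then create and mount the file system
    mmcrnsd -F nsd.stanza
    mmcrfs gpfs01 -F nsd.stanza -B 4M -A yes
    mmmount gpfs01 -a

Because each NSD lists more than one server, nodes without a direct connection to the disk can still reach it over the NSD protocol through those servers.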

Stay tuned for our next installment, which will discuss GPFS cluster configurations.

Source: An Introduction to GPFS