High Performance Storage System

High Performance Storage System (HPSS) is a flexible, scalable, policy-based, software-defined Hierarchical Storage Management product developed by the HPSS Collaboration. It provides scalable hierarchical storage management (HSM), archive, and file system services using cluster, LAN, and SAN technologies to aggregate the capacity and performance of many computers, disks, disk systems, tape drives, and tape libraries.[1]

High Performance Storage System
Developer(s): HPSS Collaboration (IBM, LANL, LBNL, LLNL, ORNL, SNL)
Stable release: 10.3 / September 2023
Operating system: Linux
Type: Hierarchical Storage Management
License: Proprietary
Website: hpss-collaboration

Architecture

HPSS supports a variety of methods for accessing and creating data. These include FTP, parallel FTP, FUSE (Linux), and a robust client API with support for parallel I/O.
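Because the FUSE interface presents HPSS as an ordinary POSIX filesystem, standard file I/O works against it unchanged. A minimal sketch (the HPSS mount point is hypothetical; a temporary directory stands in for it here so the example is self-contained):

```python
import os
import tempfile

# In production this would be the HPSS FUSE mount point, e.g. "/hpss"
# (path is hypothetical); a temp dir stands in for it in this demo.
mount_point = tempfile.mkdtemp()

# Ordinary POSIX file I/O: on a real mount, the FUSE layer translates
# these calls into HPSS client-API operations behind the scenes.
path = os.path.join(mount_point, "archive.dat")
with open(path, "wb") as f:
    f.write(b"simulation output")

with open(path, "rb") as f:
    data = f.read()
```

Applications that already read and write POSIX files therefore need no code changes to use HPSS through FUSE; only the path changes.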

As of version 7.5, HPSS is fully supported on Linux. The HPSS client API is supported on AIX, Linux, and Solaris.[1]

The implementation is built around IBM's Db2, a scalable relational database management system.

The HPSS Collaboration

In early 1992, four United States Department of Energy (DOE) national laboratories, Lawrence Livermore (LLNL), Los Alamos (LANL), Oak Ridge (ORNL), and Sandia (SNL), joined with IBM to form the National Storage Laboratory (NSL).[2] The NSL's purpose was to commercialize software and hardware technologies that would overcome computing and data storage bottlenecks.[3] The NSL's research on data storage gave birth to the collaboration that produces HPSS. This collaboration began in the fall of 1992[4] and involved IBM's Houston Global Services and five DOE national labs (Lawrence Berkeley [LBL], LLNL, LANL, ORNL, and SNL).[1] At that time, the HPSS design team at the DOE national laboratories and IBM recognized that computing power rising from teraflops toward petaflops would drive a data storage explosion: data stored in HSMs would grow to petabytes and beyond, transfer rates to and from the HSM would need to reach gigabytes per second and higher, and daily throughput with an HSM would reach tens of terabytes per day. The collaboration therefore set out to design and deploy a system that would scale by a factor of 1,000 or more and evolve from that base toward these expected targets and beyond.[5]

The HPSS collaboration is based on the premise that no single organization has the experience and resources to meet all the challenges represented by the growing imbalance between computing power and data collection capabilities on the one hand, and storage system I/O, capacity, and functionality on the other. Over twenty organizations worldwide, including industry, the US Department of Energy (DOE), other federal laboratories, universities, National Science Foundation (NSF) supercomputer centers, the French Commissariat à l'Énergie Atomique (CEA), and Gleicher Enterprises, have contributed to various aspects of this effort.

As of 2022, the primary HPSS development team consists of IBM and five DOE national laboratories: LANL, LBNL, LLNL, ORNL, and SNL.

Notable achievements

References

  1. ^ a b c "Official HPSS Collaboration Website". IBM.
  2. ^ "High Performance Storage System Taking the Long View". str.llnl.gov. Retrieved 2023-03-29.
  3. ^ Watson, R.W.; Coyne, R.A. (June 1994). "The National Storage Laboratory (NSL): Overview and status". Proceedings Thirteenth IEEE Symposium on Mass Storage Systems. Toward Distributed Storage and Data Management Systems. pp. 39–43. doi:10.1109/MASS.1994.373025. ISBN 0-8186-5580-1. S2CID 206444692.
  4. ^ "HPSS at LLNL". LLNL.
  5. ^ a b "Largest HPSS Sites, 1+ petabytes".
  6. ^ HPCWire, November 15, 2007. Archived November 17, 2007, at the Wayback Machine.
  7. ^ "NCSA puts world's largest High Performance Storage System into production". 2013-05-30. Retrieved 2014-08-30.