High Performance Storage System

Incremental Scalability
Based on storage needs and deployment schedules, HPSS scales incrementally by adding computer, network and storage resources. A single HPSS namespace can scale from petabytes of data to exabytes of data, from millions of files to billions of files, and from a few file-creates per second to thousands of file-creates per second.
[Figure: HPSS Big Picture - user interfaces]

Cloud Storage Broker

The Cloud Storage Broker service is the newest interface under development by the HPSS Collaboration. It is used to store, protect, and error-correct project datasets across a wide variety of classic and cloud storage products and services. The SC17 demo showcased storing and retrieving a dataset of 1,500 one-megabyte files between Spectrum Scale and LTFS-formatted LTO-6 tape. Dataset transfers exceeded 280 MB/s across two data tapes protected by a single parity tape, with each LTO-6 drive exceeding 140 MB/s.
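
The parity arithmetic behind the demo's two-data-plus-one-parity tape layout is simple to illustrate. Below is a minimal Python sketch of single-parity (XOR) protection; it shows the idea only and is not Cloud Storage Broker code.

    # Single-parity protection across equal-length stripes: the parity
    # stripe is the XOR of the data stripes, so any one lost stripe can
    # be rebuilt from the survivors.
    def xor_parity(stripe_a: bytes, stripe_b: bytes) -> bytes:
        assert len(stripe_a) == len(stripe_b)
        return bytes(a ^ b for a, b in zip(stripe_a, stripe_b))

    data_a = b"dataset stripe on tape A"
    data_b = b"dataset stripe on tape B"
    parity = xor_parity(data_a, data_b)

    # If tape B is lost, its stripe is recoverable from tape A plus parity.
    assert xor_parity(data_a, parity) == data_b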

SwiftOnHPSS

SwiftOnHPSS for OpenStack Swift is an S3 interface for HPSS that supports automatic class of service (COS) selection, automatic HPSS end-to-end data integrity using OpenStack Swift MD5 object checksums, and shared access to Swift objects by other HPSS interfaces.
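
As a hedged sketch of how the checksum handoff looks from a client's point of view, the following uses python-swiftclient to upload an object with its MD5 passed as the etag, which Swift verifies on ingest; the endpoint, credentials, and container name are hypothetical.

    # Store an object through a Swift endpoint (such as SwiftOnHPSS),
    # letting Swift verify the MD5 end to end via the etag.
    import hashlib
    from swiftclient.client import Connection

    conn = Connection(
        authurl="https://swift.example.org/auth/v1.0",  # hypothetical endpoint
        user="project:user",                            # hypothetical credentials
        key="secret",
    )

    payload = open("results.dat", "rb").read()
    md5 = hashlib.md5(payload).hexdigest()

    # Swift rejects the upload if the received bytes do not match the etag.
    conn.put_object("climate-runs", "results.dat", contents=payload, etag=md5)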

Spectrum Scale

Intended for HPC use, HPSS can be coupled with Spectrum Scale (previously named GPFS) to automatically: copy files from Spectrum Scale to HPSS; purge Spectrum Scale files that are not being used when space thresholds are reached; recall files from HPSS when accessed by Spectrum Scale users; and save a point-in-time snapshot of Spectrum Scale. HPSS for Spectrum Scale allows multiple Spectrum Scale file systems to be managed by a single HPSS.
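
The space-management cycle described above can be pictured with a short conceptual sketch. The Python below mimics a threshold-driven purge pass over a mount point; the paths, thresholds, and the truncate stand-in for stub files are illustrative assumptions, not the product's actual mechanism.

    # Conceptual purge pass: once the file system crosses a high-water
    # mark, release the data blocks of the coldest files (which HPSS
    # already holds copies of) until a low-water mark is reached.
    import os
    import shutil

    FILESET = "/gpfs/fs1"   # hypothetical Spectrum Scale mount
    HIGH_WATER = 0.90       # start purging above 90% full
    LOW_WATER = 0.80        # stop purging below 80% full

    def usage(path: str) -> float:
        total, used, _free = shutil.disk_usage(path)
        return used / total

    def coldest_first(root: str):
        files = (os.path.join(d, f) for d, _, fs in os.walk(root) for f in fs)
        return sorted(files, key=lambda p: os.stat(p).st_atime)

    if usage(FILESET) > HIGH_WATER:
        for path in coldest_first(FILESET):
            os.truncate(path, 0)  # stand-in for "punch out data, keep the stub"
            if usage(FILESET) < LOW_WATER:
                break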

FUSE

Linux applications benefit from a near-POSIX standard read-write file system interface. This interface enables HPSS to be mounted as a Linux file system in user space (FUSE). Customers are using HPSS FUSE with OpenSSL, OpenStack, Samba, NFS, and Apache.
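
Because the mount looks like an ordinary file system, unmodified tools and scripts work against it. A minimal sketch, assuming a hypothetical HPSS FUSE mount at /mnt/hpss:

    # Plain POSIX I/O against an HPSS FUSE mount; no HPSS-specific API.
    import shutil

    HPSS_MOUNT = "/mnt/hpss"  # hypothetical mount point

    # Archive a file into HPSS with an ordinary copy...
    shutil.copy("/tmp/experiment.h5", f"{HPSS_MOUNT}/projects/experiment.h5")

    # ...and read it back the same way.
    with open(f"{HPSS_MOUNT}/projects/experiment.h5", "rb") as f:
        header = f.read(8)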

pFTP & FTP

The high-performance Parallel FTP (PFTP) interface moves files in and out of HPSS at high data rates. Both standard FTP commands and parallel PFTP commands are supported.
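
Since standard FTP is supported, any stock FTP client can talk to HPSS. The sketch below uses Python's ftplib with a hypothetical host and credentials; it exercises plain FTP, not the parallel PFTP protocol.

    # Store and retrieve a file over standard FTP.
    from ftplib import FTP

    ftp = FTP("hpss.example.org")            # hypothetical HPSS FTP gateway
    ftp.login(user="alice", passwd="secret") # hypothetical credentials

    with open("model.tar", "rb") as f:
        ftp.storbinary("STOR model.tar", f)        # put into HPSS

    with open("model-copy.tar", "wb") as f:
        ftp.retrbinary("RETR model.tar", f.write)  # get it back

    ftp.quit()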

HSI & HTAR

The Hierarchical Storage Interface (HSI) provides a familiar UNIX shell-style interface for managing and transferring files, and HPSS parallel file transfers are handled automatically. HTAR is a utility for storing groups of files using the POSIX TAR specification, with a high-performance multithreaded buffering scheme that transfers files directly to and from HPSS. Online documentation is available for both HSI and HTAR.
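
Both tools are commonly driven from scripts. The sketch below shells out to hsi and htar from Python; it assumes both clients are installed and already authenticated, and the paths are illustrative (check the HSI and HTAR documentation for exact command syntax).

    # Script-driven archiving with HSI and HTAR.
    import subprocess

    # Store one file into HPSS; HSI's "local : HPSS" naming picks the
    # destination path.
    subprocess.run(["hsi", "put run42.nc : experiments/run42.nc"], check=True)

    # Bundle a directory of small files into a single HPSS-resident TAR.
    subprocess.run(["htar", "-cvf", "experiments/run42_logs.tar", "logs/"],
                   check=True)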

API & PIO

The Client API is the most powerful interface in terms of control, performance, and rich functionality. The HPSS Client API is the foundation of every HPSS interface, and customers have used it to port open source applications to HPSS.
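
To picture what the PIO interface adds, the sketch below copies a file in fixed-size stripes with a pool of worker threads. It is a conceptual illustration of striped parallel transfer only, using local files and the Python standard library; it is not the actual HPSS Client API or PIO calls.

    # Conceptual striped transfer: split the file into stripes and move
    # them concurrently, the way PIO fans a transfer across movers.
    from concurrent.futures import ThreadPoolExecutor
    import os

    STRIPE = 8 * 1024 * 1024  # 8 MiB stripe width (illustrative)

    def copy_stripe(src: str, dst: str, offset: int, length: int) -> None:
        with open(src, "rb") as s, open(dst, "r+b") as d:
            s.seek(offset)
            d.seek(offset)
            d.write(s.read(length))

    def parallel_copy(src: str, dst: str, width: int = 4) -> None:
        size = os.path.getsize(src)
        with open(dst, "wb") as d:  # pre-size the destination
            d.truncate(size)
        with ThreadPoolExecutor(max_workers=width) as pool:
            futures = [
                pool.submit(copy_stripe, src, dst, off, min(STRIPE, size - off))
                for off in range(0, size, STRIPE)
            ]
            for f in futures:
                f.result()  # surface any stripe errors

    parallel_copy("big_input.dat", "big_output.dat")
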
HPSS 9.1 Client Interface Compatibility Matrix

OS            | Swift | Scale | FUSE¹ | pFTP & FTP | HSI² Gateway | API³ | PIO⁴
--------------+-------+-------+-------+------------+--------------+------+------
Ubuntu 18.04  |       |       |       |     X      |              |  X   |  X
SLES 15       |       |       |       |     X      |              |  X   |  X
RHEL 7 & 8    |       |   X   |   X   |     X      |      X       |  X   |  X
  1. FUSE servers are available on Red Hat Enterprise Linux 64-bit kernels.
  2. The HSI gateway is available on Red Hat Enterprise Linux 64-bit kernels; HSI and HTAR clients run on a number of platforms.
  3. HPSS user interface client support for operating systems not listed in the table above may be provided by special services. See HPSS Offerings for details, and contact us.
  4. The PIO API requires the Client API.


Come meet with us!
HPSS @ STS 2021
The 3rd Annual Storage Technology Showcase is in the planning stage, but HPSS expects to support the event in March of 2021. Check out their web site - Learn More. We expect an update later in 2020.

HPSS @ ISC21
The 2021 international conference for high performance computing, networking, and storage will be in Frankfurt, Germany from June 27th through July 1st, 2021 - Learn More. Come visit the HPSS folks at the IBM booth and contact us if you would like to schedule a face-to-face meeting with us in Frankfurt.

2021 HUF
The 2021 HPSS User Forum (HUF) is being hosted by Karlsruhe Institute of Technology (KIT) in Karlsruhe, Germany from September 27th through September 30th, 2021. This is a great place to meet HPSS users, collaboration developers and testers (from IBM and DOE Labs), support folks, and leadership. More details coming soon.

HPSS @ SC21 - VIRTUAL
The 2021 international conference for high performance computing, networking, storage and analysis will be in St. Louis, MO from November 15th through 18th, 2021 - Learn More. As we do each year, we are scheduling and meeting with customers via IBM Single Client Briefings. Please contact your local IBM client executive or contact us to schedule an HPSS Single Client Briefing with the IBM business and technical leaders of HPSS.

What's New?
Atos Press Release - Atos boosts Météo-France’s data storage capacity to over 1 exabyte in 2025 to improve numerical modeling and climate predictions. Want to read more?

HUF 2020 - The HPSS User Forum was hosted virtually at no cost in October 2020.

HPSS 9.1 Release - HPSS 9.1 was released on September 24th, 2020 and introduces a few new features.

HPSS 8.3 Release - HPSS 8.3 was released on March 31st, 2020 and introduces one new feature and many minor changes.

HPSS 8.2 Release - HPSS 8.2 was released on December 6th, 2019 and introduces a few new features.

New Globus DSI - Version 2.9 of the HPSS DSI is now available from the GitHub release page. It provides the capability to resume interrupted Globus transfers.

Lots Of Data - In November 2019 IBM/HPSS delivered a system to a customer in Canada and demonstrated a sustained tape ingest rate of 11,574 MB/sec (1 PB/day peak tape ingest) while simultaneously demonstrating a sustained tape recall rate of 8,832 MB/sec (791 TB/day peak tape recall). HPSS pushed four 13-frame IBM TS4500 tape libraries (scheduled to house over 500 PB of tape media) to 2,168 mounts/hour.

HPSS 8.1 Release - HPSS 8.1 was released on October 1st, 2019 and introduces a few new features.

July 2019 - Argonne Team Breaks Record for Globus Data Movement from the Summit supercomputer at Oak Ridge National Laboratory to HPSS tape.

Capacity Leader - ECMWF (European Centre for Medium-Range Weather Forecasts) has a single HPSS namespace with over 556 PB spanning over 405 million files.

File-Count Leader - LLNL (Lawrence Livermore National Laboratory) has a single HPSS namespace with over 63 PB spanning 1.525 billion files.

Explosive data growth - HPSS Collaboration leadership from Lawrence Berkeley National Laboratory's National Energy Research Scientific Computing Center (NERSC) helped author the "NERSC Storage 2020" report, and NERSC trusts HPSS to meet their immediate and long term data storage challenges.

Older News - Want to read more?
Copyright 1992 - 2020, HPSS Collaboration. All Rights Reserved.