High Performance Storage System

Incremental Scalability
Based on storage needs and deployment schedules, HPSS scales incrementally by adding computer, network and storage resources. A single HPSS namespace can scale from petabytes of data to exabytes of data, from millions of files to billions of files, and from a few file-creates per second to thousands of file-creates per second.

User Interfaces

HPSS Big Picture - user interfaces

HPSS Storage Broker

HPSS Storage Broker is used to store, protect, and error-correct project datasets across a wide variety of archive storage, including public and private S3 object stores, file systems, and HPSS. The limited-availability release of HPSS Storage Broker was provided to HPSS customers upon request in 4Q 2020, and it will be made generally available in 3Q 2021.

SwiftOnHPSS

SwiftOnHPSS for OpenStack Swift is an S3 interface for HPSS that supports automatic class of service (COS) selection, automatic HPSS end-to-end data integrity using OpenStack Swift MD5 object checksums, and shared access to Swift objects by other HPSS interfaces.
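
For illustration, a minimal upload through an S3 client might look like the sketch below, written with the AWS SDK for Python (boto3). The endpoint URL, credentials, bucket, and object names are placeholders rather than part of any particular HPSS deployment, and the Content-MD5 header simply mirrors the MD5-based end-to-end integrity behavior described above.

    # Illustrative sketch only: endpoint, credentials, bucket, and keys are
    # placeholders, not part of any specific SwiftOnHPSS deployment.
    import base64
    import hashlib

    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="https://swift.example.org",   # hypothetical S3 endpoint
        aws_access_key_id="PROJECT_KEY",
        aws_secret_access_key="PROJECT_SECRET",
    )

    with open("results.h5", "rb") as f:
        data = f.read()

    # Sending the MD5 with the upload lets the object store verify the payload
    # on receipt, matching the checksum-based integrity behavior noted above.
    s3.put_object(
        Bucket="climate-project",
        Key="run-042/results.h5",
        Body=data,
        ContentMD5=base64.b64encode(hashlib.md5(data).digest()).decode("ascii"),
    )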

Spectrum Scale

Intended for HPC use, HPSS can be coupled with Spectrum Scale (previously named GPFS) to automatically: copy files from Spectrum Scale to HPSS; purge Spectrum Scale files that are not being used when space thresholds are reached; recall files from HPSS when accessed by Spectrum Scale users; and save a point-in-time snapshot of Spectrum Scale. HPSS for Spectrum Scale allows multiple Spectrum Scale file systems to be managed by a single HPSS.
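
The space-management cycle described above (copy files to HPSS, purge cold Spectrum Scale files once a space threshold is crossed, recall them when accessed) can be pictured with the short conceptual sketch below. It is not HPSS for Spectrum Scale code; it only illustrates threshold-driven, least-recently-used purging with made-up data structures.

    # Conceptual illustration of threshold-driven purging; not HPSS or
    # Spectrum Scale code.
    from dataclasses import dataclass

    @dataclass
    class FileState:
        path: str
        size: int            # bytes
        last_access: float   # epoch seconds
        migrated: bool       # True if a copy already exists in HPSS

    def purge_candidates(files, used, capacity, high=0.90, low=0.75):
        """Once usage crosses the high-water mark, pick already-migrated,
        least-recently-used files to release until usage would fall back to
        the low-water mark; purged files are recalled from HPSS on access."""
        if used < high * capacity:
            return []
        to_free = used - low * capacity
        freed, picks = 0, []
        for f in sorted(files, key=lambda f: f.last_access):
            if not f.migrated:
                continue          # never purge data whose only copy is on disk
            picks.append(f)
            freed += f.size
            if freed >= to_free:
                break
        return picks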

FUSE

Linux applications benefit from a near-POSIX, read-write file system interface that allows HPSS to be mounted as a Linux file system in user space (FUSE). Customers are using HPSS FUSE with OpenSSH (encrypted file transfer solution), MinIO (S3 object storage solution), OpenStack (object storage solution), Samba (MS Windows file sharing), NFS (POSIX file sharing), DSpace (RESTful open digital repository solution), and Bacula (site backup solution).
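
Because HPSS FUSE presents a near-POSIX mount, ordinary file code works unchanged. The sketch below assumes an administrator has already mounted an HPSS file system at the hypothetical path /hpss; everything else is standard Python file handling.

    # Plain POSIX-style I/O against a hypothetical HPSS FUSE mount at /hpss.
    import shutil
    from pathlib import Path

    run_dir = Path("/hpss/projects/climate/run-042")   # placeholder path
    run_dir.mkdir(parents=True, exist_ok=True)

    # Copy a local file into HPSS exactly as with any other file system.
    shutil.copy2("results.h5", run_dir / "results.h5")

    # Reads may trigger a tape recall inside HPSS, so latency can be high,
    # but the calling code is ordinary file access.
    with open(run_dir / "results.h5", "rb") as f:
        header = f.read(4096)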

pFTP & FTP

The high-performance Parallel FTP (PFTP) interface moves files in and out of HPSS at high data rates. Both standard FTP and parallel PFTP commands are supported.
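
A standard FTP client is enough for basic transfers. The sketch below uses Python's ftplib against a hypothetical HPSS FTP endpoint; the host name, login, and paths are placeholders, and high-rate parallel transfers would instead use the HPSS parallel FTP client.

    # Standard FTP transfers to and from a hypothetical HPSS FTP server.
    from ftplib import FTP

    with FTP("hpss.example.org") as ftp:            # placeholder host
        ftp.login(user="alice", passwd="secret")    # placeholder credentials
        ftp.cwd("/home/alice/run-042")

        # Upload a file into HPSS.
        with open("results.h5", "rb") as f:
            ftp.storbinary("STOR results.h5", f)

        # Download a file back out of HPSS.
        with open("inputs.nc", "wb") as f:
            ftp.retrbinary("RETR inputs.nc", f.write)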

HSI & HTAR

The Hierarchical Storage Interface (HSI) provides a familiar UNIX shell-style interface for managing and transferring files; HPSS parallel file transfers are handled automatically. HTAR is a utility that stores groups of files using the POSIX TAR specification, with a high-performance multithreaded buffering scheme that transfers files directly to and from HPSS. Online documentation for HSI and HTAR is available online.
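
HSI and HTAR are command-line tools, so scripts typically just shell out to them. The wrapper below assumes both commands are installed and the user is already authenticated to HPSS; the paths are placeholders, and the "put local : hpss_path" and "-cvf archive directory" forms follow standard HSI and HTAR usage.

    # Thin wrappers around the hsi and htar commands; assumes both are
    # installed and HPSS authentication is already in place.
    import subprocess

    def hsi_put(local_path: str, hpss_path: str) -> None:
        """Store one file in HPSS; HSI parallelizes the transfer automatically."""
        subprocess.run(["hsi", f"put {local_path} : {hpss_path}"], check=True)

    def htar_create(hpss_archive: str, directory: str) -> None:
        """Bundle a directory into a POSIX tar archive written directly to HPSS."""
        subprocess.run(["htar", "-cvf", hpss_archive, directory], check=True)

    hsi_put("results.h5", "/home/alice/run-042/results.h5")       # placeholder paths
    htar_create("/home/alice/run-042/logs.tar", "run-042/logs")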

API & PIO

The Client API is the most powerful interface in terms of control, performance, and functionality. The HPSS Client API is the foundation of every other HPSS interface, and customers have used it to port open source applications to HPSS. The Parallel I/O (PIO) API is layered on the Client API and lets applications transfer data to and from HPSS in parallel.
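
As a rough illustration of the Client API's POSIX-like flavor, the ctypes sketch below opens and reads a file through a shared libhpss library. Treat it as an assumption-laden sketch: it presumes the site's HPSS Client API library is installed and that authentication has already been configured for the process, and while hpss_Open, hpss_Read, and hpss_Close are Client API calls, exact signatures and required setup vary by HPSS release.

    # Assumption-laden sketch of calling the HPSS Client API via ctypes.
    # Requires a site-installed libhpss and pre-configured HPSS authentication;
    # argument lists follow the POSIX-like calls but may differ by release.
    import ctypes
    import os

    lib = ctypes.CDLL("libhpss.so")

    # hpss_Open(path, oflag, mode, hints_in, hints_priority, hints_out) -> fd
    fd = lib.hpss_Open(b"/home/alice/run-042/results.h5", os.O_RDONLY, 0,
                       None, None, None)
    if fd < 0:
        raise OSError("hpss_Open failed")

    buf = ctypes.create_string_buffer(1 << 20)           # 1 MiB read buffer
    nread = lib.hpss_Read(fd, buf, ctypes.sizeof(buf))   # analogous to read()
    lib.hpss_Close(fd)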

HPSS 9.1 Client Interface Compatibility Matrix

OS           | Swift | Scale | FUSE(1) | pFTP & FTP | HSI Gateway(2) | API(3) | PIO(4)
Ubuntu 18.04 |       |       |         | X          |                | X      | X
SLES 15      |       |       |         | X          |                | X      | X
RHEL 7 & 8   | X     | X     | X       | X          | X              | X      | X
  1. FUSE servers are available on Red Hat Enterprise Linux 64-bit kernels.
  2. The HSI gateway is available on Red Hat Enterprise Linux 64-bit kernels; HSI and HTAR clients run on a number of platforms.
  3. HPSS User Interface Client support for operating systems not listed in the table above may be provided by special services. See HPSS Offerings for offering details, and contact us.
  4. The PIO API requires the Client API.


Come meet with us!
2021 HUF - VIRTUAL
COVID-19 has disrupted the 2021 HPSS User Forum (HUF), and the Karlsruhe Institute of Technology (KIT) in Karlsruhe, Germany, is no longer hosting the event. The 2021 HUF will instead be hosted online, at no admission cost, across six days spread over three weeks in October 2021. This will be a great opportunity to hear from HPSS users, collaboration developers, testers, support staff, and leadership from IBM and the DOE labs - Learn More. Please contact us if you are not a customer but would like to attend.

HPSS @ SC21
The 2021 international conference for high performance computing, networking, storage and analysis will be held in St. Louis, MO from November 15th through 18th, 2021 - Learn More. As we do each year, we are scheduling meetings with customers via IBM Single Client Briefings. Please contact your local IBM client executive, or contact us, to schedule an HPSS Single Client Briefing with the IBM business and technical leaders of HPSS.

HPSS @ STS 2022
The 4th Annual Storage Technology Showcase is in the planning stage, but HPSS expects to support the event in March of 2022. Check out their web site - Learn More. We expect an update in early fall 2021.

HPSS @ MSST 2022
The 37th International Conference on Massive Storage Systems and Technology will be in Santa Clara, California in May of 2022 - Learn More. Please contact us if you would like to meet with the IBM business and technical leaders of HPSS at Santa Clara University.

What's New?
DOE Announces HPSS Milestone - Todd Heer, Deputy Program Lead, Advanced Simulation and Computing (ASC) Facilities, Operations, and User Support (FOUS), announced that DOE High Performance Storage Systems (HPSS) have eclipsed one exabyte of stored data.

Atos Press Release - Atos boosts Météo-France’s data storage capacity to over 1 exabyte in 2025 to improve numerical modeling and climate predictions. Want to read more?

HPSS 9.2 Release - HPSS 9.2 was released on May 11th, 2021 and introduces eight new features and numerous minor updates.

HPSS 9.1 Release - HPSS 9.1 was released on September 24th, 2020 and introduces a few new features.

HUF 2020 - The HPSS User Forum was hosted virtually at no cost in October 2020.

HPSS 8.3 Release - HPSS 8.3 was released on March 31st, 2020 and introduces one new feature and many minor changes.

Capacity Leader - ECMWF (European Centre for Medium-Range Weather Forecasts) has a single HPSS namespace with over 567 PB spanning over 399 million files.

File-Count Leader - LLNL (Lawrence Livermore National Laboratory) has a single HPSS namespace with over 65 PB spanning 1.540 billion files.

Older News - Want to read more?
HPSS collaboration and user site logos: LLNL, NERSC, ORNL, SANDIA, IBM, ANL, BNL, CEA, DKRZ, ECMWF, HLRS, IN2P3, IU, JAXA, KEK, NASA LaRC, NASA ASDC, UCAR, NOAA NCDC, NOAA, NCEP, PNNL, SLAC, MetOffice, SciNet, SSC, UTAS
Copyright 1992 - 2021, HPSS Collaboration. All Rights Reserved.