High Performance Storage System

Incremental Scalability
Based on storage needs and deployment schedules, HPSS scales incrementally by adding computer, network and storage resources. A single HPSS namespace can scale from petabytes of data to exabytes of data, from millions of files to billions of files, and from a few file-creates per second to thousands of file-creates per second.
New to HPSS Version 10.1.0

Improved Spectra Logic TeraPack support

HPSS groups tape moves belonging to the same Spectra Logic TeraPack for improved mount and dismount performance.
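
The benefit is easiest to see as a simple batching problem. The toy sketch below (plain Python, not HPSS code, with an invented cartridge-to-TeraPack mapping) groups pending cartridge moves by the TeraPack that houses each cartridge, so each TeraPack is handled once per batch rather than once per cartridge.

    # Toy illustration only: batch tape moves by TeraPack so cartridges housed
    # in the same TeraPack are handled together, cutting mounts and dismounts.
    from collections import defaultdict

    pending_moves = ["A00001", "B00007", "A00002", "C00003", "B00009"]
    terapack_of = {"A00001": "TP-A", "A00002": "TP-A",
                   "B00007": "TP-B", "B00009": "TP-B",
                   "C00003": "TP-C"}          # hypothetical mapping

    batches = defaultdict(list)
    for cartridge in pending_moves:
        batches[terapack_of[cartridge]].append(cartridge)

    # One TeraPack mount/dismount per batch instead of one per cartridge.
    for terapack, cartridges in batches.items():
        print(f"mount {terapack}: move {', '.join(cartridges)}")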

Visualization Enhancements with Kibana

The approach HPSS takes to monitoring is to facilitate integration by providing a suite of tools that administrators can use to pull information about HPSS into their preferred workflow. A number of additional tools and log messages, documented here, have been added to present raw statistical information captured by HPSS.

A site which does not have existing visualization infrastructure for HPSS can use the provided Kibana dashboards to quickly obtain useful HPSS monitoring. A site which already has visualization infrastructure can treat these as additional sources of data and integrate them into its existing monitoring.

Beginning in 10.1, HPSS is delivering templates for use with Kibana to provide more trend views. This includes server status, core server statistics, file counts and bytes by class of service or storage class, tape mounts, transfers by device and mover, and trashcan, migration, and purge statistics over time.

See the HPSS Installation Guide for more information.
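
As a sketch of loading one of the delivered templates, the Python snippet below imports a dashboard export into Kibana through Kibana's saved objects import API. The Kibana URL, credentials, and the template file name are placeholders, and the format (.ndjson export) is an assumption; the Installation Guide describes the actual templates and where they are installed.

    # Minimal sketch, assuming the HPSS dashboard templates are Kibana saved
    # object exports (.ndjson). URL, credentials, and file name are placeholders.
    import requests

    KIBANA_URL = "http://kibana.example.com:5601"
    TEMPLATE = "hpss_core_server_dashboard.ndjson"   # hypothetical file name

    with open(TEMPLATE, "rb") as f:
        resp = requests.post(
            f"{KIBANA_URL}/api/saved_objects/_import",
            params={"overwrite": "true"},
            headers={"kbn-xsrf": "true"},            # required by Kibana's API
            files={"file": (TEMPLATE, f, "application/ndjson")},
            auth=("elastic", "changeme"),            # placeholder credentials
        )
    resp.raise_for_status()
    print(resp.json())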

Low overhead read interface (lori)

lori is a new tool which behaves similarly to quaid and shares many of the same options including organizing requests by tape, callouts upon read completion, filtering, and more. However, while quaid stages files up to the top level of the hierarchy, lori uses PIO to read data out of HPSS with minimal overhead and into the selected output directory. See the lori man page for more details.

In the future, lori will take advantage of upcoming changes to enable mass recall with read recovery.

HPSS now supports 65535 alternate groups

HPSS now supports users belonging to 65535 alternate groups, up from 64.

Restricted Access

HPSS now provides finer-grained access controls than the “Restricted User” feature. The “Restricted User” feature was an on/off switch that granted or denied access to HPSS APIs for particular users. Restricted Access replaces Restricted User as the method for controlling HPSS client access.

Restricted Access allows for a site to restrict access to specific operations by user. Restricted Access duplicates the behavior of the Restricted User feature, but also provides finer-grained control by operation such as restricting copies, creates, writes, and stages.

See the HPSS Management Guide for more information.
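
As a toy illustration of the idea (not the HPSS configuration format), the sketch below denies specific operations to specific users while leaving everyone else unrestricted; the user names are invented, and the operation names follow the examples above.

    # Toy illustration of per-user, per-operation restriction (not HPSS syntax).
    RESTRICTIONS = {
        "guest":  {"create", "write", "copy", "stage"},   # effectively read-only
        "ingest": {"stage"},                              # may write, not stage
    }

    def is_allowed(user: str, operation: str) -> bool:
        """Allow everything except operations explicitly restricted for a user."""
        return operation not in RESTRICTIONS.get(user, set())

    assert is_allowed("alice", "write")        # unlisted users are unrestricted
    assert not is_allowed("guest", "create")   # guest may not create files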

Files now show the type of media in extended attributes

Extended attribute calls (e.g. hpss_FileGetXAttributes) which retrieve level information now include the media type for each tape reported. The media type can be converted to a more useful string with hpss_MediaTypeString(). See the HPSS Programmer's Reference for more details.

dump_acct_sum will now dump the bandwidth table

dump_acct_sum will now present the bandwidth table (bytes read and written) by account/UID/GID/COS using the -b option. See the man page for more information.

dumppv_pvl now displays the HPSS label format

dumppv_pvl now outputs the tape label format for each volume.

See the man page for more information.

lscos and lsvol now support JSON output

lscos and lsvol now support outputting in JSON format with -j.

See the tools' respective man pages for details.
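
A short sketch of consuming the new output from a script, assuming only that -j produces a single valid JSON document (the exact structure is described by the tools themselves):

    # Sketch: capture lscos JSON output and pretty-print it; no assumptions are
    # made about the JSON structure beyond it being one JSON document.
    import json
    import subprocess

    result = subprocess.run(["lscos", "-j"],
                            capture_output=True, text=True, check=True)
    cos_data = json.loads(result.stdout)
    print(json.dumps(cos_data, indent=2, sort_keys=True))

    # lsvol accepts the same -j flag and can be consumed the same way.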

Toggle RAO on/off per tape device

RAO (Recommended Access Order) can now be toggled on or off for each tape device. The default is ON.

See the HPSS Management Guide for more information.

Repack will log media access INFO logs

Repack now logs information about source and destination tapes involved in repack operations as INFO logs. Individual file access is not logged for repack.

HPSS server metrics tool

A new tool has been created, hpss_server_metrics. This tool provides system statistics which previously could only be gathered through a variety of other tools and APIs.

The server metrics tool provides statistics in both human-readable and JSON format, and can also reset statistics for certain reports to generate interval-based statistics.

See the hpss_server_metrics man page for more information.
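
One way to feed the tool into a monitoring pipeline is sketched below; the --json option shown is a placeholder for whichever flag the tool actually uses to select JSON output, so check the man page before using it.

    # Sketch: poll hpss_server_metrics on an interval and forward the JSON.
    # "--json" is a placeholder flag; consult the man page for the real option.
    import json
    import subprocess
    import time

    INTERVAL_SECONDS = 300

    while True:
        result = subprocess.run(["hpss_server_metrics", "--json"],
                                capture_output=True, text=True, check=True)
        metrics = json.loads(result.stdout)
        # Forward to the local monitoring pipeline here, e.g. append to a file
        # that Filebeat ships to Elasticsearch for the Kibana dashboards above.
        print(json.dumps(metrics))
        time.sleep(INTERVAL_SECONDS)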

HPSS DB metrics tool

A new tool has been created, hpss_db_metrics. This tool provides database statistics which previously could only be gathered through out-of-band database queries.

The db metrics tool provides statistics in both human-readable and JSON format.

See the hpss_db_metrics man page for more information.

HPSS hpssmsg tool

The hpssmsg tool, which was written in C, has been rewritten in Python. It adds support for several new arguments to give users more flexibility in generating messages. It also removes the -u argument, which was used to affect the message ID displayed in the message; the new hpssmsg adds a -i argument that lets the user specify the message ID.

See the hpssmsg man page for more information.


Come meet with us!
HPSS @ MSST 2023
The 37th International Conference on Massive Storage Systems and Technology will be in Santa Clara, California in May of 2023 - Learn More. Please contact us if you would like to meet with the IBM business and technical leaders of HPSS at Santa Clara University.

HPSS @ ISC 2023
ISC 2023 is the event for high performance computing, machine learning, and data analytics, and will be in Hamburg, Germany from May 21st through May 25th, 2023 - Learn More. As we have done each year (pre-pandemic), we are scheduling and meeting with folks attending the conference. Please contact us to meet with the IBM business and technical leaders of HPSS.

2023 HUF
The 2023 HPSS User Forum (HUF) will be an in-person event scheduled October 23-27, 2023, in Tucson, AZ or perhaps Washington, DC. Please check back for registration details. This will be a great opportunity to hear from HPSS users, collaboration developers, testers, support folks and leadership (from IBM and DOE Labs). Please contact us if you are not a customer but would like to attend.

HPSS @ SC23
The 2023 international conference for high performance computing, networking, storage and analysis will be in Denver, CO from November 12th through 17th, 2023 - Learn More. As we have done each year (pre-pandemic), we are scheduling and meeting with customers via IBM Single Client Briefings. Please contact your local IBM client executive or contact us to schedule an HPSS Single Client Briefing to meet with the IBM business and technical leaders of HPSS.

HPSS @ STS 2024
The 5th Annual Storage Technology Showcase is in the planning phase, but HPSS expects to support the event. Check out their web site - Learn More.

What's New?
HPSS 10.2 Release - HPSS 10.2 was released on February 16th, 2023 and introduces six new features and numerous minor updates.

Celebrating 30 Years - 2022 marks the 30th anniversary of the High Performance Storage System (HPSS) Collaboration.

HPSS 10.1 Release - HPSS 10.1 was released on September 30th, 2022 and introduces fourteen new features and numerous minor updates.

Lots of Data - In March 2022, IBM/HPSS delivered a storage solution to a customer in Canada, and demonstrated a sustained tape ingest rate of 33 GB/sec (2.86 PB/day peak tape ingest x 2 for dual copy), while simultaneously demonstrating a sustained tape recall rate of 24 GB/sec (2.0 PB/day peak tape recall). HPSS pushed six 18-frame IBM TS4500 tape libraries (scheduled to house over 1.6 Exabytes of tape media) to over 3,000 mounts/hour.

HPSS 9.3 Release - HPSS 9.3 was released on December 14th, 2021 and introduces eight new features and numerous minor updates.

HUF 2021 - The HPSS User Forum was hosted virtually at no cost in October 2021.

DOE Announces HPSS Milestone - Todd Heer, Deputy Program Lead, Advanced Simulation and Computing (ASC) Facilities, Operations, and User Support (FOUS), announced that DOE High Performance Storage Systems (HPSS) eclipsed one exabyte of stored data.

Atos Press Release - Atos boosts Météo-France’s data storage capacity to over 1 exabyte in 2025 to improve numerical modeling and climate predictions. Want to read more?

HPSS 9.2 Release - HPSS 9.2 was released on May 11th, 2021 and introduces eight new features and numerous minor updates.

Capacity Leader - ECMWF (European Centre for Medium-Range Weather Forecasts) has a single HPSS namespace with over 778 PB spanning over 523 million files.

File-Count Leader - LLNL (Lawrence Livermore National Laboratory) has a single HPSS namespace with over 77 PB spanning 1.735 billion files.

Older News - Want to read more?
  • LLNL
  • LANL
  • NERSC
  • ORNL
  • Sandia
  • IBM
  • ANL
  • Boeing
  • BNL
  • CEA
  • CNES
  • DWD
  • ECMWF
  • PNNL EMSL
  • HLRS
  • IU
  • IITM
  • IN2P3
  • JAXA
  • KEK
  • KIT
  • Met Office
  • MPCDF
  • Meteo France
  • NASA ASDC
  • NASA LaRC
  • Nasjonalbiblioteket
  • NCMRWF
  • NOAA CLASS
  • NOAA NCEI
  • NOAA R&D
  • Purdue
  • SciNet
  • SSC
  • SLAC
  • UTAS