High Performance Storage System

Incremental Scalability
Based on storage needs and deployment schedules, HPSS scales incrementally by adding computer, network and storage resources. A single HPSS namespace can scale from petabytes of data to exabytes of data, from millions of files to billions of files, and from a few file-creates per second to thousands of file-creates per second.

The following is an overview of some of the new features that can be found in HPSS 7.5.3.

New to HPSS 7.5.3

Improved mount rate with Spectra Logic T-Finity robotics

The SCSI PVR now takes advantage of several Spectra Logic T-Finity features for improved mount throughput. This support is available only in the hpss_scsi_pvr_beta executable (see the Management Guide for more information). The SCSI PVR now handles multiple zones within the Spectra Logic T-Finity robot and prioritizes movement within the same TeraPack.
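
Below is a minimal Python sketch of the scheduling idea described above, assuming hypothetical request fields named "zone" and "terapack"; it is illustrative only and is not HPSS source code.

    from collections import defaultdict

    def order_mounts(requests):
        """Order mount requests so the robot works zone by zone and keeps
        cartridges from the same TeraPack together, reducing robot travel."""
        by_zone = defaultdict(list)
        for req in requests:
            by_zone[req["zone"]].append(req)
        ordered = []
        for zone in sorted(by_zone):
            # Within a zone, dispatch same-TeraPack mounts back to back.
            ordered.extend(sorted(by_zone[zone], key=lambda r: r["terapack"]))
        return ordered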

Fileset Conversion Tool

HPSS now provides a tool for merging one fileset into a second fileset without copying data. Instead, the change is a namespace operation. The first fileset is removed, and all of its subdirectories, files, and other namespace objects move to the second fileset. The root directory of the first fileset becomes a subdirectory within the second fileset.

See the nsde man page for details and usage.
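
As a toy illustration of why no data is copied, here is a small Python model of the merge as a pure namespace operation; the dictionary layout and function name are hypothetical and unrelated to nsde internals.

    def merge_filesets(filesets, src, dst):
        """Merge fileset src into fileset dst: src is removed and its root
        becomes a subdirectory of dst. Only the namespace changes."""
        a = filesets.pop(src)                    # the first fileset is removed
        filesets[dst]["children"][a["root"]] = a["children"]

    filesets = {
        "projA": {"root": "projA", "children": {"data.dat": None, "logs": {}}},
        "projB": {"root": "projB", "children": {"readme": None}},
    }
    merge_filesets(filesets, "projA", "projB")
    # filesets["projB"]["children"]["projA"] now holds projA's entire subtree.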

Fileset metadata helper now runs automatically upon startup

The hpss_db2_bindall program is now invoked automatically by the HPSS startup (rc) mechanism. hpss_db2_bindall facilitates the Core Server's communication between databases for fileset operations.

Quaid: Enhancements for Full Aggregate Recall (FAR) and Logical Offset Ordering

Quaid has been enhanced to sort stage requests into batches by logical tape offset, and to selectively enable or disable FAR based on the number of files that appear to be in the same aggregate.

See the Quaid man page for details.
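
The batching policy can be pictured with a short Python sketch; the request shape and the FAR threshold below are assumptions for illustration, not Quaid's actual values.

    from itertools import groupby

    FAR_THRESHOLD = 4  # hypothetical: files per aggregate before FAR pays off

    def plan_stages(requests):
        """requests: (aggregate_id, logical_offset, path) tuples. Yields
        (use_far, batch) pairs with batches in logical tape offset order."""
        ordered = sorted(requests, key=lambda r: (r[0], r[1]))
        for agg, group in groupby(ordered, key=lambda r: r[0]):
            batch = list(group)          # already sorted by logical offset
            yield len(batch) >= FAR_THRESHOLD, batch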

Common HPSS tools now provide output in JSON format

JSON output makes it easier to use HPSS information in other tools and contexts, since JSON is easy to process programmatically from Python and other languages. HPSS now provides JSON output support for rtmu, dump_acct_sum, lshpss, dumppv_pvl, dumppv_pvr, and lspvhist.

See their respective man pages for more information.
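
For example, a few lines of Python are enough to consume the JSON output of these tools; the "--json" flag below is an assumption, so check each tool's man page for the actual option name.

    import json
    import subprocess

    def run_json(cmd):
        """Run an HPSS reporting tool and parse its JSON output."""
        result = subprocess.run(cmd, capture_output=True, text=True, check=True)
        return json.loads(result.stdout)

    # Hypothetical invocation: load the PVL volume dump as Python objects.
    # volumes = run_json(["dumppv_pvl", "--json"])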

Improved Mover Logging

Mover logging has been improved to better indicate why a drive was marked suspect or disabled by the system, and now also includes SCSI diagnostic information for tape mark write failures.


Come meet with us!
HPSS @ ISC 2022
ISC 2022 is the event for high performance computing, machine learning, and data analytics, and will be in Frankfurt, Germany from May 29th through June 2nd, 2022 - Learn More. As we did each year before the pandemic, we are scheduling meetings with folks attending the conference. Please contact us to schedule an HPSS Single Client Briefing to meet with the IBM business and technical leaders of HPSS.

HPSS @ STS 2022 - postponed
The 4th Annual Storage Technology Showcase is in the planning phase, but HPSS expects to support the event later this year. Check out their web site - Learn More.

HPSS @ MSST 2022
The 37th International Conference on Massive Storage Systems and Technology will be in Santa Clara, California in May of 2022 - Learn More. Please contact us if you would like to meet with the IBM business and technical leaders of HPSS at Santa Clara University.

2022 HUF
The 2022 HPSS User Forum (HUF) is in the planning phase. The 2021 HUF was hosted online for six days spread across three weeks in October 2021 with no admission cost. We are planning to meet in person this year. Please check back next quarter for details. This will be a great opportunity to hear from HPSS users, collaboration developers, testers, support folks and leadership (from IBM and DOE Labs). Please contact us if you are not a customer but would like to attend.

HPSS @ SC22
The 2022 international conference for high performance computing, networking, storage and analysis will be in Dallas, TX from November 14th through 17th, 2022 - Learn More. As we did each year before the pandemic, we are scheduling meetings with customers via IBM Single Client Briefings. Please contact your local IBM client executive or contact us to schedule an HPSS Single Client Briefing to meet with the IBM business and technical leaders of HPSS.

What's New?
HPSS 9.3 Release - HPSS 9.3 was released on December 14th, 2021 and introduces eight new features and numerous minor updates.

HUF 2021 - The HPSS User Forum was hosted virtually at no cost in October 2021.

DOE Announces HPSS Milestone - Todd Heer, Deputy Program Lead, Advanced Simulation and Computing (ASC) Facilities, Operations, and User Support (FOUS), announced that DOE High Performance Storage Systems (HPSS) have eclipsed one exabyte in stored data.

Atos Press Release - Atos boosts Météo-France’s data storage capacity to over 1 exabyte in 2025 to improve numerical modeling and climate predictions. Want to read more?

HPSS 9.2 Release - HPSS 9.2 was released on May 11th, 2021 and introduces eight new features and numerous minor updates.

Capacity Leader - ECMWF (European Centre for Medium-Range Weather Forecasts) has a single HPSS namespace with over 650 PB spanning over 439 million files.

File-Count Leader - LLNL (Lawrence Livermore National Laboratory) has a single HPSS namespace with over 70 PB spanning 1.638 billion files.

Older News - Want to read more?
  • LLNL"
  • LANL"
  • NERSC"
  • ORNL"
  • Sandia"
  • IBM"
  • ANL"
  • Boeing"
  • BNL"
  • CEA"
  • CNES"
  • DWD"
  • DKRZ"
  • ECMWF"
  • PNNL
  • HLRS"
  • IU"
  • IITM"
  • IN2P3"
  • JAXA"
  • KEK"
  • KIT"
  • Met
  • MPCDF"
  • Meteo
  • NASA
  • NASA
  • NCMRWF"
  • NOAA
  • NOAA
  • NOAA
  • NOAA
  • Purdue"
  • SciNet"
  • SSC"
  • SLAC"
  • UTAS"
Home    |    About HPSS    |    Services    |    Contact us
Copyright 1992 - 2021, HPSS Collaboration. All Rights Reserved.