High Performance Storage System

Incremental Scalability
Based on storage needs and deployment schedules, HPSS scales incrementally by adding computer, network and storage resources. A single HPSS namespace can scale from petabytes of data to exabytes of data, from millions of files to billions of files, and from a few file-creates per second to thousands of file-creates per second.
New to HPSS Version 9.3.0

Improved Spectra Logic TeraPack support

HPSS groups tape moves belonging to the same Spectra Logic TeraPack for improved mount and dismount performance.
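The grouping idea can be sketched as follows. This is an illustrative sketch, not HPSS source; `batch_by_terapack` and `pack_of` are invented names, with `pack_of` standing in for whatever lookup maps a cartridge to its TeraPack.

```python
from itertools import groupby

def batch_by_terapack(moves, pack_of):
    """Group pending tape moves so cartridges in the same TeraPack are
    handled together, minimizing TeraPack mounts and dismounts.

    moves:   iterable of cartridge identifiers
    pack_of: function mapping a cartridge to its TeraPack id (hypothetical)
    """
    ordered = sorted(moves, key=pack_of)            # cluster same-pack moves
    return [list(group) for _, group in groupby(ordered, key=pack_of)]
```

For example, `batch_by_terapack(["A1", "B2", "A3"], pack_of=lambda c: c[0])` batches the two pack-"A" cartridges into one group so the pack is handled once.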

Externalize disk mover and disk volume I/O metrics

The HPSS Core Server's Storage Service maintains metrics about how busy disk movers and disk virtual volumes (VVs) are. These metrics measure how many I/O tasks (reads and writes) are currently running against the disk movers and their disk VVs. The metrics are used to load-balance disk allocations across the available movers and VVs.
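As a sketch of how such counters can drive load-balancing (illustrative only; the function name and data layout are assumptions, not the Core Server's actual code):

```python
def pick_least_loaded(io_task_counts):
    """Pick the disk mover (or VV) with the fewest in-flight I/O tasks.

    io_task_counts: dict mapping resource name -> number of currently
    running reads/writes (the metric described above).
    """
    return min(io_task_counts, key=io_task_counts.get)

# A new disk allocation would go to the least-busy resource:
pick_least_loaded({"mover1": 4, "mover2": 1, "mover3": 7})  # -> "mover2"
```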

Prior to HPSS 9.3, these disk mover and VV I/O metrics were entirely internal to the Core Server. As of HPSS 9.3, the metrics can be viewed externally in the Core Server state dump and via the `showdiskmaps` tool. This can be especially useful during debugging sessions or when investigating disk load-balancing problems.

The new functionality consists of two related features:

A log of all allocations, reads, and writes and how they affect the metrics. This information is available in the Core Server state dump, obtained by sending `SIGHUP` to the `hpss_core` process.

A point-in-time snapshot of disk and mover I/O metrics. This is available in the `showdiskmaps` tool's output. NOTE: the version of `showdiskmaps` provided with HPSS 9.3 will *not* be able to read disk free space map snapshot files from prior versions of HPSS. See the relevant disk VV I/O metrics section of the HPSS Management Guide for more details.

Command line tool for aborting I/O jobs

HPSS now provides a command line tool for aborting in-progress I/O requests.

Add the capability for PFTP clients to terminate in-progress I/O requests

PFTP clients can now cancel in-progress parallel transfers.

Add support for deferred tape labeling on import

When the 'Defer Labeling' option is selected, tapes in the import are not labeled at import time; instead, each tape is labeled when it is first used for a write.
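A minimal sketch of the deferred-labeling behavior (illustrative only; the class and method names are hypothetical, not HPSS source):

```python
class ImportedTape:
    """Tape imported with the 'Defer Labeling' option: the label is
    written lazily on the first write rather than at import time."""

    def __init__(self, volume_id, defer_labeling=False):
        self.volume_id = volume_id
        self.labeled = not defer_labeling   # labeled immediately unless deferred

    def write(self, data):
        if not self.labeled:                # first write: label the tape now
            self.labeled = True
        # ... write `data` to the tape ...
```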

Improve mover performance when staging files within an aggregate

The Mover has a new positioning mode that enables it to skip verifying tape section headers and position directly to the requested data. Reads that start at the beginning of a section still verify the tape section header.

Verifying the section header gives HPSS additional assurance that the Mover has positioned correctly; however, modern tape drives have an excellent track record of positioning to the correct location on the media. When reading multiple files within aggregates, or issuing small reads within a large file on tape, header verification during positioning can take significant time and generate additional wear on the media.

By default, this mode is OFF and positioning continues to work as it has previously.

To begin using the fast positioning mode, set the “Fast Positioning” flag in the tape device to ON via SSM.

Another setting that modifies this behavior is HPSS_MVR_LOCATE_BLK_THRESH (default: 0). When fast positioning is enabled, this setting causes the Mover to still verify the tape header whenever the requested position is within the given number of blocks of the header. For example, if HPSS_MVR_LOCATE_BLK_THRESH is set to 100, the header is verified whenever the target block is within 100 blocks of the header. This allows header verification to occur when the requested data lies close to the header, providing a finer-grained trade-off between caution and performance, if desired. After changing the environment variable, the Mover must be restarted for the change to take effect; the effect is node-wide.
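Putting the flag and the threshold together, the decision can be sketched as follows. This is illustrative pseudologic, not Mover source; the function name and arguments are invented, and only the environment variable name comes from the text above.

```python
import os

def should_verify_header(fast_positioning, target_block, header_block):
    """Decide whether the Mover verifies the tape section header before
    reading at target_block; header_block is the section header's block."""
    thresh = int(os.environ.get("HPSS_MVR_LOCATE_BLK_THRESH", "0"))
    if not fast_positioning:
        return True                  # legacy behavior: always verify
    if target_block == header_block:
        return True                  # reads at the section start still verify
    # verify only when the target lies within `thresh` blocks of the header
    return target_block - header_block <= thresh
```

With the flag ON and the threshold at its default of 0, only reads starting at the section header are verified; raising the threshold widens the window in which verification still occurs.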

Replace HPSS GAS library with industry-standard GSS

There is a critical step that must be followed for continued use of HPSS UNIX authentication after this change: you must copy config/templates/system-files/mech.hpss.conf to /etc/gss/mech.d/

FTP no longer maintains a set of users distinct from HPSS. Previously, this separation could present an FTP user with undue privileges if the UID chosen for the FTP user coincided with a privileged HPSS user's UID. In addition, HPSS logging could print wrong or unknown usernames when given an FTP user that did not exist in the HPSS database or that shared the UID of an HPSS user.

Because of the unification of FTP and HPSS users, hpssuser has been modified to take only one account type option, -acct. In addition, hpssuser now uses site.conf to determine which authentication and authorization methods are in use for the realm; the -unix, -krb, and -ldap options have therefore been removed.

Update the build process to create most of the RPMs from a single spec file

This change also updates the RPATH of executable and shared object files so that it no longer contains directories under /opt/hpss. Consequently, when HPSS is installed from RPMs, users should not rename the directory where HPSS is actually installed and then run from the renamed directory, even if they link that directory to /opt/hpss. Users may continue to link /opt/hpss to the installation directory and run from there; the operating system will locate what it needs via the RPATH entries pointing at the directories where the RPMs installed the files.


Come meet with us!
2022 HUF
The 2022 HPSS User Forum (HUF) will be an in-person event scheduled October 24-28, 2022, in Houston, TX. Please check back for registration details. This will be a great opportunity to hear from HPSS users, collaboration developers, testers, support folks and leadership (from IBM and DOE Labs). Please contact us if you are not a customer but would like to attend.

HPSS @ SC22
The 2022 international conference for high performance computing, networking, storage and analysis will be in Dallas, TX from November 14th through 17th, 2022 - Learn More. As we have done each year (pre-pandemic), we are scheduling and meeting with customers via IBM Single Client Briefings. Please contact your local IBM client executive or contact us to schedule an HPSS Single Client Briefing to meet with the IBM business and technical leaders of HPSS.

HPSS @ STS 2023
The 4th Annual Storage Technology Showcase has been postponed, but HPSS expects to support the event when it returns. Check out their web site - Learn More.

HPSS @ MSST 2023
The 37th International Conference on Massive Storage Systems and Technology will be in Santa Clara, California in May of 2023 - Learn More. Please contact us if you would like to meet with the IBM business and technical leaders of HPSS at Santa Clara University.

HPSS @ ISC 2023
ISC 2023 is the event for high performance computing, machine learning, and data analytics, and will be in Hamburg, Germany from May 21st through May 25th, 2023 - Learn More. As we have done each year (pre-pandemic), we are scheduling and meeting with folks attending the conference. Please contact us to meet with the IBM business and technical leaders of HPSS.

What's New?
Celebrating 30 Years - 2022 marks the 30th anniversary of the High Performance Storage System (HPSS) Collaboration.

HPSS 10.1 Release - HPSS 10.1 was released on September 30th, 2022 and introduces fourteen new features and numerous minor updates.

Lots of Data - In March 2022, IBM/HPSS delivered a storage solution to a customer in Canada, and demonstrated a sustained tape ingest rate of 33 GB/sec (2.86 PB/day peak tape ingest x 2 for dual copy), while simultaneously demonstrating a sustained tape recall rate of 24 GB/sec (2.0 PB/day peak tape recall). HPSS pushed six 18-frame IBM TS4500 tape libraries (scheduled to house over 1.6 Exabytes of tape media) to over 3,000 mounts/hour.

HPSS 9.3 Release - HPSS 9.3 was released on December 14th, 2021 and introduces eight new features and numerous minor updates.

HUF 2021 - The HPSS User Forum was hosted virtually at no cost in October 2021.

DOE Announces HPSS Milestone - Todd Heer, Deputy Program Lead, Advanced Simulation and Computing (ASC) Facilities, Operations, and User Support (FOUS), announced that DOE High Performance Storage Systems (HPSS) have eclipsed one exabyte in stored data.

Atos Press Release - Atos boosts Météo-France’s data storage capacity to over 1 exabyte in 2025 to improve numerical modeling and climate predictions. Want to read more?

HPSS 9.2 Release - HPSS 9.2 was released on May 11th, 2021 and introduces eight new features and numerous minor updates.

Capacity Leader - ECMWF (European Centre for Medium-Range Weather Forecasts) has a single HPSS namespace with over 712 PB spanning over 474 million files.

File-Count Leader - LLNL (Lawrence Livermore National Laboratory) has a single HPSS namespace with over 75 PB spanning 1.685 billion files.

Older News - Want to read more?
  • LLNL
  • LANL
  • NERSC
  • ORNL
  • Sandia
  • IBM
  • ANL
  • Boeing
  • BNL
  • CEA
  • CNES
  • DWD
  • DKRZ
  • ECMWF
  • PNNL
  • HLRS
  • IU
  • IITM
  • IN2P3
  • JAXA
  • KEK
  • KIT
  • Met
  • MPCDF
  • Meteo
  • NASA
  • NASA
  • NCMRWF
  • NOAA
  • NOAA
  • NOAA
  • NOAA
  • Purdue
  • SciNet
  • SSC
  • SLAC
  • UTAS
Copyright 1992 - 2021, HPSS Collaboration. All Rights Reserved.