High Performance Storage System

Incremental Scalability
Based on storage needs and deployment schedules, HPSS scales incrementally by adding computer, network and storage resources. A single HPSS namespace can scale from petabytes of data to exabytes of data, from millions of files to billions of files, and from a few file-creates per second to thousands of file-creates per second.

New to HPSS 8.1.0

The following is an overview of the three new features introduced in HPSS 8.1.0.

Ordered Migration by Directory

HPSS now supports the ability to co-locate data on tape by directory. The objective is to improve tape recall efficiency. This new feature is especially pertinent when combined with the Full Aggregate Recall (FAR) feature released in HPSS 7.5.3.

For sites whose users organize files by directory and usually recall the files in a given directory in bulk, combining Ordered Migration by Directory with Full Aggregate Recall may yield a 7x to 10x improvement in recall performance. HPSS 8.1 may therefore be able to meet site tape-recall requirements with far fewer tape drives than HPSS 7.5.3.

Ordered migration continues to honor the migration policy settings that govern when a file becomes eligible for migration, how often migration runs, and how many migration streams to use.
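To make the idea concrete, here is a minimal sketch of directory-ordered aggregate packing: migration candidates are sorted by parent directory so that sibling files land contiguously in the same tape aggregate. This is an illustration only, not HPSS internals; the build_aggregates helper and the aggregate capacity are assumptions invented for the example.

import os

AGGREGATE_CAPACITY = 300 * 1024**3  # hypothetical aggregate size, not an HPSS default

def build_aggregates(candidates):
    """candidates: iterable of (path, size) pairs eligible for migration."""
    # Sort by parent directory first, then by full path, so that files
    # from the same directory are written out together.
    ordered = sorted(candidates, key=lambda c: (os.path.dirname(c[0]), c[0]))
    aggregates, current, used = [], [], 0
    for path, size in ordered:
        # Start a new aggregate when the current one would overflow.
        if current and used + size > AGGREGATE_CAPACITY:
            aggregates.append(current)
            current, used = [], 0
        current.append(path)
        used += size
    if current:
        aggregates.append(current)
    return aggregates

Because a FAR recall reads back a whole aggregate in one pass, keeping a directory's files in as few aggregates as possible is what drives the recall speedup described above.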

Prevent tape-recall events from HPSS FUSE

The HPSS FUSE mount point may now be configured to prevent users from triggering HPSS tape-recall events. If the nostagetape HPSS FUSE mount point option is specified, data is never staged from files that reside only on tape. If the stagetape option is specified, or neither option is given, users trigger HPSS tape-recall events for files that are not in the HPSS disk cache.

Most UNIX commands issued through FUSE block on file I/O, so HPSS receives one tape-recall request at a time and has no opportunity to organize and optimize the tape-recall events. On the other hand, if a site uses the nostagetape option and monitors log messages, triggers can be put in place to process the error messages logged when a user tries to recall a file that is only on tape. When the collected files are then recalled in bulk, HPSS can organize and optimize the tape-recall events.
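A hedged sketch of that monitoring approach follows: it tails a log file, collects the paths from denied-stage messages for a short window, and then hands the whole batch to a single bulk-recall call. The log message format, the log path, and the recall callback are hypothetical placeholders, not documented HPSS interfaces.

import re
import time

# Assumed shape of the denied-stage log message; adjust to the real format.
DENIED = re.compile(r"stage denied .* path=(\S+)")

def watch_and_batch(log_path, batch_window=60.0, recall=print):
    """Tail log_path and issue one bulk recall per batch of denied stages."""
    pending = set()
    deadline = 0.0
    with open(log_path) as log:
        log.seek(0, 2)  # start at the end of the file, like tail -f
        while True:
            line = log.readline()
            if line:
                match = DENIED.search(line)
                if match:
                    if not pending:
                        deadline = time.monotonic() + batch_window
                    pending.add(match.group(1))
            else:
                time.sleep(0.5)  # wait for more log output
            if pending and time.monotonic() >= deadline:
                recall(sorted(pending))  # one bulk request instead of one per file
                pending.clear()

Batching is the whole point: given the full list at once, HPSS can order the recalls by tape and tape position instead of mounting a tape per file.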

HPSS accounting information by UID and GID

HPSS will now track accounting information by UID and GID rather than by Account ID alone.
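As a purely illustrative sketch (the record fields below are assumptions, not the actual HPSS accounting schema), tracking by UID and GID means usage is keyed by the (uid, gid) pair instead of being collapsed under one Account ID:

from collections import defaultdict

def summarize(records):
    """records: iterable of dicts with uid, gid, account_id and bytes_used."""
    usage = defaultdict(int)
    for record in records:
        # Key by (uid, gid) so one account's usage can be broken out per user and group.
        usage[(record["uid"], record["gid"])] += record["bytes_used"]
    return dict(usage)

records = [
    {"uid": 1001, "gid": 200, "account_id": 42, "bytes_used": 10},
    {"uid": 1001, "gid": 300, "account_id": 42, "bytes_used": 5},
]
print(summarize(records))  # {(1001, 200): 10, (1001, 300): 5}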


Come meet with us!
2021 HUF - VIRTUAL
COVID-19 has disrupted the 2021 HPSS User Forum (HUF) and the Karlsruhe Institute of Technology (KIT) in Karlsruhe, Germany is no longer hosting the event. The 2021 HUF will be hosted online for six days spread across three weeks in October 2021 with no admission cost. This will be a great opportunity to hear from HPSS users, collaboration developers, testers, support folks and leadership (from IBM and DOE Labs) - Learn More. Please contact us if you are not a customer but would like to attend.

HPSS @ SC21
The 2021 international conference for high performance computing, networking, storage and analysis will be in St. Louis, MO from November 15th through 18th, 2021 - Learn More. As we do each year, we are scheduling and meeting with customers via IBM Single Client Briefings. Please contact your local IBM client executive or contact us to schedule an HPSS Single Client Briefing to meet with the IBM business and technical leaders of HPSS.

HPSS @ STS 2022
The 4th Annual Storage Technology Showcase is in the planning stage, but HPSS expects to support the event in March of 2022. Check out their web site - Learn More. We expect an update in early fall 2021.

HPSS @ MSST 2022
The 37th International Conference on Massive Storage Systems and Technology will be in Santa Clara, California in May of 2022 - Learn More. Please contact us if you would like to meet with the IBM business and technical leaders of HPSS at Santa Clara University.

What's New?
HPSS 9.2 Release - HPSS 9.2 was released on May 11th, 2021 and introduces eight new features and numerous minor updates.

Atos Press Release - Atos boosts Météo-France’s data storage capacity to over 1 exabyte in 2025 to improve numerical modeling and climate predictions. Want to read more?

HUF 2020 - The HPSS User Forum was hosted virtually at no cost in October 2020.

HPSS 9.1 Release - HPSS 9.1 was released on September 24th, 2020 and introduces a few new features.

HPSS 8.3 Release - HPSS 8.3 was released on March 31st, 2020 and introduces one new feature and many minor changes.

HPSS 8.2 Release - HPSS 8.2 was released on December 6th, 2019 and introduces a few new features.

New Globus DSI - Version 2.9 of the HPSS DSI is now available from the GitHub release page. It provides the capability to resume interrupted Globus transfers.

Lots Of Data - In November 2019 IBM/HPSS delivered a system to a customer in Canada and demonstrated a sustained tape ingest rate of 11,574 MB/sec (1 PB/day peak tape ingest) while simultaneously demonstrating a sustained tape recall rate of 8,832 MB/sec (791 TB/day peak tape recall). HPSS pushed four 13-frame IBM TS4500 tape libraries (scheduled to house over 500 PB of tape media) to 2,168 mounts/hour.

HPSS 8.1 Release - HPSS 8.1 was released on October 1st, 2019 and introduces a few new features.

July 2019 - Argonne Team Breaks Record for Globus Data Movement from the Summit supercomputer at Oak Ridge National Laboratory to HPSS tape.

Capacity Leader - ECMWF (European Centre for Medium-Range Weather Forecasts) has a single HPSS namespace with over 567 PB spanning over 399 million files.

File-Count Leader - LLNL (Lawrence Livermore National Laboratory) has a single HPSS namespace with over 65 PB spanning 1.540 billion files.

Explosive data growth - HPSS Collaboration leadership from Lawrence Berkeley National Laboratory's National Energy Research Scientific Computing Center (NERSC) helped author the "NERSC Storage 2020" report, and NERSC trusts HPSS to meet their immediate and long-term data storage challenges.

Older News - Want to read more?