High Performance Storage System

Incremental Scalability
Based on storage needs and deployment schedules, HPSS scales incrementally by adding computer, network and storage resources. A single HPSS namespace can scale from petabytes of data to exabytes of data, from millions of files to billions of files, and from a few file-creates per second to thousands of file-creates per second.
New to HPSS Version 9.2.0

Improved Spectra Logic TeraPack support

HPSS groups tape moves belonging to the same Spectra Logic TeraPack for improved mount and dismount performance.

Spectra Logic TAOS support

HPSS now supports the Spectra Logic Time-based Access Order System (TAOS) to reduce the file-to-file seek time when recalling multiple files from an LTO tape cartridge. Details of the improved recall performance are found at the bottom of page 11: up to a 4x reduction in the overall time needed to retrieve all of the files, compared to recalls in random order.

Increased the maximum individual disk size limit

The HPSS maximum individual disk size limit has been increased from 256 TB to 1 EiB.

Improved adler32 hash performance

HPSS is now compiled with zlib to support hardware acceleration for improved adler32 hash performance.
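
For reference, the adler32 in question is the checksum routine exported by zlib. A minimal sketch of checksumming a file with it might look like the following (the file name is illustrative, and this is not HPSS code; a zlib build with hardware acceleration speeds up this same call transparently):

    /* Build with: cc adler_demo.c -lz */
    #include <stdio.h>
    #include <zlib.h>

    int main(void)
    {
        unsigned char buf[8192];
        size_t n;
        uLong adler = adler32(0L, Z_NULL, 0);          /* initial adler value */
        FILE *f = fopen("archive_member.dat", "rb");   /* illustrative file   */

        if (f == NULL)
            return 1;
        while ((n = fread(buf, 1, sizeof buf, f)) > 0)
            adler = adler32(adler, buf, (uInt)n);      /* running checksum    */
        fclose(f);

        printf("adler32: %08lx\n", adler);
        return 0;
    }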

Support LTO-9

HPSS supports LTO-9 drives and media. The RAO (recommended access order) feature that is available on LTO-9 is also supported by HPSS.

Support for very large log messages

HPSS can generate very large log messages that would otherwise be truncated. HPSS now splits such messages into multiple smaller messages to prevent truncation. Split messages are logged with a “chunk number” so they can be stitched back together using the hpss_log_formatter tool.
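
As a rough illustration of the chunking idea (the 512-byte limit and the "[chunk i/N]" record layout below are invented for the example and are not HPSS's actual log format):

    #include <stdio.h>
    #include <string.h>

    /* Emit one long message as several smaller records, each tagged with a
     * chunk number and the total chunk count so a formatter can reassemble
     * them.  Illustrative only; not HPSS code. */
    static void log_chunked(const char *msg)
    {
        const size_t limit = 512;                     /* per-record payload cap */
        size_t len = strlen(msg);
        size_t nchunks = (len + limit - 1) / limit;   /* round up */

        for (size_t i = 0; i < nchunks; i++) {
            size_t off = i * limit;
            size_t n = (len - off < limit) ? len - off : limit;
            printf("[chunk %zu/%zu] %.*s\n", i + 1, nchunks, (int)n, msg + off);
        }
    }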

Improved automatic small file tape aggregation performance

A tape drive’s native transfer rate can only be achieved when the tape drive performs IO for 15 seconds or more, so a 250 MB/s tape drive must write roughly 4 GB (250 MB/s × 15 s) to reach its native rate. HPSS automatic small file aggregation is now based on the amount of data written to tape. Before HPSS 9.2, automatic small file aggregation was based on the number of files put into the aggregate, which was capped at 1,000 files; that cap has been raised from 1,000 to 50,000 files. To further improve automatic small file aggregation performance, HPSS has improved end-of-media (EOM) handling when writing small file tape aggregates, which also increases tape cartridge space utilization.
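
As a back-of-the-envelope check of that sizing rule (the 250 MB/s and 15-second figures come from the paragraph above; the helper itself is illustrative and not part of HPSS):

    #include <stdio.h>

    /* Minimum amount of data an aggregate must carry for the drive to
     * stream at its native rate: bytes = rate x minimum streaming time. */
    int main(void)
    {
        const double rate_mb_per_s = 250.0;   /* native tape drive rate         */
        const double min_stream_s  = 15.0;    /* time needed to reach that rate */
        double min_aggregate_gb = rate_mb_per_s * min_stream_s / 1000.0;

        printf("Minimum aggregate size: %.2f GB\n", min_aggregate_gb); /* ~3.75 GB */
        return 0;
    }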

Improved small file tape aggregate support with HPSS repack tool

HPSS repack tool also had the same 1,000 file limit before HPSS 9.2. Similar small file tape aggregation improvements (as stated above) were made to the HPSS repack tool.

User Defined Attributes (UDA) support via PFTP

PFTP now supports getting and setting UDAs.


Come meet with us!
2023 HUF
The 2023 HPSS User Forum (HUF) will be an in-person event scheduled October 30th through November 3rd, 2023, in Herndon, VA. This will be a great opportunity to hear from HPSS users, collaboration developers, testers, support folks and leadership (from IBM and DOE Labs). Would you like to Learn More? Please contact us if you are not a customer but would like to attend.

HPSS @ SC23
The 2023 international conference for high performance computing, networking, storage and analysis will be in Denver, CO from November 12th through 17th, 2023 - Learn More. As we have done each year (pre-pandemic), we are scheduling and meeting with customers via IBM Single Client Briefings. Please contact your local IBM client executive or contact us to schedule an HPSS Single Client Briefing to meet with the IBM business and technical leaders of HPSS.

HPSS @ STS 2024
The 5th Annual Storage Technology Showcase is in the planning phase, but HPSS expects to support the event. Check out their web site - Learn More.

HPSS @ MSST 2024
The 38th International Conference on Massive Storage Systems and Technology will be in Santa Clara, California in May of 2024 - Learn More. Please contact us if you would like to meet with the IBM business and technical leaders of HPSS at Santa Clara University.

HPSS @ ISC 2024
ISC 2024 is the event for high performance computing, machine learning, and data analytics, and will be in Hamburg, Germany at the Congress Center Hamburg, from May 12th through May 16th, 2024 - Learn More. As we have done each year (pre-pandemic), we are scheduling and meeting with folks attending the conference. Please contact us to meet with the IBM business and technical leaders of HPSS.

What's New?
HPSS 10.2 Release - HPSS 10.2 was released on February 16th, 2023 and introduces six new features and numerous minor updates.

HUF 2022 - The HPSS User Forum was hosted by IBM in October 2022 at the IBM Kurland building in Houston.

Celebrating 30 Years - Fall 2022 marks the 30th anniversary of the High Performance Storage System (HPSS) Collaboration.

HPSS 10.1 Release - HPSS 10.1 was released on September 30th, 2022 and introduces fourteen new features and numerous minor updates.

Lots of Data - In March 2022, IBM/HPSS delivered a storage solution to a customer in Canada, and demonstrated a sustained tape ingest rate of 33 GB/sec (2.86 PB/day peak tape ingest x 2 for dual copy), while simultaneously demonstrating a sustained tape recall rate of 24 GB/sec (2.0 PB/day peak tape recall). HPSS pushed six 18-frame IBM TS4500 tape libraries (scheduled to house over 1.6 Exabytes of tape media) to over 3,000 mounts/hour.

DOE Announces HPSS Milestone - Todd Heer, Deputy Program Lead, Advanced Simulation and Computing (ASC) Facilities, Operations, and User Support (FOUS), announced that DOE High Performance Storage Systems (HPSS) have eclipsed one exabyte of stored data.

Atos Press Release - Atos boosts Météo-France’s data storage capacity to over 1 exabyte in 2025 to improve numerical modeling and climate predictions. Want to read more?

Capacity Leader - ECMWF (European Centre for Medium-Range Weather Forecasts) has a single HPSS namespace with over 824 PB spanning over 556 million files.

File-Count Leader - LLNL (Lawrence Livermore National Laboratory) has a single HPSS namespace with over 78 PB spanning 1.746 billion files.

Older News - Want to read more?
  • LLNL
  • LANL
  • ORNL
  • Sandia
  • IBM
  • ANL
  • Boeing
  • BNL
  • CEA
  • CNES
  • DWD
  • HLRS
  • IU
  • IITM
  • IN2P3
  • JAXA
  • KEK
  • KIT
  • Met Office
  • Meteo France
  • Nasjonalbiblioteket
  • NOAA R&D
  • Purdue
  • SciNet
  • SSC
  • SLAC
  • UTAS