High Performance Storage System

Incremental Scalability
Based on storage needs and deployment schedules, HPSS scales incrementally by adding computer, network and storage resources. A single HPSS namespace can scale from petabytes of data to exabytes of data, from millions of files to billions of files, and from a few file-creates per second to thousands of file-creates per second.
New to HPSS Version 10.2.0

Purge on migrate flag

HPSS now supports setting a flag on a bitfile which informs HPSS that the file should not be kept on disk after migration. This can be used for files which an application or user knows will not be needed again soon. When a file is written with this flag set, the file will immediately be purged off disk after it has been migrated to tape. When a file is staged with this flag enabled, it will behave normally, according to the purge policy. The purge on migrate flag can be set using the HPSS C API, the HPSS Python client, and scrub. See the Programmer's Reference for more information.
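The lifecycle described above can be sketched as a toy state model. This is purely illustrative: the class and method names below are hypothetical stand-ins, not the real HPSS calls, which are documented in the Programmer's Reference.

```python
# Toy model of the purge-on-migrate lifecycle described above.
# Class and method names are illustrative only; the actual HPSS C API
# and Python client calls are documented in the Programmer's Reference.

class Bitfile:
    def __init__(self, name, purge_on_migrate=False):
        self.name = name
        self.purge_on_migrate = purge_on_migrate
        self.on_disk = True     # a newly written file lands on disk
        self.on_tape = False

    def migrate(self):
        """Copy the file to tape; purge the disk copy immediately
        if the purge-on-migrate flag is set."""
        self.on_tape = True
        if self.purge_on_migrate:
            self.on_disk = False   # purged right after migration

    def stage(self):
        """Stage the file back to disk. A staged file behaves normally:
        its disk copy is later removed by the purge policy, not here."""
        self.on_disk = True

f = Bitfile("archive.dat", purge_on_migrate=True)
f.migrate()
print(f.on_disk, f.on_tape)   # False True: disk copy purged after migration
f.stage()
print(f.on_disk)              # True: staged copy stays until the purge policy runs
```

The key asymmetry the feature introduces is visible in the model: the flag changes behavior only on the migrate path, while the stage path remains governed by the purge policy.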

Low overhead read interface (LORI)

lori is a new tool which behaves similarly to quaid and shares many of the same options including organizing requests by tape, callouts upon read completion, filtering, and more. However, while quaid stages files up to the top level of the hierarchy, lori uses PIO to read data out of HPSS with minimal overhead and into the selected output directory. See the lori man page for more details.

In the future, lori will take advantage of upcoming changes to enable mass recall with read recovery.

Discover new control paths in the SCSI PVR during runtime

The SCSI PVR can now re-scan the SCSI Media Changer devices it has access to, looking for new control paths to the tape library. This is triggered by performing a “Reinit” of the SCSI PVR server. Note that the re-scan will detect new devices, but will only remove old devices that are no longer in use by the SCSI PVR. Old devices that may still be in use but no longer exist will be marked DOWN and will not be used by the SCSI PVR.

Optional ASAN-enabled versions of HPSS Servers

RPMs are now provided for ASAN-enabled versions of HPSS servers. These special binaries are otherwise the same as the standard binaries, but are compiled with AddressSanitizer (ASAN) support to improve memory-issue detection. They can be used to spot certain kinds of memory-use errors that lead to crashes or memory leaks, and the information they provide can reduce time to resolution for these defect categories. One major advantage over prior tools is performance: an ASAN-enabled server is typically only modestly slower (20-30%) than the standard server, whereas prior tools could be multiple times slower.

Disable Direct IO

Historically, the HPSS Mover only performed direct I/O to disk devices. The HPSS administrator may now disable direct I/O for faster disk transfer performance, and HPSS end-to-end data integrity can be used to identify silent data corruption.
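To illustrate what "direct I/O" means at the operating-system level, here is a minimal, Linux-only Python sketch using `O_DIRECT`, which bypasses the kernel page cache. This is not HPSS Mover code; the file path is arbitrary, and the sketch simply shows the alignment constraint that direct I/O imposes (buffer, offset, and length aligned, typically to 512 bytes or the filesystem block size).

```python
# Minimal illustration of direct I/O on Linux via O_DIRECT.
# Not HPSS code: a generic sketch of the OS-level mechanism.
# O_DIRECT requires aligned buffers; an anonymous mmap is page-aligned.
import mmap
import os

def direct_write(path, payload, size=4096):
    """Try to write `size` bytes with O_DIRECT. Returns (supported, ok):
    supported is False if the platform or filesystem (e.g. tmpfs)
    rejects O_DIRECT; ok reports whether the data read back correctly."""
    if not hasattr(os, "O_DIRECT"):
        return False, False             # non-Linux platform
    buf = mmap.mmap(-1, size)           # anonymous mapping: page-aligned
    buf[:len(payload)] = payload
    try:
        fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_DIRECT, 0o600)
    except OSError:
        return False, False             # filesystem does not support O_DIRECT
    try:
        written = os.write(fd, buf)     # bypasses the page cache
    except OSError:
        os.close(fd)
        return False, False
    os.close(fd)
    buf.close()
    with open(path, "rb") as f:         # ordinary buffered read-back
        ok = f.read(len(payload)) == payload and written == size
    os.remove(path)
    return True, ok

supported, ok = direct_write("/tmp/direct_io_demo.bin", b"hello hpss")
print(supported, ok)
```

Disabling direct I/O removes these alignment and cache-bypass constraints and lets transfers go through the page cache, which is where the potential speedup, and the need for end-to-end data integrity checking, comes from.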

Simplify AWS tape storage gateway support

A script is now delivered to create HPSS-compatible tape volumes in AWS tape storage gateways.


Come meet with us!
2023 HUF
The 2023 HPSS User Forum (HUF) will be an in-person event scheduled October 30th through November 3rd, 2023, in Herndon, VA. This will be a great opportunity to hear from HPSS users, collaboration developers, testers, support folks, and leadership (from IBM and DOE Labs). Please contact us if you are not a customer but would like to attend.

HPSS @ SC23
The 2023 international conference for high performance computing, networking, storage and analysis (SC23) will be in Denver, CO from November 12th through 17th, 2023. As we have done each year (pre-pandemic), we are scheduling and meeting with customers via IBM Single Client Briefings. Please contact your local IBM client executive or contact us to schedule an HPSS Single Client Briefing to meet with the IBM business and technical leaders of HPSS.

HPSS @ STS 2024
The 5th Annual Storage Technology Showcase is in the planning phase, but HPSS expects to support the event. Check out their web site.

HPSS @ MSST 2024
The 38th International Conference on Massive Storage Systems and Technology will be in Santa Clara, California in May of 2024. Please contact us if you would like to meet with the IBM business and technical leaders of HPSS at Santa Clara University.

HPSS @ ISC 2024
ISC 2024 is the event for high performance computing, machine learning, and data analytics, and will be in Hamburg, Germany at the Congress Center Hamburg from May 12th through May 16th, 2024. As we have done each year (pre-pandemic), we are scheduling meetings with folks attending the conference. Please contact us to meet with the IBM business and technical leaders of HPSS.

What's New?
HPSS 10.2 Release - HPSS 10.2 was released on February 16th, 2023 and introduces six new features and numerous minor updates.

HUF 2022 - The HPSS User Forum was hosted by IBM in October 2022 at the IBM Houston Kurland building.

Celebrating 30 Years - Fall 2022 marks the 30th anniversary of the High Performance Storage System (HPSS) Collaboration.

HPSS 10.1 Release - HPSS 10.1 was released on September 30th, 2022 and introduces fourteen new features and numerous minor updates.

Lots of Data - In March 2022, IBM/HPSS delivered a storage solution to a customer in Canada, and demonstrated a sustained tape ingest rate of 33 GB/sec (2.86 PB/day peak tape ingest x 2 for dual copy), while simultaneously demonstrating a sustained tape recall rate of 24 GB/sec (2.0 PB/day peak tape recall). HPSS pushed six 18-frame IBM TS4500 tape libraries (scheduled to house over 1.6 Exabytes of tape media) to over 3,000 mounts/hour.

DOE Announces HPSS Milestone - Todd Heer, Deputy Program Lead, Advanced Simulation and Computing (ASC) Facilities, Operations, and User Support (FOUS), announced that DOE High Performance Storage Systems (HPSS) have eclipsed one exabyte in stored data.

Atos Press Release - Atos boosts Météo-France’s data storage capacity to over 1 exabyte in 2025 to improve numerical modeling and climate predictions.

Capacity Leader - ECMWF (European Center for Medium-Range Weather Forecasts) has a single HPSS namespace with over 824 PB spanning over 556 million files.

File-Count Leader - LLNL (Lawrence Livermore National Laboratory) has a single HPSS namespace with over 78 PB spanning 1.746 billion files.

Older News
  • LLNL
  • LANL
  • ORNL
  • Sandia
  • IBM
  • ANL
  • Boeing
  • BNL
  • CEA
  • CNES
  • DWD
  • HLRS
  • IU
  • IITM
  • IN2P3
  • JAXA
  • KEK
  • KIT
  • Met Office
  • Meteo France
  • Nasjonalbiblioteket
  • NOAA R&D
  • Purdue
  • SciNet
  • SSC
  • SLAC
  • UTAS
Copyright 1992 - 2021, HPSS Collaboration. All Rights Reserved.