High Performance Storage System

Incremental Scalability
Based on storage needs and deployment schedules, HPSS scales incrementally by adding compute, network, and storage resources. A single HPSS namespace can scale from petabytes to exabytes of data, from millions to billions of files, and from a few file-creates per second to thousands of file-creates per second.
That's Old News...

July 2019
Argonne Team Breaks Record for Globus Data Movement from the Summit supercomputer at Oak Ridge National Laboratory to HPSS tape.

HPSS 7.5.3 Release
HPSS 7.5.3 was released in December 2018 and introduces many new and exciting features.

HPSS Training
IBM Houston is planning to host an HPSS System Administration Course from October 1st through October 4th, 2019. Are you interested in attending? Learn more.

IBM TS1160
On November 20, 2018, IBM announced the TS1160, its new enterprise tape technology supporting 20 TB of native cartridge capacity and 400 MB/s of native bandwidth. Learn more.

Best of Breed for Tape Feature - Library efficiency
Improvements in HPSS 7.5.2 and 7.5.3 raise tape library efficiency to 99% on both IBM and Spectra Logic tape libraries.

HPSS Vendor Partnership Grows
HPSS began Quantum Scalar i6000 tape library testing in 2018. Other HPSS tape vendor partners include IBM, Oracle, and Spectra Logic.

SwiftOnHPSS
Leverage OpenStack Swift to provide an object interface to data in HPSS. Directories of files and containers of objects can be accessed and shared across ALL interfaces with this OpenStack Swift Object Server implementation - Contact Us for more information, or Download Now.
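
For illustration only, here is a minimal sketch of how a client might store and retrieve an object through an OpenStack Swift endpoint such as one backed by SwiftOnHPSS, using the standard python-swiftclient library. The endpoint URL, credentials, container, and object names are placeholders, and SwiftOnHPSS server-side configuration is not covered here.

    # Hypothetical client-side example: the auth endpoint, credentials,
    # and names below are placeholders, not a real SwiftOnHPSS deployment.
    from swiftclient.client import Connection

    conn = Connection(
        authurl="https://swift.example.org/auth/v1.0",  # placeholder endpoint
        user="account:user",                            # placeholder credentials
        key="secret",
    )

    # Create a container and upload a file as an object.
    conn.put_container("climate-data")
    with open("model-output.nc", "rb") as f:
        conn.put_object("climate-data", "runs/2019/model-output.nc", contents=f)

    # Read the object back; with SwiftOnHPSS the same data is also
    # reachable through HPSS's other interfaces.
    headers, data = conn.get_object("climate-data", "runs/2019/model-output.nc")
    print(headers.get("content-length"), "bytes retrieved")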

Best of Breed for Tape Feature - RAIT
Throughout 2018, Oak Ridge National Laboratory cut its redundant-tape cost estimates by 75% with 4+P HPSS RAIT (tape striping with rotating parity) and enjoyed large-file tape transfers beyond 1 GB/s. The savings follow from the layout: one parity volume per four data volumes adds 25% media overhead, versus the 100% overhead of keeping a full second copy.
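
To make the parity arithmetic concrete, here is a minimal sketch of rotating-parity striping in the 4+P style. It is written from first principles to illustrate the general technique, not taken from HPSS's actual RAIT implementation; the names and block sizes are illustrative.

    # Illustrative sketch of 4+P rotating-parity striping (not HPSS code):
    # each stripe writes four data blocks plus one XOR parity block, the
    # parity position rotates across the five volumes, and any single lost
    # volume can be rebuilt from the remaining four.
    from functools import reduce

    NUM_DATA = 4             # data blocks per stripe (the "4" in 4+P)
    NUM_VOLS = NUM_DATA + 1  # plus one parity volume

    def xor_blocks(blocks):
        """XOR equal-length byte blocks together."""
        return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

    def stripe(data_blocks, stripe_index):
        """Place 4 data blocks plus parity across 5 volumes, rotating parity."""
        parity = xor_blocks(data_blocks)
        parity_vol = stripe_index % NUM_VOLS  # parity position rotates per stripe
        layout, it = [], iter(data_blocks)
        for vol in range(NUM_VOLS):
            layout.append(parity if vol == parity_vol else next(it))
        return layout

    def recover(layout, lost_vol):
        """Rebuild the block on a lost volume by XORing the survivors."""
        return xor_blocks([b for v, b in enumerate(layout) if v != lost_vol])

    blocks = [bytes([i]) * 8 for i in range(NUM_DATA)]
    vols = stripe(blocks, stripe_index=3)
    assert recover(vols, lost_vol=1) == vols[1]  # single-volume loss is repairable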

Come meet with us!
HPSS @ MSST 2020
The 35th International Conference on Massive Storage Systems and Technology will be in Santa Clara, California in May of 2020 - Learn More. Please contact us if you would like to meet with the IBM business and technical leaders of HPSS at Santa Clara University.

HPSS @ ISC20
The 2020 international conference for high performance computing, networking, and storage will be in Frankfurt, Germany from June 21st through 25th, 2020 - Learn More. Come visit the HPSS folks at the IBM booth and contact us if you would like to meet with the IBM business and technical leaders of HPSS in Frankfurt.

2020 HUF
The 2020 HPSS User Forum (HUF) is being hosted by Karlsruhe Institute of Technology (KIT) in Karlsruhe, Germany from September 7th through September 11th, 2020. This is a great place to meet HPSS users, collaboration developers and testers (from IBM and DOE Labs), support folks, and leadership. More details coming soon.

HPSS @ SC20
The 2020 international conference for high performance computing, networking, storage and analysis will be in Atlanta, Georgia from November 16th through 19th, 2020 - Learn More. Come visit the HPSS folks at the IBM booth and contact us if you would like to meet with the IBM business and technical leaders of HPSS in Atlanta.

What's New?
HPSS 8.2 Release - HPSS 8.2 was released on December 6th, 2019 and introduces a few new features.

New Globus DSI - Version 2.9 of the HPSS DSI is now available from the GitHub release page. It provides the capability to resume interrupted Globus transfers.

Lots Of Data - In November 2019 IBM/HPSS delivered a system to Shared Services Canada (SSC) for Environment Canada and demonstrated a sustained tape ingest rate of 11,574 MB/sec (1 PB/day peak tape ingest) while simultaneously demonstrating a sustained tape recall rate of 8,832 MB/sec (791 TB/day peak tape recall). HPSS pushed four 13-frame IBM TS4500 tape libraries (scheduled to house over 500 PB of tape media) to 2,168 mounts/hour.
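
As a rough sanity check on the unit conversion (assuming decimal units, with 1 PB = 10^9 MB), a day of sustained ingest at that rate comes out to about 1 PB:

    # Convert the sustained ingest rate above to a per-day volume,
    # assuming decimal units (1 PB = 1e9 MB).
    SECONDS_PER_DAY = 86_400
    sustained_mb_per_s = 11_574

    pb_per_day = sustained_mb_per_s * SECONDS_PER_DAY / 1e9
    print(f"{pb_per_day:.2f} PB/day")  # prints "1.00 PB/day"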

HPSS 8.1 Release - HPSS 8.1 was released on October 1st, 2019 and introduces a few new features.

Capacity Leader - ECMWF (European Centre for Medium-Range Weather Forecasts) has a single HPSS namespace with 451 PB spanning 312 million files.

File-Count Leader - LLNL (Lawrence Livermore National Laboratory) has a single HPSS namespace with 57 PB spanning 1.414 billion files.

Explosive data growth - HPSS Collaboration leadership from Lawrence Berkeley National Laboratory's National Energy Research Scientific Computing Center (NERSC) helped author the "NERSC Storage 2020" report, and NERSC trusts HPSS to meet its immediate and long-term data storage challenges.

Older News - Want to read more?
Copyright 1992 - 2020, HPSS Collaboration. All Rights Reserved.