As architects continue to exploit hierarchical storage systems to scale critical data stores beyond a petabyte (10¹⁵ bytes) toward an exabyte (1,000 petabytes), there is an equally critical need to deploy high-performance, reliable, and scalable hierarchical storage management (HSM).
Who developed HPSS?
HPSS is the result of over a decade of collaboration among five Department of Energy laboratories and IBM, with significant contributions by universities and other laboratories worldwide.
What do HPSS users store?
HPSS provides storage management for a diverse set of digital library, science, engineering, and defense applications, and safeguards a wide range of data from fields including nanotechnology, genomics...
Who has a petabyte or more?
Many HPSS Collaboration member sites have accumulated a petabyte or more of data in a single HPSS file system. Some have passed fifteen petabytes and are heading for twenty.
What is High Performance Storage System?
HPSS is software that manages petabytes of data on disk and robotic tape libraries. HPSS provides highly flexible and scalable hierarchical storage management that keeps recently used data on disk and less recently used data on tape. HPSS uses cluster, LAN and/or SAN technology to aggregate the capacity and performance of many computers, disks, and tape drives into a single virtual file system of exceptional size and versatility. This approach enables HPSS to easily meet otherwise unachievable demands of total storage capacity, file sizes, data rates, and number of objects stored.
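To make the hierarchy concrete, here is a minimal Python sketch of the disk-to-tape idea described above, assuming a simple "migrate files idle for more than 30 days" rule. The File and migrate names and the threshold are illustrative assumptions, not HPSS code; real HPSS migration is governed by configurable storage classes, hierarchies, and migration and purge policies.

```python
# Minimal sketch of the hierarchical storage idea: recently used data
# stays on the disk tier, and data idle past a threshold moves to tape.
# All names and the 30-day threshold are hypothetical, not HPSS APIs.
import time
from dataclasses import dataclass

DAY = 86_400  # seconds

@dataclass
class File:
    name: str
    last_access: float   # POSIX timestamp of the last read or write
    tier: str = "disk"   # "disk" or "tape"

def migrate(files: list[File], max_idle_days: float = 30.0) -> None:
    """Move files idle longer than max_idle_days from disk to tape."""
    cutoff = time.time() - max_idle_days * DAY
    for f in files:
        if f.tier == "disk" and f.last_access < cutoff:
            f.tier = "tape"  # in a real system this is actual data movement

if __name__ == "__main__":
    now = time.time()
    cache = [
        File("fresh_results.h5", last_access=now - 2 * DAY),
        File("old_simulation.nc", last_access=now - 90 * DAY),
    ]
    migrate(cache)
    for f in cache:
        print(f"{f.name}: {f.tier}")  # fresh stays on disk; old goes to tape
```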
|2017 HUF - The 2017 HPSS User Forum will be hosted by KEK, the High Energy Accelerator Research Organization (Kō Enerugī Kasokuki Kenkyū Kikō), in Tsukuba, Japan from October 16th through October 20th, 2017.|
|HPSS @ SC16 - SC16 is the 2016 international conference for high performance computing, networking, storage, and analysis, held in Salt Lake City, Utah from November 14th through 17th. Come visit the HPSS folks at the IBM booth and schedule an HPSS briefing at the IBM Executive Briefing Center.|
|2016 HUF - The 2016 HPSS User Forum will be hosted by Brookhaven National Laboratory in Upton, New York from August 29th through September 2nd.|
|HPSS @ ISC16 - ISC16 is the 2016 International Supercomputing Conference for high performance computing, networking, storage, and analysis, held in Frankfurt, Germany from June 20th through 22nd. Come visit the HPSS folks at the IBM booth and schedule an HPSS briefing at the IBM Executive Briefing Center.|
|Swift On HPSS - Leverage OpenStack Swift to provide an object interface to data in HPSS. Directories of files and containers of objects can be accessed and shared across ALL interfaces with this OpenStack Swift Object Server implementation (see the usage sketch after this list) - Contact Us for more information, or Download Now.|
|Capacity Leader - ECMWF (European Centre for Medium-Range Weather Forecasts) has a single HPSS namespace with 216 PB spanning 257 million files.|
|File-Count Leader - LLNL (Lawrence Livermore National Laboratory) has a single HPSS namespace with 62 PB spanning 940 million files.|
|ORNL - Oak Ridge National Laboratory cut its redundant-tape cost estimates by 75% with 4+P HPSS RAIT (see the parity sketch after this list) and enjoys large-file tape transfers reaching 872 MB/s.|
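As mentioned in the Swift On HPSS item above, data in HPSS can be reached through the standard OpenStack Swift object API. Below is a hedged sketch using the python-swiftclient library; the endpoint URL, credentials, container name, and file name are placeholders, and a real deployment's authentication scheme may differ.

```python
# Sketch: storing an object via an OpenStack Swift endpoint, as exposed
# by a Swift On HPSS deployment. Endpoint, credentials, and names below
# are placeholders; consult your site's deployment for real values.
from swiftclient.client import Connection  # pip install python-swiftclient

conn = Connection(
    authurl="https://swift.example.org/auth/v1.0",  # hypothetical endpoint
    user="account:user",                            # hypothetical credentials
    key="secret",
)

conn.put_container("climate-runs")  # a container corresponds to a directory
with open("old_simulation.nc", "rb") as data:
    conn.put_object("climate-runs", "old_simulation.nc", contents=data)
```

Because the object store and the file system share the same HPSS namespace, the uploaded object can then be accessed and shared through HPSS's other interfaces as well.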
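The 75% figure in the ORNL item follows from the arithmetic of 4+P RAIT (Redundant Array of Independent Tapes): protecting four data tapes with a full second copy costs four extra tapes, while one parity stripe costs a single tape, one quarter of the dual-copy overhead. The XOR sketch below illustrates the single-parity idea; it is a simplification that assumes equal-length stripes, and actual HPSS RAIT supports a range of stripe widths and parity counts.

```python
# Why 4+P cuts redundant-tape cost by 75% versus dual copy: 1 parity
# stripe replaces 4 duplicate tapes. Simplified single-parity XOR sketch.
from functools import reduce

def xor_parity(stripes: list[bytes]) -> bytes:
    """Byte-wise XOR of equal-length data stripes (the 'P' in 4+P)."""
    return bytes(reduce(lambda a, b: a ^ b, block) for block in zip(*stripes))

def recover(surviving: list[bytes], parity: bytes) -> bytes:
    """Rebuild one lost stripe from the survivors plus the parity stripe."""
    return xor_parity(surviving + [parity])

if __name__ == "__main__":
    data = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]     # 4 data stripes
    parity = xor_parity(data)                       # 1 parity stripe
    rebuilt = recover(data[:2] + data[3:], parity)  # lose the third stripe
    assert rebuilt == data[2]
    print("recovered:", rebuilt)
```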