Nswap2L Project

Nswap2L and Nswap2L-FS:   Transparent, Fast, Adaptable, Heterogeneous Backing Store

Nswap:   Network RAM for Linux Clusters

Nswap and Nswap2L are backing storage devices designed for general-purpose Linux clusters. Nswap is a Network RAM system designed to scale to large clusters. It provides a block device interface to Network RAM that can be added as a swap device on individual cluster nodes. Nswap2L/Nswap2L-FS is a virtualization layer on top of a heterogeneous collection of cluster storage devices, including Nswap Network RAM, flash SSD, disk, or any other cluster-wide storage. It presents a single block device interface to the Linux kernel and transparently implements data placement, migration, and prefetching between the underlying storage devices it manages. Nswap and Nswap2L can be added to individual cluster nodes as a swap partition or as a partition for temporary file storage. We are currently investigating extensions and other uses for Nswap and Nswap2L cluster-wide backing store. Nswap is currently part of Nswap2L.

Nswap2L and Nswap2L-FS

Nswap2L is a virtualization layer on top of a heterogeneous collection of storage devices found in clusters, including Nswap Network RAM, disk, flash SSD, and other local or network storage devices. Nswap2L implements a two-level device driver interface. At the top level, it appears to node operating systems (OSs) as a single, fast, random-access device that can be added as a swap or local file partition on cluster nodes. It transparently manages an underlying set of heterogeneous storage devices to which swapped-out data, or file system data, are stored. Internally, it implements data placement, migration, and prefetching policies that choose which underlying physical devices store data. Its policies incorporate information about device capacity, system load, and the strengths of different physical storage media. By moving device-specific knowledge into Nswap2L, VM policies and filesystem implementations in the OS can be based solely on typical application access patterns and not on characteristics of underlying physical storage media. Nswap2L's policy decisions are abstracted from the OS, freeing the OS from having to implement specialized policies for different combinations of cluster storage media.
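A minimal sketch of the kind of placement decision described above, written in Python with invented device names, latencies, and capacities (Nswap2L itself is a Linux kernel module written in C, and its real policies are more sophisticated): each block write is routed to the fastest underlying device that still has free capacity.

```python
# Toy model of Nswap2L-style data placement across heterogeneous devices.
# Device names, latencies, and capacities below are illustrative only.

class Device:
    def __init__(self, name, latency_us, capacity_blocks):
        self.name = name
        self.latency_us = latency_us   # lower is faster
        self.free = capacity_blocks    # free blocks remaining

class PlacementPolicy:
    """Route each block write to the fastest device with free space."""
    def __init__(self, devices):
        self.devices = sorted(devices, key=lambda d: d.latency_us)

    def place(self, block_id):
        for dev in self.devices:
            if dev.free > 0:
                dev.free -= 1
                return dev.name
        raise RuntimeError("all backing devices are full")

devices = [Device("network_ram", 30, 2),
           Device("ssd", 80, 4),
           Device("disk", 5000, 1000)]
policy = PlacementPolicy(devices)
placements = [policy.place(b) for b in range(8)]
# Network RAM fills first, then SSD, then disk.
```

A real policy would also weigh current system load and migrate or prefetch data between devices after initial placement, as the paragraph above notes.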

For more information see:

  • Nswap2L-FS for backing filesystems: our IEEE HCW'17 paper
  • Nswap2L for backing swap: our ACM MemSys'16, and IEEE Cluster'11 papers

Nswap Adaptable Scalable Network RAM

Note: Nswap is now part of Nswap2L (see our MemSys'16 paper for more details).

Nswap is a Network RAM system for general purpose Linux clusters and networked systems. Cluster applications that process large amounts of data, such as parallel scientific or multimedia applications, are likely to cause disk swapping on individual cluster nodes. These applications will perform better on clusters with network RAM support. Network RAM allows any cluster node with over-committed memory to use the idle memory of remote nodes for its backing store and to "swap" its pages over the network. As the disparity between network speeds and disk speeds continues to grow, swapping pages over the network to store in the idle RAM of remote nodes will be increasingly faster than traditional swapping to local disk. Experimental results show Nswap significantly outperforms swapping to local disk or flash SSD.
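The core idea can be illustrated with a toy model (Python, with invented names and structures; the real Nswap client and server are kernel code exchanging pages over the cluster network): a node with over-committed memory stores a swapped-out page in the idle RAM of a remote peer, and fetches it back on a page fault.

```python
# Toy illustration of network RAM: pages are swapped to a remote peer's
# idle memory instead of to local disk. All names here are invented.

class RemoteServer:
    """A peer node caching other nodes' swapped-out pages in idle RAM."""
    def __init__(self, idle_pages):
        self.idle_pages = idle_pages
        self.cache = {}                # (client, page_no) -> page data

    def store(self, client, page_no, data):
        if len(self.cache) >= self.idle_pages:
            return False               # no idle RAM left; try another peer
        self.cache[(client, page_no)] = data
        return True

    def fetch(self, client, page_no):
        return self.cache[(client, page_no)]

server = RemoteServer(idle_pages=128)
server.store("nodeA", 7, b"page contents")
restored = server.fetch("nodeA", 7)
```

When a store is refused, an Nswap-style client would pick a different peer; this peer-to-peer choice is described in the feature list below.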

Some key features of Nswap:

  • Scalable: Nswap uses a pure peer-to-peer design (vs. a centralized system) that scales to large clusters. Individual nodes independently choose the remote nodes to which they swap out pages, and each node independently manages its own cache of other nodes' remotely swapped page data.
  • Adaptable: A novel feature of Nswap is its adaptability to changes in a node's memory load; when a node needs more memory for its local processes, it acts as an Nswap client swapping its pages over the network, and when a node has idle RAM space it acts as an Nswap server caching other nodes' swapped pages. Nswap supports migration of remotely swapped pages between the servers storing them, and it dynamically grows and shrinks the size of each node's Nswap Cache (the amount of RAM currently allocated for storing remotely cached pages) in response to a node's local memory needs.
  • Transparent: Nswap is implemented as a loadable kernel module that runs entirely in kernel space on an unmodified Linux kernel. When added as a swap (or file) partition on cluster nodes, it transparently and efficiently provides network RAM to cluster applications.
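The adaptive cache sizing described above can be sketched as follows (Python, with invented thresholds and page counts; the real policy lives in the kernel module and reacts to actual memory pressure): the Nswap Cache shrinks when local processes need RAM and grows when RAM sits idle.

```python
# Toy sketch of Nswap's adaptive Nswap Cache sizing. Thresholds, step
# size, and page counts are invented for illustration.

def resize_nswap_cache(cache_pages, free_local_pages,
                       low_water=256, high_water=1024, step=64):
    """Shrink the cache under local memory pressure; grow it when idle.

    Returns the new Nswap Cache size in pages.
    """
    if free_local_pages < low_water:       # local processes need RAM
        return max(0, cache_pages - step)  # release cached remote pages
    if free_local_pages > high_water:      # RAM is sitting idle
        return cache_pages + step          # offer more to remote peers
    return cache_pages                     # steady state

shrunk = resize_nswap_cache(512, free_local_pages=100)    # pressure -> 448
grown = resize_nswap_cache(512, free_local_pages=2000)    # idle -> 576
steady = resize_nswap_cache(512, free_local_pages=500)    # unchanged -> 512
```

Shrinking the cache implies migrating or writing back the remotely cached pages it held, which is why Nswap also supports migration of swapped pages between servers.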


For more information see:
  • Nswap Network RAM: our EuroPar'03 paper
  • Reliable Network RAM: our IEEE Cluster'08 paper

Publications

Papers:

Posters:

  • "Nswap as a Fast and Adaptable Replacement Swap System"
    Doug Woos, Tia Newhall
    25th Annual Consortium for Computing Sciences in Colleges Eastern Conference, Student Poster Session, Villanova, PA, October 2009.
  • "Speeding up Computation with a Filesystem of Network RAM"
    Colin Schimmelfing, Tia Newhall
    25th Annual Consortium for Computing Sciences in Colleges Eastern Conference, Student Poster Session, Villanova, PA, October 2009.
  • "Reliability for Nswap"
    Jenny Barry, America Holloway, Heather Jones, Advisor: Tia Newhall
    Tenth Annual Consortium for Computing Sciences in Colleges Northeastern Conference, Student Poster Session, Providence, RI, April 2005.
  • "Reliability Algorithms for Network Swapping Systems with Page Migration"
    Benjamin Mitchell, Julian Rosse, Tia Newhall
    Poster Session of 2004 IEEE International Conference on Cluster Computing, September 2004.
  • "The Nswap Module for Network Swap"
    Sean Finney, Kuzman Ganchev, Matti Klock, Michael Spiegel, Advisor: Tia Newhall
    Eighth Annual Consortium for Computing Sciences in Colleges Northeastern Conference, Student Poster Session, Providence, RI, April 2003. (poster.pdf, abstract.pdf)

Other Documentation:

Project Members

Students past and present:

  Kei Imada'20
  Liam Packer'20
  Ryerson Lehman-Borer'16
  Ben Marks'16
  Alec Pillsbury'16
  Sam White'14
  Greg Taschuk'13
  Doug Woos'11
  Colin Schimmelfing'10
  Joel Tolliver'10
  Alexandr Pshenichkin'07
  Dan Amato'07
  Jenny Barry'07
  Heather Jones'06
  America Holloway'05
  Ben Mitchell'05
  Julian Rosse'04
  Sean Finney'03
  Kuzman Ganchev'03
  Michael Spiegel'03
  Matti Klock'03
  Gabriel Rosenkoetter'02
  Rafael Hinojosa'02

Faculty: Prof. Tia Newhall


Photos:

  Ryerson, Tia, and Ben at MemSys'16
  Kuzman and Sean at CCSCNE'03
  Jenny and America at CCSCNE'05
  Alex and Dan at the Sigma Xi poster session, 2006