λFS is an elastic, scalable, and high-performance metadata service for large-scale distributed file systems (DFSes). λFS scales a DFS metadata cache elastically on a FaaS (Function-as-a-Service) platform and synthesizes a series of techniques to overcome the obstacles that are encountered when building large, stateful, and performance-sensitive applications on FaaS platforms. λFS takes full advantage of the unique benefits offered by FaaS (elastic scaling and massive parallelism) to realize a highly-optimized metadata service capable of sustaining up to 4.13x higher throughput than a state-of-the-art DFS.
λFS is implemented as a fork of HopsFS 3.2.0.3. We currently support the OpenWhisk serverless platform, though we've successfully used Nuclio as well. Serverless functions enable better scalability and cost-effectiveness, as well as ease of use.
NOTE: For the most accurate, detailed, and up-to-date instructions, please see the setup.md file available here, or its shortened "TL;DR" version available here.
- λFS Benchmarking Software: ds2/LambdaFS-Benchmarking
- λFS Kubernetes Deployment Repository: ds2/openwhisk-deploy-kube
- λFS Java Serverless Function Runtime: ds2/openwhisk-runtime-java
The exact software used to build λFS is as follows:
- OpenJDK "1.8.0_382"
- OpenJDK Runtime Environment (build 1.8.0_382-8u382-ga-1~22.04.1-b05)
- OpenJDK 64-Bit Server VM (build 25.382-b05, mixed mode)
- Apache Maven 3.6.3
- Google Protocol Buffer Version 2.5
- MySQL Cluster NDB native client library
- Kubernetes
- kubectl
  - Client Version: v1.26.1
  - Kustomize Version: v4.5.7
  - Server Version: v1.24.16-eks-2d98532
- Helm v3.11.0
  - Git commit: 472c5736ab01133de504a826bd9ee12cbe4e7904
  - Go version: 1.18.10
- Docker
  - Docker Engine Client - Community
    - Version: 20.10.23
    - API version: 1.41
    - Go version: 1.18.10
    - Git commit: 7155243
  - Docker Engine Server - Community
    - Version: 20.10.23
    - API version: 1.41 (minimum version 1.12)
    - Go version: 1.18.10
  - containerd:
    - Version: 1.6.15
  - runc:
    - Version: 1.1.4
  - docker-init:
    - Version: 0.19.0
The operating system used during development and testing was Ubuntu 22.04.1 LTS.
We use Apache- and GPL-licensed code from HopsFS and MySQL Cluster. We make use of the same DAL API (similar to JDBC) provided by HopsFS (although it has been extended to support the functionality required by λFS). We dynamically link the DAL implementation for MySQL Cluster NDB with the λFS code.
Perform the following steps, in order, to compile λFS or the associated HopsFS fork (found in the `3.2.0.2-caching` branch):
The following steps can be performed to create a virtual machine capable of building and driving λFS. These steps have been tested using fresh virtual machines on Google Cloud Platform (GCP), IBM Cloud, and most recently Amazon Web Services (AWS). On GCP, the virtual machine ran Ubuntu 18.04.5 LTS (Bionic Beaver). On IBM Cloud, it ran Ubuntu 18.04.6 LTS (Bionic Beaver). On AWS, it ran Ubuntu 22.04.1 LTS. As such, Ubuntu 22.04.1 LTS is the recommended operating system for running λFS; there may be issues when using any other OS, including the Ubuntu versions used on GCP and IBM Cloud.
Execute the following commands to install JDK 8. These are the commands we executed when developing λFS.
```bash
sudo apt-get purge openjdk*
sudo apt-get install software-properties-common
sudo add-apt-repository ppa:webupd8team/java
sudo apt-get update
sudo apt install openjdk-8-jre-headless
sudo apt-get install openjdk-8-jdk
```
See this AskUbuntu thread for details on why these commands are used.
The exact versions of the JRE and JDK that we used are:
```
$ java -version
openjdk version "1.8.0_292"
OpenJDK Runtime Environment (build 1.8.0_292-8u292-b10-0ubuntu1~18.04-b10)
OpenJDK 64-Bit Server VM (build 25.292-b10, mixed mode)
```
Now you need to set your `JAVA_HOME` environment variable. You can do this for your current session via `export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64/` (though you should verify that this is the directory to which Java was installed). To make this permanent, add that command to the bottom of your `~/.bashrc` file.
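For example, assuming Java was installed to the default path shown above:

```bash
# Set JAVA_HOME for the current session:
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64/
# Persist it across sessions by appending the same line to ~/.bashrc:
echo 'export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64/' >> ~/.bashrc
```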
```bash
sudo apt-get -y install maven
sudo apt-get -y install build-essential autoconf automake libtool cmake zlib1g-dev pkg-config libssl-dev libsasl2-dev
```
```bash
cd /usr/local/src/
sudo wget https://github.com/google/protobuf/releases/download/v2.5.0/protobuf-2.5.0.tar.gz
sudo tar xvf protobuf-2.5.0.tar.gz
cd protobuf-2.5.0
sudo ./autogen.sh
sudo ./configure --prefix=/usr
sudo make
sudo make install
protoc --version
```
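If the build and installation succeeded, the final command should report the installed version:

```
$ protoc --version
libprotoc 2.5.0
```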
See this StackOverflow post for details on why these commands are used.
Bats is used when building `hadoop-common-project`, more specifically the `hadoop-common` module. It can be installed with `sudo apt-get install bats`.
The following libraries are all optional. We installed them when developing λFS.
- Snappy compression: `sudo apt-get install snappy libsnappy-dev`
- Bzip2: `sudo apt-get install bzip2 libbz2-dev`
- Jansson (C library for JSON): `sudo apt-get install libjansson-dev`
- Linux FUSE: `sudo apt-get install fuse libfuse-dev`
- ZStandard compression: `sudo apt-get install zstd`
The `hops-metadata-dal` and `hops-metadata-dal-impl-ndb` layers are included in this base project and should not be retrieved separately. The custom `libndbclient.so` file, however, should be retrieved separately.
```bash
git clone https://github.com/hopshadoop/hops
cd hops
git checkout serverless-namenode-aws
```
There are two ways to configure the NDB data access layer driver:

- Hard-coding the database configuration parameters: while compiling the data access layer, all of the required configuration parameters can be written to the `./hops-metadata-dal-impl-ndb/src/main/resources/ndb-config.properties` file. When the driver is loaded, it will try to connect to the database specified in the configuration file.
- The `hdfs-site.xml` configuration file: add the `dfs.storage.driver.configfile` parameter to `hdfs-site.xml` to read the configuration file from a specified path. For example, to read the configuration file from the current directory, add the following to `hdfs-site.xml`:

```xml
<property>
  <name>dfs.storage.driver.configfile</name>
  <value>hops-ndb-config.properties</value>
</property>
```
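For reference, the properties file follows the standard ClusterJ convention. A minimal illustrative sketch (the connect string and database name below are placeholder values; confirm the exact keys against the template shipped in `hops-metadata-dal-impl-ndb/src/main/resources/ndb-config.properties`):

```properties
# Placeholder values -- adjust to match your NDB deployment.
com.mysql.clusterj.connectstring=127.0.0.1:1186
com.mysql.clusterj.database=hops_db
```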
The `aws-setup/create_aws_infrastructure.py` script can be used to automatically create the MySQL NDB cluster. If you wish to create the cluster manually, then we recommend following the official documentation (located here) to install and create your MySQL NDB cluster. Once your cluster is up and running, you can move on to creating the necessary database tables to run λFS. We used the "Generic Linux" version of MySQL Cluster v8.0.26.
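For orientation only, bringing up a minimal NDB cluster generally involves starting the management node, then the data nodes, then the SQL node. An illustrative sketch (the paths and configuration files are placeholders; follow the official documentation for the actual setup):

```bash
# Start the NDB management node (reads the cluster topology from config.ini):
ndb_mgmd -f /var/lib/mysql-cluster/config.ini
# Start each data node, pointing it at the management node (default port 1186):
ndbd --ndb-connectstring=localhost:1186
# Finally, start mysqld (the SQL node) with NDB support enabled; cluster
# status can then be checked with: ndb_mgm -e show
```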
There are several pre-written `.sql` files and associated scripts in the `hops-metadata-dal-impl-ndb` project. These files automate the process of creating the necessary database tables. Simply navigate to the `/hops-metadata-dal-impl-ndb/schema/` directory and execute the `create-tables.sh` script. This script takes several arguments. The first is the hostname (IP address) of the MySQL server. We recommend running this script from the VM hosting the MySQL instance, in which case the first parameter will be `localhost`. The second parameter is the port, which is `3306` by default for MySQL. The third and fourth parameters are the username and password of a MySQL user with which the MySQL commands will be executed.
You can create a root user for use with the NameNodes and this table-creation process as follows. First, connect to your MySQL server with `mysql -u root`. This assumes you are connecting from the VM hosting the server and that you installed MySQL Cluster using the `--initialize-insecure` flag, as described here (step 3).
```sql
CREATE USER 'user'@'%' IDENTIFIED BY '<password>';
GRANT ALL PRIVILEGES ON *.* TO 'user'@'%';
FLUSH PRIVILEGES;
```
Note that this is highly insecure and is not recommended for production environments. Once the user is created, you can pass the username "user" and whatever password you used to the `create-tables.sh` script.
Finally, the last parameter to the script is the database. You can create a new database by connecting to the MySQL server and executing `CREATE DATABASE <database_name>`. Make sure you update the HopsFS configuration files (i.e., the `hops-ndb-config.properties` file described in the previous section) to reflect the new user and database.
Thus, to run the script, you would execute `./create-tables.sh localhost 3306 <username> <password> <database_name>`. After this, you should also create the additional tables required by λFS. These are written out in the `serverless.sql` file. Simply execute the following command to do this: `mysql --host=localhost --port=3306 -u <username> -p<password> <database_name> < serverless.sql`. Notice that there is no space between the `-p` (password) flag and the password itself.
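Putting it all together, a complete run with hypothetical credentials (`user` / `hops_password`) and a hypothetical database name (`hops_db`) would look like:

```bash
cd hops-metadata-dal-impl-ndb/schema/
# Create the HopsFS tables:
./create-tables.sh localhost 3306 user hops_password hops_db
# Create the additional tables required by λFS:
mysql --host=localhost --port=3306 -u user -phops_password hops_db < serverless.sql
```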
- If at some point you get an error that the `.pom` or `.jar` for `hadoop-maven-plugins` could not be found, go to the `/hadoop-maven-plugins` directory and execute `mvn clean install` to ensure that it gets built and is available in your local Maven repository.
- If you get dependency convergence errors from `maven-enforcer-plugin` about the `hops-metadata-dal` and `hops-metadata-dal-impl-ndb` projects, then you may be able to resolve them by building these two projects individually, one at a time, as shown below. Start with `hops-metadata-dal` and build it with `mvn clean install` (go to the root directory for that module and execute that command). Then do the same for `hops-metadata-dal-impl-ndb`.
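Concretely, the one-at-a-time build might look like this (assuming both module directories sit at the repository root):

```bash
# Build and install the DAL API into the local Maven repository first:
cd hops-metadata-dal && mvn clean install && cd ..
# Then build the NDB implementation, which depends on the DAL API:
cd hops-metadata-dal-impl-ndb && mvn clean install && cd ..
```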
We provide a separate repository in which we've pre-configured an OpenWhisk deployment specifically for use with λFS. This repository is available here.
Simply clone the repository and navigate to the openwhisk-deploy-kube/helm/openwhisk
directory. Then execute the following command:
```bash
helm install owdev -f values.yaml .
```
This will install the OpenWhisk deployment on your Kubernetes cluster. Once everything is up and running, set the `apiHostName` property according to the Helm output. (When you execute the above command, Helm will tell you what to set the `apiHostName` property to.)
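For example, in the stock `openwhisk-deploy-kube` chart this property lives under `whisk.ingress` in `values.yaml` (the host and port below are placeholders; use the values reported by Helm):

```yaml
whisk:
  ingress:
    apiHostName: 10.0.0.1   # placeholder -- use the value from the Helm output
    apiHostPort: 31001      # placeholder
```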
You can modify the deployment by changing the configuration parameters in the `openwhisk-deploy-kube/helm/openwhisk/values.yaml` file. After modifying the values, execute the following command (from the `openwhisk-deploy-kube/helm/openwhisk` directory):
```bash
helm upgrade owdev -f values.yaml .
```
We provide a separate repository containing a pre-configured OpenWhisk Docker image. This repository is available here.
Simply clone the repository. From the top-level directory, execute `./gradlew core:java8:distDocker` to build the image.
This distribution includes cryptographic software. The country in which you currently reside may have restrictions on the import, possession, use, and/or re-export to another country, of encryption software. BEFORE using any encryption software, please check your country's laws, regulations and policies concerning the import, possession, or use, and re-export of encryption software, to see if this is permitted. See http://www.wassenaar.org/ for more information.
The U.S. Government Department of Commerce, Bureau of Industry and Security (BIS), has classified this software as Export Commodity Control Number (ECCN) 5D002.C.1, which includes information security software using or performing cryptographic functions with asymmetric algorithms. The form and manner of this Apache Software Foundation distribution makes it eligible for export under the License Exception ENC Technology Software Unrestricted (TSU) exception (see the BIS Export Administration Regulations, Section 740.13) for both object code and source code.
The following provides more details on the included cryptographic software: Hadoop Core uses the SSL libraries from the Jetty project written by mortbay.org.
This project uses the YourKit Java profiler.
YourKit supports open source projects with innovative and intelligent tools for monitoring and profiling Java and .NET applications. YourKit is the creator of YourKit Java Profiler, YourKit .NET Profiler, and YourKit YouMonitor.
Hops is released under an Apache 2.0 license.
This software was the subject of the paper, λFS: A Scalable and Elastic Distributed File System Metadata Service using Serverless Functions.
This paper can be found here on arXiv and here in the proceedings of ASPLOS'23. The software found here was used to evaluate λFS and HopsFS for the paper.
BibTeX citation:
@inproceedings{asplos23_lambdafs,
author = {Carver, Benjamin and Han, Runzhou and Zhang, Jingyuan and Zheng, Mai and Cheng, Yue},
title = {λFS: A Scalable and Elastic Distributed File System Metadata Service using Serverless Functions},
year = {2024},
isbn = {9798400703942},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3623278.3624765},
doi = {10.1145/3623278.3624765},
abstract = {The metadata service (MDS) sits on the critical path for distributed file system (DFS) operations, and therefore it is key to the overall performance of a large-scale DFS. Common "serverful" MDS architectures, such as a single server or cluster of servers, have a significant shortcoming: either they are not scalable, or they make it difficult to achieve an optimal balance of performance, resource utilization, and cost. A modern MDS requires a novel architecture that addresses this shortcoming. To this end, we design and implement λFS, an elastic, high-performance metadata service for large-scale DFSes. λFS scales a DFS metadata cache elastically on a FaaS (Function-as-a-Service) platform and synthesizes a series of techniques to overcome the obstacles that are encountered when building large, stateful, and performance-sensitive applications on FaaS platforms. λFS takes full advantage of the unique benefits offered by FaaS---elastic scaling and massive parallelism---to realize a highly-optimized metadata service capable of sustaining up to 4.13X higher throughput, 90.40\% lower latency, 85.99\% lower cost, 3.33X better performance-per-cost, and better resource utilization and efficiency than a state-of-the-art DFS for an industrial workload.},
booktitle = {Proceedings of the 28th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, Volume 4},
pages = {394–411},
numpages = {18},
location = {Vancouver, BC, Canada},
series = {ASPLOS '23}
}