To build, install, and run Hyperledger Avalon, a number of additional components must be installed and configured. The following instructions guide you through the installation and build process.
If you have not done so already, clone the Avalon source repository. Choose whether you want the stable version (recommended) or the most recent version:
- To use the current stable branch (recommended), run this command:
  git clone https://github.com/hyperledger/avalon --branch pre-release-v0.5
- Or, to use the latest branch, run this command:
  git clone https://github.com/hyperledger/avalon
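Either way, you can verify the checkout before building. These are standard git commands, not Avalon-specific tooling:
  cd avalon
  # Confirm which branch is checked out and that the working tree is clean
  git status
  # Show the most recent commit as an additional sanity check
  git log -1 --oneline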
You have a choice of a Docker-based build or a standalone build. We recommend the Docker-based build since it is automated and requires fewer steps.
Follow the instructions below for a Docker-based build and execution.
- Install Docker Engine and Docker Compose, if not already installed. See PREREQUISITES for instructions.
- Build and run the Docker image from the top-level directory of your avalon source repository.

  Intel SGX Simulator mode (for hosts without Intel SGX):
  - To run in Singleton mode (the same worker handles both keys and workloads):
    sudo docker-compose up --build
    To start a worker pool (with one Key Management Enclave and one Work Order Processing Enclave):
    sudo docker-compose -f docker-compose.yaml -f docker-compose-pool.yaml up --build
  - For subsequent runs on the same workspace, if you changed a source or configuration file, run the above command again.
  - For subsequent runs on the same workspace, if you did not make any changes, startup and build time can be reduced by running:
    MAKECLEAN=0 sudo -E docker-compose up
    For worker pool, run:
    MAKECLEAN=0 sudo -E docker-compose -f docker-compose.yaml -f docker-compose-pool.yaml up
  Intel SGX Hardware mode (for hosts with Intel SGX):
  - Refer to the Intel SGX in Hardware-mode section in the PREREQUISITES document to install Intel SGX prerequisites and to configure IAS keys.
  - To run in Singleton mode (the same worker handles both keys and workloads):
    sudo docker-compose -f docker-compose.yaml -f docker-compose-sgx.yaml up --build
    For worker pool, run:
    sudo docker-compose -f docker-compose.yaml -f docker-compose-pool.yaml \
        -f docker-compose-pool-sgx.yaml up --build
  - For subsequent runs on the same workspace, if you changed a source or configuration file, run the above command again.
  - For subsequent runs on the same workspace, if you did not make any changes, startup and build time can be reduced by running:
    MAKECLEAN=0 sudo -E docker-compose -f docker-compose.yaml -f docker-compose-sgx.yaml up
    For worker pool, run:
    MAKECLEAN=0 sudo -E docker-compose -f docker-compose.yaml -f docker-compose-pool.yaml \
        -f docker-compose-pool-sgx.yaml up
- On a successful run, you should see the message BUILD SUCCESS followed by the repetitive message Enclave manager sleeping for 10 secs
- Open a Docker container shell using the following command:
  sudo docker exec -it avalon-shell bash
- To execute test cases, refer to the Testing section below
- To exit the Avalon program, press Ctrl-c
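While the containers are up, you can optionally confirm their status with standard Docker commands; nothing here is Avalon-specific:
  # List the containers started by docker-compose, with their names and status
  sudo docker ps --format "table {{.Names}}\t{{.Status}}"
  # Or, from the directory containing docker-compose.yaml:
  sudo docker-compose ps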
Follow the PREREQUISITES document to install and configure components on which Hyperledger Avalon depends.
This section describes how to get started with Avalon quickly, using the provided scripts to compile and install it. The steps below set up a Python virtual environment in which to run Avalon.
- Make sure environment variables are set as described in the PREREQUISITES document.
- Change to your Avalon source repository cloned above:
  cd avalon
- Set TCF_HOME to the top-level directory of your avalon source repository. You will need these environment variables set in every shell session where you interact with Avalon. Append this line (with pwd expanded) to your login shell script (~/.bashrc or similar):
  export TCF_HOME=`pwd`
  echo "export TCF_HOME=$TCF_HOME" >> ~/.bashrc
- If you are using Intel SGX hardware, check that SGX_MODE=HW before building the code. If you are not using Intel SGX hardware, check that SGX_MODE is not set or is set to SGX_MODE=SIM. By default SGX_MODE=SIM, indicating use of the Intel SGX simulator.
- If you are not using Intel SGX hardware, go to the next step. Check that TCF_ENCLAVE_CODE_SIGN_PEM is set. Refer to the PREREQUISITES document for more details on these variables. You will also need to obtain an Intel IAS subscription key and SPID from the portal https://api.portal.trustedservices.intel.com/. Replace the SPID and IAS subscription key values in file $TCF_HOME/config/singleton_enclave_config.toml with the actual hexadecimal values (the IAS key may be either your Primary key or Secondary key):
  spid = '<spid obtained from portal>'
  ias_api_key = '<ias subscription key obtained from portal>'
- If you are not behind a corporate proxy (the usual case), then skip this step and go to the next step.
  If you are behind a corporate proxy, then in file $TCF_HOME/config/tcs_config.toml uncomment and update the https_proxy line:
  #https_proxy = "http://your-proxy:your-port/"
  If you are behind a proxy and also using Intel SGX hardware (SGX_MODE=HW), add the following to your /etc/aesmd.conf file and update the aesm proxy line:
  proxy type = manual
  aesm proxy = http://your-proxy:your-port/
- Create a Python virtual environment:
  cd $TCF_HOME/tools/build
  python3 -m venv _dev
- Activate the new Python virtual environment for the current shell session. You will need to do this in each new shell session (in addition to exporting environment variables).
  source _dev/bin/activate
  If the virtual environment for the current shell session is activated, you will see this prompt:
  (_dev)
- Install PIP3 packages into your Python virtual environment:
  pip3 install --upgrade setuptools json-rpc py-solc-x web3 colorlog twisted wheel toml
- Build Avalon components:
  make clean
  make
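Because the environment variables and the virtual environment must be active in every new shell session, it can be convenient to keep the per-session setup in one place. The snippet below is only a recap of the steps above; the repository path is an assumption you should adjust:
  # Per-shell-session setup for a standalone Avalon build
  export TCF_HOME=~/avalon          # assumption: adjust to where you cloned the repository
  export SGX_MODE=SIM               # or HW if you are using Intel SGX hardware
  source $TCF_HOME/tools/build/_dev/bin/activate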
Once the code is successfully built, run the test suite to check that the
installation is working correctly.
Follow these steps to run the Demo.py test case:
NOTE: Skip step 1 in the case of Docker-based builds, since docker-compose.yaml will run the TCS startup script.
- For standalone builds only:
  - Open a new terminal, Terminal 1:
    cd $TCF_HOME/scripts
  - Run source $TCF_HOME/tools/build/_dev/bin/activate. You should see the (_dev) prompt
  - Run ./tcs_startup.sh -s
    The -s option starts kv_storage before other Avalon components.
  - Wait for the listener to start. You should see the message TCS Listener started on port 1947, followed by the repetitive message Enclave manager sleeping for 10 secs
  - To run the Demo test case, open a new terminal, Terminal 2
  - In Terminal 2, run source $TCF_HOME/tools/build/_dev/bin/activate. You should see the (_dev) prompt
  - In Terminal 2, cd to $TCF_HOME/tests and type this command to run the Demo.py test:
    cd $TCF_HOME/tests
    python3 Demo.py --input_dir ./json_requests/ \
        --connect_uri "http://localhost:1947" work_orders/output.json
- For Docker-based builds:
  - Follow the steps above for "Docker-based Build and Execution"
  - Terminal 1 is running docker-compose and Terminal 2 is running the "avalon-shell" Docker container shell from the previous build steps
  - In Terminal 2, cd to $TCF_HOME/tests and type this command to run the Demo.py test:
    cd $TCF_HOME/tests
    python3 Demo.py --input_dir ./json_requests/ \
        --connect_uri "http://avalon-listener:1947" work_orders/output.json
- The response to the Avalon listener and Intel® SGX Enclave Manager can be seen in Terminal 1
- The response to the test case request can be seen in Terminal 2
- If you wish to exit the Avalon program, press Ctrl-c
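If the Demo test cannot reach the listener, it can help to first check that the endpoint responds at all. The request below is only a rough sketch: it assumes the listener accepts EEA Trusted Compute JSON-RPC 2.0 requests over HTTP POST, and WorkerLookUp is a method name from that specification used purely as an illustration. Use http://avalon-listener:1947 instead when running inside the Docker shell:
  # Hypothetical connectivity check against the Avalon listener (standalone URI shown)
  curl -s -X POST http://localhost:1947 \
       -H "Content-Type: application/json" \
       -d '{"jsonrpc": "2.0", "method": "WorkerLookUp", "id": 1, "params": {"workerType": 1}}'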
A GUI is also available to run this demo. See examples/apps/heart_disease_eval
To run lint checks on the codebase, execute the following commands:
cd $TCF_HOME
docker-compose -f docker-compose-lint.yaml up
The steps above run lint on all modules by default.
If you want to run lint on selected modules, pass the module names via LINT_MODULES. For example:
cd $TCF_HOME
LINT_MODULES={sdk,common} docker-compose -f docker-compose-lint.yaml up
Module names can be found in the codebase.
- If you see the message ModuleNotFoundError: No module named '...', you did not run source _dev/bin/activate or you did not successfully build Avalon
- If you see the message CMake Error: The current CMakeCache.txt . . . is different than the directory . . . where CMakeCache.txt was created, then the CMakeCache.txt file is out-of-date. Remove the file and rebuild (see the sketch after this list)
- Verify your environment variables are set correctly and the paths exist
- If the Demo test code breaks due to some error, perform the following steps before re-running:
  sudo rm $TCF_HOME/config/Kv*
  $TCF_HOME/scripts/tcs_startup.sh -t -s
  - You can re-run the test now
- If you get build errors rerunning make, try sudo make clean first
- If you see the message No package 'openssl' found, you do not have OpenSSL libraries or the correct version of OpenSSL libraries. See PREREQUISITES for installation instructions
If you see the message
ImportError: ...: cannot open shared object file: No such file or directory
, then you need to setLD_LIBRARY_PATH
with:source /opt/intel/sgxsdk/environment
. For details, see PREREQUISITES
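For the CMake cache error above, a minimal cleanup sketch is shown below; it assumes the stale caches live somewhere under your source tree and that the build is driven from tools/build, as in the build steps above:
  # Remove stale CMake caches and generated CMakeFiles directories, then rebuild
  find $TCF_HOME -name CMakeCache.txt -delete
  find $TCF_HOME -name CMakeFiles -type d -exec rm -rf {} +
  cd $TCF_HOME/tools/build
  make clean
  make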