diff --git a/README.md b/README.md
index 2e3d978..d58e7bb 100644
--- a/README.md
+++ b/README.md
@@ -1,6 +1,6 @@
# Artifact for TAPDance - NDSS '24
-This repository contains the code to reproduce the experiments for the paper: `Architecting Trigger-Action Platforms for Security, Performance and Functionality`. The paper's results were obtained by running on a [StarFive VisonFive SBC](https://doc-en.rvspace.org/Doc_Center/visionfive.html) and this repository provides instructions to run the benchmarks on real hardware as well as in an emulated Qemu environment. For conducting the experiments using Qemu, 3 networked machines running Ubuntu 18.04 LTS is required. Building the Docker container needed to run Qemu needs a machine with atleast 30 GB of free space for the docker cache.
+This repository contains the code to reproduce the experiments for the paper [Architecting Trigger-Action Platforms for Security, Performance and Functionality](https://pages.cs.wisc.edu/~dsirone/papers/tapdance_ndss.pdf). The paper's results were obtained on a [StarFive VisionFive SBC](https://doc-en.rvspace.org/Doc_Center/visionfive.html), and this repository provides instructions to run the benchmarks both on real hardware and in an emulated Qemu environment. Conducting the experiments using Qemu requires 3 networked machines running Ubuntu 18.04 LTS. Building the Docker container needed to run Qemu requires a machine with at least 30 GB of free space for the Docker cache.
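+
+Before starting, a machine can be sanity-checked with the commands below (a minimal sketch; `df -h /` assumes the Docker cache lives on the root filesystem, so adjust the path if Docker stores its data elsewhere):
+```
+# Confirm the machine runs Ubuntu 18.04 LTS
+lsb_release -a
+# Confirm at least 30 GB of free space for the Docker cache
+df -h /
+```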
The performance claims that are validated by the artifact include:
1. End to End Applet Execution Latency (Section 7.B.3)
@@ -12,18 +12,36 @@ The functionality claims that are validated by the artifact include:
## Contents
-1. [Pre-Build](#pre-build)
-2. [Building Binaries with Docker](#build)
-3. [Setting up Trigger Shim](#trigger-shim)
-4. [Setting up Action Shim](#action-shim)
-5. [Running Spigot Benchmarks on Real Hardware](#spigot)
-6. [Running the Compiler Functionality Test using Docker](#compiler-docker)
-
+1. [Artifact Overview](#overview)
+2. [Pre-Build](#pre-build)
+3. [Building Binaries with Docker](#build)
+4. [Setting up Trigger Shim](#trigger-shim)
+5. [Setting up Action Shim](#action-shim)
+6. [Running Spigot Benchmarks on Real Hardware](#spigot)
+7. [Running the Compiler Functionality Test using Docker](#compiler-docker)
+
+## Artifact Overview
+The directory and sub-repository structure is as follows:
+- `StaticScript`: Repository containing the TypeScript to LLVM-IR compiler, based on a fork of StaticScript
+- `action-shim`: Python server emulating an action service
+- `baseline-tap`: Interpreted Node.js TAP server which is used as the performance baseline in experiments
+- `keystone`: Repository containing a fork of the Keystone framework, ported to run on the StarFive VisionFive, with support for secure time and nonce management
+- `rule-cryptor`: Python script to encrypt an applet binary, used as part of the applet build process
+- `scripts`: Scripts for running the performance benchmarks and summarizing results
+- `static-script-runtime`: Applet runtime providing TypeScript library functionality to applets
+- `tap-apps`: Repository containing all the TypeScript applets, along with the code for the applet enclave, the Keystore, the time enclave, and for running an applet in a regular process
+- `tap-client`: Program to register a new user and applet with the Keystore
+- `trigger-shim`: Python server emulating a trigger event generator
+- `Dockerfile`: Dockerfile for building all the benchmark applets, running the TypeScript compiler and running all the benchmarks in Qemu
## Pre-Build (For Running with Qemu and on Real H/W)
+
+**This step requires 20-30 mins**
+
Provision 2 machines running Ubuntu 18.04 LTS for the trigger and action shims. Both machines should have a publicly addressable hostname. For artifact evaluation purposes, the authors will provide these servers.
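+
+A quick way to confirm that each shim machine is publicly addressable (a sketch; `<trigger-shim-host>` and `<action-shim-host>` are placeholders for the provisioned hostnames):
+```
+# Both lookups should resolve to a public IP address
+getent hosts <trigger-shim-host>
+getent hosts <action-shim-host>
+```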
## Building (For Building Artifact Binaries and Running Compilation Functionality Evaluation)
+
Clone the repository on the build machine using:
```
git clone https://github.com/multifacet/tap_artifact
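+# Note: this artifact uses git submodules (e.g. baseline-tap, tap-apps); if the
+# build steps below do not fetch them automatically, initialize them manually:
+git submodule update --init --recursive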
@@ -45,6 +63,9 @@ The `/build/bin` subdirectory contains all the binaries that should be run on th
## Setting up Trigger Shim
+
+**This step requires 5-10 mins**
+
The trigger shim serves as the source of both event data and event notifications for the applet enclave. The trigger data source is simulated using a Python server, whereas the event notifications are generated by `wrk`, the HTTP performance benchmarking tool.
Clone and build `wrk` on the trigger service machine as follows:
@@ -66,6 +87,9 @@ sudo python3.7 server.py 0.0.0.0 80 --encrypted
```
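+
+Once the shim is running, a single event notification can be fired manually as a sanity check before involving `wrk` (a sketch assuming the `/event_notify/` route that `wrk` targets in the benchmarks; `<trigger-shim-host>` and `<applet_name>` are placeholders):
+```
+# Any HTTP response confirms the shim is reachable
+curl http://<trigger-shim-host>:80/event_notify/<applet_name>
+```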
## Setting up Action Shim
+
+**This step requires 5-10 mins**
+
Clone and build the action service repository on the action service machine:
```
@@ -79,6 +103,9 @@ sudo python3.7 server.py 0.0.0.0 80
```
## Running Spigot Benchmarks on Real Hardware
+
+**This step requires 5-10 mins**
+
This subsection assumes access to the StarFive VisionFive SBC (referred to as the board) preloaded with all the benchmark applets at `/home/riscv/artifact_eval`. All the enclave benchmark packages (the 10 chosen for evaluation) are located at `/home/riscv/artifact_eval/benchmarks_prebuilt`. All the files of the form `enc_rule_<applet_name>.ke` are Spigot applet enclave packages corresponding to the TypeScript applet `<applet_name>`. All the files of the form `rule_process_<applet_name>.ke` are Spigot benchmark packages that do not use enclaves.
The corresponding TypeScript benchmarks are located in the `tap-apps/benchmark_applets` submodule of this repository.
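+
+The available packages can be listed directly on the board using the paths above:
+```
+ls /home/riscv/artifact_eval/benchmarks_prebuilt/enc_rule_*.ke
+ls /home/riscv/artifact_eval/benchmarks_prebuilt/rule_process_*.ke
+```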
@@ -108,6 +135,8 @@ The Keystore will listen on port 7777 on the board. All the benchmark rules are
### Running a Spigot Benchmark (with Enclaves)
+**This step requires 40 mins - 1.5 hours**
+
Make sure that the trigger and action shims are set up and running. The trigger shim should be running with the `--encrypted` flag. Select the package for a benchmark and run it using
```
@@ -120,10 +149,10 @@ To start sending event notifications, open a terminal to the trigger shim machin
```
$ cd wrk
-$ ./wrk -c -t -d10 -s test.lua http://:80/event_notify/ > spigot_._.log
+$ ./run_wrk.sh spigot
```
-where `{(, )} can be {(1, 1), (2, 2), (3, 3)}` (See Figure 5). After every run of `wrk` restart the enclave by Ctrl^C (a few times) followed by:
+This will generate logfiles of the form `spigot_<connections>_<threads>.log` where `(<connections>, <threads>)` can be `{(1, 1), (2, 2), (3, 3)}` (See Figure 5). After every run of `wrk`, restart the enclave by pressing Ctrl+C (a few times) followed by:
```
$ sudo ./enc_rule_<applet_name>.ke
```
@@ -134,6 +163,8 @@ After every run of `wrk`, statistics of the run are printed which are redirected
### Running a Spigot Benchmark (Without Enclaves)
+**This step requires 40 mins - 1.5 hours**
+
Make sure that the trigger and action shims are set up and running. The trigger shim should be running with the `--encrypted` flag. Select the package for a benchmark and run it using
```
@@ -146,40 +177,46 @@ To start sending event notifications, open a terminal to the trigger shim machin
```
$ cd wrk
-$ ./wrk -c -t -d10 -s test.lua http://:80/event_notify/ > spigot_base_._.log
+$ ./run_wrk.sh spigot_base
```
-where `{(, )} can be {(1, 1), (2, 2), (3, 3)}` (See Figure 5). After every run of `wrk` restart the enclave by Ctrl^C (a few times) followed by `$ sudo ./rule_process_.ke `.
+This will generate logfiles of the form `spigot_base_<connections>_<threads>.log` where `(<connections>, <threads>)` can be `{(1, 1), (2, 2), (3, 3)}` (See Figure 5). After every run of `wrk`, restart the benchmark process by pressing Ctrl+C (a few times) followed by `$ sudo ./rule_process_<applet_name>.ke`.
Verify that the run worked by checking the logs of the action service.
After every run of `wrk`, statistics of the run are printed, which are redirected to `spigot_base_<connections>_<threads>.log`. The `Requests/sec:` row is taken as the throughput, while the `Latency:` row under thread stats is taken as the average request latency per run.
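+
+The two rows can be extracted from every log mechanically (a sketch that greps `wrk`'s standard output; the same pattern works for the `spigot_*` and `baseline_tap_*` logs):
+```
+for log in spigot_base_*_*.log; do
+  echo "== $log =="
+  # Requests/sec: -> throughput; Latency (under Thread Stats) -> average latency
+  grep -E 'Requests/sec:|^ *Latency' "$log"
+done
+```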
### Running the Interpreted Baseline
+
+**This step requires 40 mins - 1.5 hours**
+
Make sure that the trigger and action shims are set up and running. The trigger shim should *not* be running with the `--encrypted` flag; start the trigger shim using:
```
sudo python3.7 server.py 0.0.0.0 80
```
-`/home/riscv/baseline-tap/server.js` provides the baseline implementation of a NodeJS TAP. All the TypeScript applets are stored at `/home/riscv/baseline-tap/applets`. Line 8 of `server.js` points to the applet that is loaded, edit it to the applet that is going to be run. Lines 27 and 93 correspond to the action and trigger shim URIs respectively, edit them to point to the action and trigger shims respectively.
+`/home/riscv/artifact_eval/baseline-tap/server.js` provides the baseline implementation of a NodeJS TAP. All the TypeScript applets are stored at `/home/riscv/artifact_eval/baseline-tap/applets`. Line 8 of `server.js` points to the applet that is loaded; edit it to point to the applet that will be run. Lines 27 and 93 correspond to the action and trigger shim URIs respectively; edit them to point to your action and trigger shims.
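+
+Before launching the server, the three edited lines can be double-checked in place (`sed -n` with line addresses prints exactly those lines):
+```
+sed -n '8p;27p;93p' /home/riscv/artifact_eval/baseline-tap/server.js
+```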
-Run the NodeJS TAP by navigating to `/home/riscv/baseline-tap` and running:
+Run the NodeJS TAP by navigating to `/home/riscv/artifact_eval/baseline-tap` and running:
```
-sudo /home/riscv/nodejs/node-v14.8.0-linux-riscv64/bin/node server.js > tap_baseline_.log
+./run_server.sh <applet_name>
```
+where `<applet_name>` is the prefix of the applet file name before `.json.ts`.
+
+This will record all log messages to `tap_baseline_<applet_name>.log`.
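+
+For example, for a hypothetical applet file named `email_applet.json.ts` (the name is illustrative), the invocation and resulting log file would be:
+```
+./run_server.sh email_applet   # logs to tap_baseline_email_applet.log
+```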
To start sending event notifications, open a terminal to the trigger shim machine and run:
```
$ cd wrk
-$ ./wrk -c -t -d10 -s test.lua http://:80/event_notify/ > tap_baseline_._.log
+$ ./run_wrk.sh baseline_tap
```
-where `{(, )} can be {(1, 1), (2, 2), (3, 3)}` (See Figure 5).
+This will generate logfiles of the form `baseline_tap_<connections>_<threads>.log` where `(<connections>, <threads>)` can be `{(1, 1), (2, 2), (3, 3)}` (See Figure 5).
-After every run of `wrk`, statistics of the run are printed which are redirected to ```spigot_base_._.log```. The `Requests/sec:` row is taken as the throughput while the `Latency:` under thread stats is taken as the average request latency per run.
+After every run of `wrk`, statistics of the run are printed, which are redirected to `baseline_tap_<connections>_<threads>.log`. The `Requests/sec:` row is taken as the throughput, while the `Latency:` row under thread stats is taken as the average request latency per run.
### Memory Usage of Enclaves
@@ -197,6 +234,8 @@ When `server.js` is launched, the `rss` field that is printed out shows the resi
## Running the Compiler Functionality Test using Docker
+**This step requires 30 mins - 40 mins**
+
This step assumes that you have built the Qemu Docker container in the previous step. Run the Docker container using:
```
@@ -214,6 +253,8 @@ This will try to compile all the applets into LLVM IR using [StaticScript](https
## Summarizing Performance Results
+**This step requires 20 mins - 30 mins**
+
This section describes how to reproduce the various performance claims made in the paper. Please make sure that you have Python 3.8+ installed.
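+
+A quick version check before running the summarization scripts:
+```
+python3 --version   # should report Python 3.8 or newer
+```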
### Summarizing the End to End Latency Results
diff --git a/baseline-tap b/baseline-tap
index 8686096..5a1e6ec 160000
--- a/baseline-tap
+++ b/baseline-tap
@@ -1 +1 @@
-Subproject commit 868609600241a539f78d22f3b0c55302794ad714
+Subproject commit 5a1e6ece73a075904a7d734931624cd0d60d126b