
Setup execution platform (vim-emu)


This guide describes how to set up a vim-emu-based execution platform to be used as the execution target by tng-sdk-benchmark. The entire setup process is completely automated using Ansible and is assumed to be executed against a fresh Ubuntu 16.04 installation.

Overview

A typical setup consists of two machines (bare metal or VM) and looks like this:

+------------------------+       +----------------------------+
| +--------------------+ |       | +------------------------+ |
| |                    | |       | |                        | |
| |     tng-bench      | |       | |    tng-bench-emusrv    | |
| |(experiment control)|-+-------+-> (vim-emu w. ctrl. API) | |
| |                    | |       | |                        | |
| +--------------------+ |       | +------------------------+ |
|                        |       |                            |
|                        |       |     Machine 2: Target      |
| Machine 1 (tng-bench)  |       |(vim-emu execution platform)|
+------------------------+       +----------------------------+
  • Machine 1: This machine runs tng-sdk-benchmark and manages and controls the benchmarking experiments executed on Machine 2. To do so, it needs a network connection to Machine 2 (SSH and TCP ports 4998-5002); a reachability sketch follows this list.

  • Machine 2: This machine acts as the executor for the profiling experiments. In the shown example, vim-emu is used as the execution environment. In this case, a vim-emu instance is created by the tool tng-bench-emusrv. This tool offers a REST API to control experiment deployments on top of vim-emu.
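
Once the installation described below has finished, a quick way to confirm that Machine 1 can reach Machine 2 on these ports is a simple TCP connect test. This is only a sketch; it assumes the OpenBSD netcat that ships with Ubuntu and uses the port numbers from the default configuration shown later.

# on Machine 1 do (replace <MACHINE_2> with the host name or IP of Machine 2):
for p in 22 4998 4999 5000; do nc -vz <MACHINE_2> "$p"; done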

Note: This installation guide describes the automated remote installation of Machine 2 using an Ansible playbook executed on Machine 1. For installation instructions for Machine 1, please look here (TODO).

Requirements and Assumptions

  • Machine 1:
    • Ansible installed
    • Git installed
    • tng-sdk-benchmark installed (TODO add link to installation instructions)
  • Machine 2 (installation target):
    • Ubuntu 16.04 LTS (fresh installation!)
    • SSH access

NOTE 1: Do not install on a machine that is already in use for other purposes! The installation performs some system reconfigurations (e.g., of the firewall) that might break the existing setup of Machine 2.

NOTE 2: Only install on machines in a private network/lab environment. The vim-emu test execution machine will open control ports (e.g., the Docker API) to the public without any authentication mechanisms, and it will run the emulator as the root user. All of this can cause security risks! Only use it if you know what you are doing.
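
If Machine 2 nevertheless has to sit in a network you do not fully control, you can at least restrict the control ports to Machine 1's address once the installation has finished. The following is only a sketch using ufw; it assumes ufw is the active firewall on Machine 2 and must be checked against the rules the Ansible playbook itself configures.

# on Machine 2 do (replace <MACHINE_1_IP> with the address of Machine 1):
sudo ufw allow from <MACHINE_1_IP> to any port 4998:5000 proto tcp
sudo ufw deny 4998:5000/tcp
sudo ufw status verbose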

Installation

1. Preparations

Make sure Machine 1 can connect to Machine 2 via SSH and that Git and Ansible are installed on Machine 1.
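
The following commands are a minimal sketch for these checks; <USER> and <MACHINE_2> are placeholders that have to match your environment.

# on Machine 1 do:
ssh <USER>@<MACHINE_2> 'echo SSH works'
git --version
ansible --version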

2. Clone tng-sdk-benchmark

# on Machine 1 do:
git clone https://github.com/sonata-nfv/tng-sdk-benchmark.git

3. Configure Ansible

Check the Ansible documentation to learn how to properly configure target hosts in Ansible.

# on Machine 1 do:
cd tng-sdk-benchmark/node-installers
# add your target (Machine 2) to the hosts.yml
vim hosts.yml
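
As a starting point, an inventory entry can look like the following sketch. The group name vim-emu-nodes matches the one used in the ping test below; the host name, address, and SSH user are placeholders, and the actual layout of the hosts.yml shipped with the repository may differ slightly.

# hosts.yml (sketch)
all:
  children:
    vim-emu-nodes:
      hosts:
        testvm:
          ansible_host: <IP_OF_MACHINE_2>
          ansible_user: <SSH_USER>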

Test your ansible configuration:

ansible vim-emu-nodes -i hosts.yml -m ping

This should give something like:

testvm | SUCCESS => {
    "changed": false,
    "ping": "pong"
}

4. Run Ansible installer

# on Machine 1 do:
ansible-playbook --ask-become-pass -i hosts.yml node-vim-emu.yml

The installation might take 30 minutes.
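
If the run aborts, it can be re-run; Ansible tasks are typically written to be idempotent, but whether this holds for every task in node-vim-emu.yml should be verified. To target a single host and get more verbose output, the standard Ansible options --limit and -vv can be used, e.g.:

# on Machine 1 do:
ansible-playbook --ask-become-pass -i hosts.yml node-vim-emu.yml --limit <MACHINE_2_HOSTNAME> -vv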

5. Verify your installation

To check the installation, use SSH to connect to your target machine on which the platform was installed (Machine 2).

# on Machine 2 do:
sudo screen -r  # (the tng-bench-emusrv server is running in a screen session)

You should see something like:

2018-11-28 15:38:39 testvm tngsdk.benchmark.pdriver.vimemu.server[10391] INFO Starting tng-bench-emusrv server ... CTRL+C to exit.
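
If you cannot attach to the screen session, an alternative check is to verify that the control ports are listening on Machine 2. This is a sketch; the port numbers are the defaults used in the configuration section below.

# on Machine 2 do:
sudo ss -tlnp | grep -E '4998|4999|5000'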

Done!

Configuration

The tng-sdk-benchmark tool on Machine 1 now needs to know where it can find the freshly installed execution platform. To achieve this, go to the tng-sdk-benchmark folder and modify [config.yml](https://github.com/sonata-nfv/tng-sdk-benchmark/blob/master/config.yml) as follows:

#
# tng-sdk-benchmark configuration file
#
---
# list of target platform for bench. execution
targets:
  - name: default
    description: "vim-emu on machine 2"
    pdriver: vimemu  # type of target (vimemu, osm)
    pdriver_config:
      host: <HOST_OR_IP_OF_MACHINE_2>  # <-- change here
      emusrv_port: 4999
      llcm_port: 5000
      docker_port: 4998
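
YAML is sensitive to indentation, so it can be worth parsing the edited file before starting an experiment. A minimal sketch using PyYAML, which is typically available on Machine 1 because Ansible depends on it:

# on Machine 1 (in tng-sdk-benchmark/) do:
python3 -c "import yaml; print(yaml.safe_load(open('config.yml')))"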

Usage and Test

Finally, you can run a first benchmarking experiment, which is shipped as an example together with tng-sdk-benchmark. The example service looks like this:

+-----------+    +-----------------+    +-----------+
| mp.input  |--->|  Suricata IDS   |--->| mp.output |
+-----------+    +-----------------+    +-----------+

To run it, execute the following command on Machine 1:

# on Machine 1 (in tng-sdk-benchmark/) do:
tng-bench -p examples/peds/ped_suricata_tp_small.yml

The terminal output should look like this:

2018-11-28 19:59:44 mapupb.local tngbench.tngsdk.benchmark[7793] INFO 5GTANGO benchmarking/profiling tool initialized
2018-11-28 19:59:44 mapupb.local tngbench.tngsdk.benchmark[7793] INFO Loaded PED file '/Users/manuel/tango/tng-sdk-benchmark/examples/peds/ped_suricata_tp_small.yml'.
2018-11-28 19:59:44 mapupb.local tngbench.tngsdk.benchmark.experiment[7793] INFO Populated experiment specification: 'service_throughput' with 1 configurations to be executed.
2018-11-28 19:59:44 mapupb.local tngbench.tngsdk.benchmark.generator.tango[7793] INFO New 5GTANGO service configuration generator
2018-11-28 19:59:44 mapupb.local tngbench.tngsdk.benchmark.generator.tango[7793] INFO Generating 1 service experiments using /Users/manuel/tango/tng-sdk-benchmark/examples/peds/../services/ns-1vnf-ids-suricata
2018-11-28 19:59:45 mapupb.local tngbench.tngsdk.benchmark.generator.tango[7793] INFO Generating 1 projects for Experiment(service_throughput)
2018-11-28 19:59:45 mapupb.local tngbench.tngsdk.benchmark.generator.tango[7793] INFO Generated project (1/1): service_throughput_00000.tgo
--------------------------------------------------------------------------------
5GTANGO tng-bench: Experiment generation report
--------------------------------------------------------------------------------
Generated packages for 1 experiments with 1 configurations.
Total time: 1.6818
--------------------------------------------------------------------------------
 19:59:45 executor[7793] INFO Initialized executor with 1 experiments and [1] configs
 19:59:45 pdriver.vimemu[7793] INFO Initialized VimEmuDriver with {'host': '172.0.0.120', 'emusrv_port': 4999, 'llcm_port': 5000, 'docker_port': 4998}
 19:59:45 executor[7793] INFO Preparing target platforms
 19:59:45 executor[7793] INFO Executing experiments
 19:59:45 executor[7793] INFO Setting up 'ExperimentConfiguration(service_throughput_00000)'
 19:59:45 pdriver.vimemu.emuc[7793] INFO Waiting for emulator LLCM ... 0/60
 19:59:47 pdriver.vimemu.emuc[7793] INFO Waiting for emulator LLCM ... 1/60
 19:59:49 pdriver.vimemu.emuc[7793] INFO Waiting for emulator LLCM ... 2/60
 19:59:49 pdriver.vimemu.emuc[7793] INFO Emulator LLCM ready
 19:59:49 pdriver.vimemu.emuc[7793] INFO On-boarding to LLCM: /var/folders/yx/lvxqrl7j7954pkz6mmsh72br0000gn/T/tmp8uyzaiss/gen_pkgs/service_throughput_00000.tgo
 19:59:50 pdriver.vimemu.emuc[7793] INFO Instantiating NS: f709c4df-1e5d-4a2b-96f2-b3cc3426b2b6
 19:59:53 pdriver.vimemu[7793] INFO Instantiated service: 71ae0af6-8167-4ea3-8d80-28bdfb2d8000
 19:59:53 executor[7793] INFO Executing 'ExperimentConfiguration(service_throughput_00000)'
 20:00:01 pdriver.vimemu[7793] INFO Collecting experiment results ...
 20:00:01 pdriver.vimemu[7793] INFO Finalized 'ExperimentConfiguration(service_throughput_00000)'
Wait for user input...

 20:00:03 executor[7793] INFO Teardown 'ExperimentConfiguration(service_throughput_00000)'
 20:00:06 executor[7793] INFO Teardown target platforms
 20:00:06 helper[7793] INFO Downloading: https://raw.githubusercontent.com/mpeuster/vnf-bench-model/dev/experiments/vnf-br/templates/vnf-bd.yaml
 20:00:06 7793] INFO Prepared 1 result processor(s)
 20:00:06 7793] INFO Running result processor '<tngsdk.benchmark.resultprocessor.ietfbmwg.IetfBmwgResultProcessor object at 0x104a4bf60>'
 20:00:06 resultprocessor.ietfbmwg[7793] INFO IETF BMWG BD dir not specified (--ibbd). Skipping.

Finally, a folder with results should be produced in the tng-sdk-benchmark folder:

results/
└── service_throughput_00000
    ├── cmon.json
    ├── mn.mp.input
    │   ├── clogs.log
    │   └── tngbench_share
    │       ├── cmd_start.log
    │       └── cmd_stop.log
    ├── mn.mp.output
    │   ├── clogs.log
    │   └── tngbench_share
    │       ├── cmd_start.log
    │       └── cmd_stop.log
    └── mn.vnf0
        ├── clogs.log
        └── tngbench_share
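
The exact content of cmon.json depends on the monitored metrics; assuming it is plain JSON, as the file extension suggests, you can get a first look by pretty-printing it:

# on Machine 1 (in tng-sdk-benchmark/) do:
python3 -m json.tool results/service_throughput_00000/cmon.json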

Checking results/service_throughput_00000/mn.mp.input/tngbench_share/cmd_start.log shows you the performance measured at the input probe:

------------------------------------------------------------
Client connecting to 20.0.0.254, TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[  3] local 20.0.0.1 port 56872 connected with 20.0.0.254 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0- 3.0 sec  1.28 GBytes  3.65 Gbits/sec
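
When you later run experiments with more than one configuration, the same log file exists once per configuration directory. A quick way to compare the measured throughput is a grep over all of them; this sketch assumes the iperf-style output shown above:

# on Machine 1 (in tng-sdk-benchmark/) do:
grep -H "bits/sec" results/*/mn.mp.input/tngbench_share/cmd_start.log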

Congratulations, you have completed your first fully automated profiling experiment using tng-sdk-benchmark.

FAQ

Nothing yet.
