Testing


This page is under construction

TODO: import document D11.2

Unit Testing

The article Build system conventions covers which tools to use so that the project can be tested both manually and in a continuous integration system.

Test Coverage

Integration Testing

This section explains how to get started with integration testing of existing Contrail components. First, we explain what components the testing environment consists of; a description of the test directory skeleton and details on the test scripts follow. Reading this should give the developer enough information to start with integration testing.

Intro

The purpose of integration testing is to verify that individual software components produced by the Contrail project integrate flawlessly and correctly into subsystems, and that such subsystems integrate correctly into larger ones.

Testing environment

Contrail is a distributed system, and thus the integration testing environment consists of a number of physical hosts running Contrail software.

In the first phase of each integration test, a proper environment must be set up. This set-up consists of:

  • preparing images for individual physical hosts,
  • booting physical hosts.

After the physical hosts are booted, ssh connections are used to issue integration testing commands on them.
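Conceptually, issuing such a command over ssh from Python amounts to something like the sketch below. This is a plain subprocess illustration with a made-up host name, not the project's actual helper; the tests themselves use the NodeMan class described under Test execution.

import subprocess

def ssh_run(user, host, command):
    # Run `command` on `host` over ssh as `user` and return the exit status.
    # Illustration only; the test framework provides NodeMan.sshrun for this.
    return subprocess.call(["ssh", "%s@%s" % (user, host), command])

# Hypothetical usage:
# ssh_run("root", "node03.example.org", "uname -a")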

Physical infrastructure for integration testing is provided by an 11-node cluster at XLAB: a head node takes care of executing the testing procedures (running under the Jenkins continuous integration server), while 10 diskless worker nodes (disks are NFS-mounted, if necessary) provide the hosts on which the tested software runs.

Host image preparation

The Perceus provisioning tool is used to create host images (called VNFS capsules in Perceus terminology) and to boot the physical hosts.

Preparing a host image consists of:

  • picking a base image,
  • installing additional software on top of the base image,
  • configuring the software.

Once a host image is prepared, Perceus will boot the node with it.

Base images

We provide the following base images:

  • a minimal Debian system with only ssh enabled (debian-minimal),
  • a Debian system with an OpenNebula head node installation (one-2.2-head),
  • a Debian system with an OpenNebula worker node installation (one-2.2-node).

If the customization of an existing base image is complex, or the same image is used by many integration tests, it might make sense to include such an image among the base ones in order to relieve the testing infrastructure of the related load. Consult the Perceus documentation on how to prepare host images.

Note that you may not install Contrail software packages in the base images: those packages are the subject of the tests, so the version under test must be installed every time.

Additional software

The additional software installed can be either the distribution's native packages or Contrail packages that have successfully passed unit testing on Bamboo and have been packed in the form of binary tarballs. The former are installed using the distribution's native mechanisms (only Debian-based distributions are supported so far; rpm support will be provided in the future).

Configuration

Configuration can be provided in two forms:

  • tarballs that are unpacked into the root of the host image filesystem: these should contain files with their full paths,
  • executable scripts that are run chrooted into the host image filesystem and may do just about anything.

Node description

Every node is described with the following file hierarchy:

node-name
 +-- install.xml
 +-- config
 |     +-- some-config.tar.gz
 |     +-- ...
 +-- scripts
 |     +-- some-script.sh
 |     +-- another-script.py
 |     +-- ...
 +-- scripts-post-boot
       +-- some-script.sh
       +-- another-script.sh
       +-- ...

install.xml

An example install.xml file looks like this:

<?xml version="1.0"?>
<node type="vnfs" base="one-2.2-head-jc.aufs"
 repository_contrail="http://repository.ow2.org/nexus/service/local/repositories/snapshots/content/org/ow2/contrail"
 repository_targz="http://contrail.xlab.si/packages-targz"
 xmlns="http://contrail-project.eu/schema/2011/08/install">
<!--
Do not install services/daemons here - they will run in the chroot environment.
sun-java6-jre - partner repo
-->

<upstream>openjdk-6-jre-headless</upstream>
<contrail path="common/headNodeRestProxy" file="headNodeRestProxy"></contrail>
<contrail path="infrastructure_monitoring/one-monitor" file="one-monitor"></contrail>
<contrail path="infrastructure_monitoring/one-sensor" file="one-sensor"></contrail>
</node>

The above will use the base image one-2.2-head-jc.aufs, install the distribution package openjdk-6-jre-headless on it, and install the Contrail binary packages headNodeRestProxy, one-monitor and one-sensor from the configured repositories.

Config

Every tarball in the config directory will be unpacked into the filesystem of the host image being prepared.

Thus, if the tarball some-config.tar.gz contains files etc/hosts and etc/hostname, these will be unpacked to
/etc/hosts and /etc/hostname in the host image filesystem, respectively.

Config tarballs are unpacked after the base image has been amended with the distribution and Contrail packages specified in install.xml.
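For illustration, such a tarball could be prepared on the developer's machine along these lines. This is a minimal sketch using Python's tarfile module; it assumes an etc/ directory containing the two files exists next to the script.

import tarfile

# Store the files with their full relative paths, so that unpacking the
# tarball into / of the host image yields /etc/hosts and /etc/hostname.
tar = tarfile.open("some-config.tar.gz", "w:gz")
tar.add("etc/hosts")
tar.add("etc/hostname")
tar.close()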

Scripts

Every file in the scripts directory with the executable bit set will be executed chrooted in the host image file system.

Thus, if you want to add a user contrail to the host image and change the ownership of a directory to that user, the script could read something along these lines:

#!/bin/sh

# Create a system user "contrail" and give it ownership of its data directory.
adduser --system --disabled-password --disabled-login contrail
chown contrail /var/lib/contrail

Scripts are executed after config tarballs are unpacked.

Note: as the scripts are executed chrooted in the host image file system, they may only invoke software that is already installed in the host image.

Scripts-post-boot

Every file in the scripts-post-boot directory will be copied into the host image file system at the same path it has under the scripts-post-boot directory.

For example, if script1.sh needs to end up at /root/contegrator-test/script1.sh on the head node, the developer has to include the following directory structure with the test:

├── head
│   ├── config
│   ├── install.xml
│   ├── scripts
│   │   └── fix-scripts.sh
│   └── scripts-post-boot
│       └── root
│           └── contegrator-test
│               └── script1.sh

After the head node head is provisioned during the integration test, script1.sh is copied to the desired directory.

Test execution

Every integration test consists of at least one test module, and each module contains at least one test case.

A test module is a Python source file with the ".py" extension, and each test case in such a module is a top-level function whose name starts with test_.

Each such function is passed two parameters:

  • ct_node_list: a map from node (directory) names, as specified in the node description, to the host names that have been assigned to those nodes,
  • ct_nodeman: an instance of the NodeMan class, which makes it simple to run commands on the nodes over ssh connections.
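For illustration only, a runner could discover and invoke such test cases roughly as follows. This is a sketch of the naming convention described above, not the actual test harness; the module object and how it gets loaded are assumptions.

def discover_test_cases(module):
    # Collect the top-level callables of `module` whose names start with "test_".
    return [obj for name, obj in vars(module).items()
            if name.startswith("test_") and callable(obj)]

# Hypothetical usage, once a test module has been imported as `test_module`:
# for test_case in discover_test_cases(test_module):
#     test_case(ct_node_list, ct_nodeman)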

Running a command over ssh on a node named "head" in your test case is as simple as invoking:

ct_nodeman.sshrun("root", ct_node_list["head"], "echo 'Here I am!'")

A simple test case that restarts the OpenNebula service on the head node, creates two ON hosts, publishes an ON vnet, and executes a testing script looks like this:

def test_stress(ct_node_list, ct_nodeman):
    head = ct_node_list["head"]
    worker = ct_node_list["worker"]
    try:
        # Give the freshly booted node some time, then (re)start OpenNebula.
        ct_nodeman.sshrun("root", head, "sleep 30")
        ct_nodeman.sshrun("root", head, "service opennebula restart")
        ct_nodeman.sshrun("root", head, "sleep 30")
        # Register the head and worker nodes as OpenNebula hosts.
        ct_nodeman.sshrun("root", head, "onehost create " + head + " im_kvm vmm_kvm tm_nfs")
        ct_nodeman.sshrun("root", head, "onehost create " + worker + " im_kvm vmm_kvm tm_nfs")
        # Publish the virtual network and run the actual test script.
        ct_nodeman.sshrun("root", head, "onevnet publish 205")
        ct_nodeman.sshrun("root", head, "/usr/bin/one-test-run.sh")
    finally:
        pass

Putting it all together

An integration test is specified by the following file tree:

integration-test-name
 +-- nodes
 |     +-- node-name-1
 |     |     +-- install.xml
 |     |     +-- config
 |     |     |     +-- ...
 |     |     +-- scripts
 |     |     |     +-- ...
 |     |     +-- scripts-post-boot
 |     |           +-- ...
 |     +-- ...
 +-- test
       +-- test-module-1.py
       +-- ...

Resources

Test examples:

The NodeMan class (API provided by ct_nodeman parameter to your test case implementations):

Location for integration-test scripts:

Performance Testing

Manual Testing

 