agent instance.
Issue: Some of our complex tests require rebooting compute nodes. In
that scenario, if agent instances are on those computes, then those
instances will also be rebooted and any running shaker test(s) will be
stopped.
At this time this commit specifically targets shaker's shell class
execution, as it writes the execution script to the /tmp directory, and
if a system/instance is rebooted, whatever the /tmp directory contains
is wiped out. So, if the user has the choice to copy that execution
script to some specific directory that is not impacted by a reboot
(i.e. /mnt, /opt, etc.), then custom test scripts can handle the
condition above.
Change-Id: I41f824561e36329d07afc9e798eab44b72aeb01a
The current cfg option cleanup_on_error is a bit of a misnomer, since
cleanup is not always dependent on there being an error.
This commit renames the cleanup cfg parameter to better reflect when
cleanup actually happens: cleanup_on_exit.
Change-Id: I984e4b660705be8afa50a1e8605102832fd802dd
This commit enhances Shaker to allow:
- specifying a custom stack name, instead of always using a uniquely
generated one. This helps with tracking test stacks in CICD and/or
allowing users to be more descriptive when looking at running heat
stacks
- reusing an existing test stack, rather than re-stacking. This
is to allow the following scenario: execute scenario A (setting
cleanup_on_error = false) to establish baseline data, perform some
update in OpenStack (config change, cert rotation, etc.), then reuse
the existing VMs from the baseline test to re-execute scenario A to
help ensure the update had no impact.
Change-Id: Ifbdd332a44ca54f06cc81d9018ea5cea26c32416
There are 3 recently added config options that have underscores instead
of dashes:
* custom_user_opts (added in I96b6e578eb59813e5e0c8a2fe7a14c5ecc369be7)
* scenario_availability_zone and scenario_compute_nodes
(both added in I5459150a0eac9bed6a6a62a126bd9ec0648941fe)
The rest of the options are declared with words separated by dashes. This
patch makes the naming style consistent across the project. The change
affects the CLI only.
Change-Id: I9d799827a36682fbe7ae73b22bcbaaf2f05afd6e
Issue: Some of our more complex tests require injecting "custom" data
into the heat stacks in order to run correctly. For example, in order to
run certain Contrail-based tests we need to set "contrail-asn:(some
asn number)". The ASN varies between cloud deployments, so we
dynamically set this field in a Heat environment file. However, this is
a very specific field that other Shaker users might not need or
understand, so this commit allows for a more generic approach.
This commit lets the user specify "custom" user-defined options,
similar to the matrix config parameter. Since this option is not
directly referenced anywhere in the code, its main use is to inject data
into Heat environment files. The YAML format allows the flexibility to
pass any object the user might want to add during their scenario
run, or it can be left blank without any issues.
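As a hedged sketch of what this might look like in a scenario file (only
the option name custom_user_opts comes from this commit; the keys and
values below are hypothetical):

```yaml
# Hypothetical scenario fragment: custom_user_opts holds arbitrary
# user-defined YAML that the code never reads directly, so any keys
# the user wants to inject into Heat environment files will work.
custom_user_opts:
  contrail_asn: 64512        # example value; varies per deployment
  extra_labels:
    owner: network-qa
```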
Change-Id: I96b6e578eb59813e5e0c8a2fe7a14c5ecc369be7
Issue: At AT&T we have large, complex test stacks that make putting
everything into a single heat template and environment file very
cumbersome. Large monolithic templates make it harder to debug failures
and to maintain, extend, and organize these tests. To solve this issue
we have enhanced Shaker to support specifying support templates with
environment files.
This commit enhances Shaker to add the ability to define
support_templates with env_files in test definitions.
Support templates spin up "support type" resources before the actual
test stack is spun up. This could range from networks to volumes to
anything Heat can create. The support resources do not rely on
resources created in the test stack; they set up a "foundation" for
the test stack. The test stack can then reference these resources by
name (i.e. assume they exist by the time the test stack is spun up).
While the example provided with this commit is simple, and the support
networks that get created are not directly used in the test, it
shows the basic principles of how support templates work.
As a real-world example and to give an idea of the complexity this
enhancement is trying to solve, we have a test definition that looks
like this:
support_templates:
  -
    Base:
      template: templates/module_1_base.yaml
      env_file: env/module_1_base.env
  -
    SI_L2:
      template: templates/module_2_si_l2.yaml
      env_file: env/module_2_si_l2.env
  -
    SI_L3:
      template: templates/module_3_si_l3.yaml
      env_file: env/module_3_si_l3.env
template: templates/module_4_master_servant.yaml
env_file: env/module_4_master_servant.env
The first support stack (module_1) sets up some "base" network
resources. This stack provides some network resources used by the SI_L2
and SI_L3 support stacks.
SI_L2 is a support stack with 2 VMs that do Contrail service chaining
on an L2 network.
SI_L3 is a support stack with 2 VMs that do Contrail service chaining
on an L3 network.
Then the test stack (module 4) gets spun up on N computes and
runs traffic across the SI_L2 and SI_L3 service-chained networks.
After the test run, all stacks are cleaned up.
Using the concept of support stacks allows us to better organize and
maintain our complex tests and allows for faster debugging due to the
"layered" nature of the setup.
Support templates also allow us to spin up more Shaker test threads
that use the same support templates simultaneously to better simulate
real-world network traffic. This also reduces the setup time of certain
tests we have, since the support stacks already exist.
This enhancement does not alter existing Shaker functionality and
is fully backwards compatible.
Change-Id: Ife51bc55874c6ec4faac221bab8f9f0eea175fdc
Scenarios under the `test/` subfolder are used for integration testing
or as samples. Skip them in CLI help and keep only production-ready ones.
Change-Id: I69a428480055c825aa3b590f02b4c09b346a6012
Problem: We have large compute deployments (400+ computes), but different
groups are constantly working in the same labs/deployments, so it becomes
unrealistic to try to run tests using all computes because the Shaker
tests could disrupt other work. The accommodation is set in test
definitions, and it can be changed, but that is also cumbersome when
you're running a large number of tests in a single run.
This commit aims to solve the problem by allowing Shaker users to have
more control over the availability_zone and compute_nodes accommodation
by specifying the fields in either the cfg input file or as CLI
arguments.
This change allows Shaker to "override" what has been set in the test
definition and acts like a "global" setting for all tests in a
single run to use a particular AZ and/or fixed number of compute_nodes
without having to alter N tests.
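As a sketch of the cfg-input-file path (the option names come from this
commit; the section header and values below are assumptions for
illustration):

```ini
# Hypothetical shaker cfg fragment: pin every scenario in a run to one
# AZ and a fixed compute count, overriding the test definitions.
[DEFAULT]
scenario_availability_zone = nova-az1
scenario_compute_nodes = 4
```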
Change-Id: I5459150a0eac9bed6a6a62a126bd9ec0648941fe
At AT&T our OpenStack Airship deployments use identity API v3, but
require the os-interface parameter to be passed in order to correctly
authenticate.
This commit adds support for specifying and using os-interface in Shaker.
Change-Id: Icad7fc823dc302632bebe5d18f5cfe34bcc00b06
OpenStack profiling can be enabled by setting the --os-profile parameter.
The feature requires the osprofiler library. If the library is not
present, a warning message is shown.
Change-Id: I2c1b0cbd99450236b3720e19700f99cfdc14e378
Closes-Bug: 1666585
This commit adds image builder support for CentOS by generalizing the
image_elements using package-installs.yaml and pkg-map. A few other
executables have been modified to ensure compatibility. Ubuntu is
kept as the default.
Change-Id: I4122155d82ad64867efdb6d88536624f677c207d
Shaker-image-builder can build images in one of two modes:
* "heat" - the classic approach with the help of Heat, but it also
requires Glance v1
* "dib" - build a local image using diskimage-builder and then
upload it into Glance
The mode can be selected automatically (falling back to "dib" if no
Glance v1 is present) or specified manually (--image-builder-mode).
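As an illustrative invocation (the tool name and flag come from this
commit; forcing the mode rather than relying on auto-detection is shown
only as an example):

```shell
# Build the image locally with diskimage-builder and upload it to
# Glance, bypassing the automatic heat/dib mode selection.
shaker-image-builder --image-builder-mode dib
```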
Change-Id: Ibbcb99e85012ecab012612dccea38d40100625f5
* Deprecate pyshaker-agent and use the full pyshaker package instead.
* Upgrade the Ubuntu image to Xenial and remove the Trusty-based one.
Change-Id: I01762f648a79cc5e90eb55e757975563ed1fc06d
The --scenario option now accepts a list of scenarios. They are executed
one by one, and the results are aggregated and stored in a single JSON
file or report. The new --artifacts-dir option allows storing individual
results as well.
Change-Id: If9b8f0650635a8e2534f757de158a24461f045ba
Shaker can now be run in a Docker container! The container
does all the work: it creates the image, runs the scenario,
and cleans everything up.
Change-Id: I9cbcb7e79d08f9526cf1ee8f3ccafab34d3935c2