User accounts are configured during the following commands:
kayobe seed hypervisor host configure
kayobe seed host configure
kayobe overcloud host configure
The users are defined by the following variables:
seed_hypervisor_users
seed_users
controller_users
monitoring_users
The required format is described by the singleplatform-eng.users role
on Ansible Galaxy.
Any additional control plane hosts not in the controllers or monitoring
groups should define a 'users' variable.
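As an illustration, a seed_users entry might look like the following. The field names follow the singleplatform-eng.users role's format; the values shown here are placeholders, not defaults:

```yaml
# Hypothetical example; see the singleplatform-eng.users role on
# Ansible Galaxy for the authoritative list of supported fields.
seed_users:
  - username: stack
    name: Stack User
    groups:
      - wheel
    ssh_key:
      - "ssh-rsa AAAA... user@host"
```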
Neutron network services are now mapped to hosts in the network group,
so it is these hosts that should be added as trunk ports to VLAN networks
by the networking-generic-switch neutron ML2 mechanism driver, rather
than the controllers.
Previously a URL based on the provisioning network IP was being advertised to
nodes during provisioning. The issue here is that the API server might not be
listening on the provisioning network. Instead we advertise the internal network
endpoint and assume that if any routes are required to enable this then they
have been created elsewhere.
These allow compute nodes on the inspection and provisioning networks to
use different gateways from the control plane hosts that also access
these networks.
In environments without Swift we are currently unable to store hardware
introspection data. The inspection_store container runs an nginx server
that provides a restricted, Swift-like HTTP API using WebDAV, supporting
upload and retrieval of introspection data.
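A minimal sketch of the kind of nginx configuration involved. The listen port, paths, and directives here are assumptions for illustration, not the configuration actually shipped with the inspection_store container:

```nginx
# Sketch only: accept WebDAV PUT uploads of introspection data and
# serve them back over plain HTTP GET.
server {
    listen 8080;
    root /var/lib/inspection-store;
    location / {
        dav_methods PUT;          # allow uploads
        create_full_put_path on;  # create intermediate directories
        autoindex on;             # allow listing for retrieval
    }
}
```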
This allows us to infer the kolla-ansible virtualenv as
$KAYOBE_BASE_PATH/venvs/kolla when $KOLLA_VENV isn't set.
Further work on the distinction between local and remote paths is still required.
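The fallback can be sketched in shell as follows. The variable names are taken from the note above; the default base path is an assumption for illustration:

```shell
# Infer the kolla-ansible virtualenv when KOLLA_VENV is not set.
# /opt/kayobe is a placeholder default, not the real one.
KAYOBE_BASE_PATH=${KAYOBE_BASE_PATH:-/opt/kayobe}
KOLLA_VENV=${KOLLA_VENV:-$KAYOBE_BASE_PATH/venvs/kolla}
echo "$KOLLA_VENV"
```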
These are group-specific, and configured via the following variables:
controller_sysctl_parameters
monitoring_sysctl_parameters
seed_sysctl_parameters
seed_hypervisor_parameters
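For example, controller sysctl parameters might be set as follows. This assumes a simple mapping of parameter names to values; check the kayobe configuration reference for the exact accepted format:

```yaml
# Assumed format: a mapping of sysctl parameter names to values.
controller_sysctl_parameters:
  net.ipv4.ip_forward: 1
  net.ipv4.conf.all.rp_filter: 0
```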
This allows a deployer to customise their inventory at various levels, by
providing a custom inventory template for one or more of the sections of the
inventory.
* Top level groups define the roles of hosts, e.g. controller or compute.
* Components define groups of services, e.g. nova or ironic.
* Services define single containers, e.g. nova-compute or ironic-api.
This allows for the full set of interfaces to be overridden by setting one
of these variables, rather than simply extending the default list via
<type>_extra_network_interfaces.
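For example, the full controller interface list could be overridden rather than extended. The variable name follows the <type>_... pattern described above; the network names in the list are illustrative:

```yaml
# Replaces the full list of networks attached to controllers,
# instead of appending via controller_extra_network_interfaces.
controller_network_interfaces:
  - oob_oc_net_name
  - provision_oc_net_name
```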
Previously, the external network carried both public API traffic and
neutron external network traffic. In some cases it is useful to separate
these networks. The public network now carries the public API traffic,
leaving the external network to carry neutron external network traffic
alone. For backwards compatibility, the public network defaults to the
external network.
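Assuming the kayobe convention of <name>_net_name variables, separating the two networks might look like this:

```yaml
# By default public_net_name falls back to the external network for
# backwards compatibility; set it explicitly to separate the traffic.
public_net_name: public
external_net_name: external
```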
To avoid a reboot when running the kayobe scripts, we disable SELinux
before running the scripts. We do this using the provision reload plugin
that allows us to do a vagrant reload while provisioning the VM.
When doing vagrant halt then vagrant up we want the system to keep
working. The easiest way to do this is to use a VirtualBox plugin to
install the guest tools, then use the tools to sync the /vagrant
directory, rather than falling back to rsync on every boot of the VM.
The rsync loses all writes made since the last boot, forcing a full
reprovision.
The CLI command is:
kayobe overcloud introspection data save [--output-dir <dir>] [--output-format <format>]
This command will save introspection data collected by the seed host's ironic
inspector service to the control host for analysis.
The MichaelRigart.interfaces role has now been updated to support more complex
network topologies, including VLAN subinterfaces of bridges, and bridges with
a bonded interface as a port.
Rather than specifying kernel command line arguments directly, configuration of
IPA introspection data collectors and benchmarks is now possible by extending
lists of collector (ipa_collect_extra) and benchmark (ipa_benchmark_extra)
names. LLDP collection is now controlled via a flag, ipa_collect_lldp.
Additional kernel arguments may be passed via ipa_kernel_options_extra.
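Putting these together, a configuration might look like the following. The variable names are taken from the note above; the collector, benchmark, and kernel option values are illustrative:

```yaml
# Extend the default collector and benchmark lists rather than
# replacing kernel arguments wholesale. Values are examples only.
ipa_collect_extra:
  - logs
ipa_benchmark_extra:
  - cpu
ipa_collect_lldp: true
ipa_kernel_options_extra:
  - "console=ttyS0"
```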
Overcloud deployment images can now be built via:
kayobe overcloud deployment image build
This should be done prior to running kayobe overcloud service deploy.
In order to build IPA images, the ipa_build_images variable should be
set to True. In this case, these images will be used by the overcloud's
ironic inspector service during hardware inspection, and by ironic
during provisioning.
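Enabling local image builds is a single variable, named in the note above:

```yaml
# Build IPA deployment images locally rather than using prebuilt ones.
ipa_build_images: true
```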
The CLI command is:
kayobe seed deployment image build
This command will build Ironic Python Agent (IPA) kernel and ramdisk images
using the Diskimage Builder (DIB) ironic-agent element. The built images will
be copied to the appropriate location in the bifrost_deploy container on the
seed.
This allows us to build a customised image with site- or hardware-specific
extensions.