
<?xml version="1.0" encoding="UTF-8"?>
<chapter version="5.0" xml:id="customize"
         xmlns="http://docbook.org/ns/docbook"
         xmlns:xlink="http://www.w3.org/1999/xlink"
         xmlns:xi="http://www.w3.org/2001/XInclude"
         xmlns:ns5="http://www.w3.org/2000/svg"
         xmlns:ns4="http://www.w3.org/1998/Math/MathML"
         xmlns:ns3="http://www.w3.org/1999/xhtml"
         xmlns:ns="http://docbook.org/ns/docbook">
  <?dbhtml stop-chunking?>

  <title>Customization</title>

  <para>OpenStack might not do everything you need it to do out of the box. To
  add a new feature, you can follow different paths.<indexterm
      class="singular">
      <primary>customization</primary>

      <secondary>paths available</secondary>
    </indexterm></para>
  <para>To take the first path, you can modify the OpenStack code directly.
  Learn <link xlink:href="https://wiki.openstack.org/wiki/How_To_Contribute">how
  to contribute</link>, follow the <link
  xlink:href="https://wiki.openstack.org/wiki/GerritWorkflow">code review
  workflow</link>, make your changes, and contribute them back to the upstream
  OpenStack project. This path is recommended if the feature you need requires
  deep integration with an existing project. The community is always open to
  contributions and welcomes new functionality that follows the
  feature-development guidelines. This path still requires you to use DevStack
  for testing your feature additions, so this chapter walks you through the
  DevStack environment.<indexterm class="singular">
      <primary>OpenStack community</primary>

      <secondary>customization and</secondary>
    </indexterm></para>

  <para>For the second path, you can write new features and plug them in using
  changes to a configuration file. If the project where your feature would
  need to reside uses the Python Paste framework, you can create middleware
  for it and plug it in through configuration. There may also be specific ways
  of customizing a project, such as creating a new scheduler driver for
  Compute or a custom tab for the dashboard.</para>

  <para>This chapter focuses on the second path for customizing OpenStack by
  providing two examples for writing new features. The first example shows how
  to modify Object Storage (swift) middleware to add a new feature, and the
  second example provides a new scheduler feature for OpenStack Compute
  (nova). To customize OpenStack this way, you need a development environment.
  The best way to get an environment up and running quickly is to run DevStack
  within your cloud.</para>
  <section xml:id="devstack">
    <title>Create an OpenStack Development Environment</title>

    <para>To create a development environment, you can use DevStack. DevStack
    is essentially a collection of shell scripts and configuration files that
    builds an OpenStack development environment for you. You use it to create
    such an environment for developing a new feature.<indexterm
        class="singular">
        <primary>customization</primary>

        <secondary>development environment creation for</secondary>
      </indexterm><indexterm class="singular">
        <primary>development environments, creating</primary>
      </indexterm><indexterm class="singular">
        <primary>DevStack</primary>

        <secondary>development environment creation</secondary>
      </indexterm></para>

    <para>You can find all of the documentation at the <link
    xlink:href="http://devstack.org/">DevStack</link> website.</para>

    <procedure>
      <title>To run DevStack for the stable Havana branch on an instance in
      your OpenStack cloud:</title>

      <step>
        <para>Boot an instance from the dashboard or the nova command-line
        interface (CLI) with the following parameters:</para>

        <itemizedlist>
          <listitem>
            <para>Name: devstack-havana</para>
          </listitem>

          <listitem>
            <para>Image: Ubuntu 12.04 LTS</para>
          </listitem>

          <listitem>
            <para>Memory Size: 4 GB RAM</para>
          </listitem>

          <listitem>
            <para>Disk Size: minimum 5 GB</para>
          </listitem>
        </itemizedlist>

        <para>If you are using the <code>nova</code> client, specify
        <code>--flavor 3</code> for the <code>nova boot</code> command to get
        adequate memory and disk sizes.</para>
      </step>
      <step>
        <para>Log in and set up DevStack. Here's an example of the commands
        you can use to set up DevStack on a virtual machine:</para>

        <substeps>
          <step>
            <para>Log in to the instance:

            <screen><prompt>$</prompt> <userinput>ssh <replaceable>username</replaceable>@<replaceable>my.instance.ip.address</replaceable></userinput></screen></para>
          </step>

          <step>
            <para>Update the virtual machine's operating system:

            <screen><prompt>#</prompt> <userinput>apt-get -y update</userinput></screen></para>
          </step>

          <step>
            <para>Install git:

            <screen><prompt>#</prompt> <userinput>apt-get -y install git</userinput></screen></para>
          </step>

          <step>
            <para>Clone the stable/havana branch of the
            <literal>devstack</literal> repository:

            <screen><prompt>$</prompt> <userinput>git clone https://github.com/openstack-dev/devstack.git -b stable/havana devstack/</userinput></screen></para>
          </step>

          <step>
            <para>Change to the <literal>devstack</literal> repository:

            <screen><prompt>$</prompt> <userinput>cd devstack</userinput></screen></para>
          </step>
        </substeps>
      </step>
      <step performance="optional">
        <para>(Optional) If you've logged in to your instance as the root
        user, you must create a "stack" user; otherwise you'll run into
        permission issues. If you've logged in as a user other than root, you
        can skip these steps:</para>

        <substeps>
          <step>
            <para>Run the DevStack script to create the stack user:</para>

            <screen><prompt>#</prompt> <userinput>tools/create-stack-user.sh</userinput></screen>
          </step>

          <step>
            <para>Give ownership of the <literal>devstack</literal> directory
            to the stack user:</para>

            <screen><prompt>#</prompt> <userinput>chown -R stack:stack /root/devstack</userinput></screen>
          </step>

          <step>
            <para>Set some permissions you can use to view the DevStack screen
            later:</para>

            <screen><prompt>#</prompt> <userinput>chmod o+rwx /dev/pts/0</userinput></screen>
          </step>

          <step>
            <para>Switch to the stack user:</para>

            <screen><prompt>$</prompt> <userinput>su stack</userinput></screen>
          </step>
        </substeps>
      </step>
      <step>
        <para>Edit the <filename>localrc</filename> configuration file that
        controls what DevStack will deploy. Copy the example
        <filename>localrc</filename> file at the end of this section (<xref
        linkend="localrc" />):</para>

        <screen><prompt>$</prompt> <userinput>vim localrc</userinput></screen>
      </step>

      <step>
        <para>Run the stack script that will install OpenStack:

        <screen><prompt>$</prompt> <userinput>./stack.sh</userinput></screen></para>
      </step>

      <step>
        <para>When the stack script is done, you can open the screen session
        it started to view all of the running OpenStack services:

        <screen><prompt>$</prompt> <userinput>screen -r stack</userinput></screen></para>
      </step>

      <step>
        <para>Press <keycombo>
            <keycap>Ctrl</keycap>

            <keycap>A</keycap>
          </keycombo> followed by <keycap>0</keycap> to go to the first
        <literal>screen</literal> window.</para>
      </step>
    </procedure>
    <note>
      <para><itemizedlist>
          <listitem>
            <para>The <code>stack.sh</code> script takes a while to run.
            Perhaps you can take this opportunity to <link
            xlink:href="https://www.openstack.org/join/">join the OpenStack
            Foundation</link>.</para>
          </listitem>

          <listitem>
            <para><literal>Screen</literal> is a useful program for viewing
            many related services at once. For more information, see the <link
            xlink:href="http://aperiodic.net/screen/quick_reference">GNU screen
            quick reference</link>.</para>
          </listitem>
        </itemizedlist></para>
    </note>

    <para>Now that you have an OpenStack development environment, you're free
    to hack around without worrying about damaging your production deployment.
    <xref linkend="localrc" /> provides a working environment for running
    OpenStack Identity, Compute, Block Storage, Image service, the OpenStack
    dashboard, and Object Storage with the stable/havana branches as the
    starting point.</para>
    <example xml:id="localrc">
      <title>localrc</title>

      <programlisting language="bash"><?db-font-size 75%?># Credentials
ADMIN_PASSWORD=devstack
MYSQL_PASSWORD=devstack
RABBIT_PASSWORD=devstack
SERVICE_PASSWORD=devstack
SERVICE_TOKEN=devstack

# OpenStack Identity Service branch
KEYSTONE_BRANCH=stable/havana

# OpenStack Compute branch
NOVA_BRANCH=stable/havana

# OpenStack Block Storage branch
CINDER_BRANCH=stable/havana

# OpenStack Image service branch
GLANCE_BRANCH=stable/havana

# OpenStack Dashboard branch
HORIZON_BRANCH=stable/havana

# OpenStack Object Storage branch
SWIFT_BRANCH=stable/havana

enable_service swift

# Object Storage Settings
SWIFT_HASH=66a3d6b56c1f479c8b4e70ab5c2000f5
SWIFT_REPLICAS=1

# Block Storage Setting
VOLUME_BACKING_FILE_SIZE=20480M

# Output
LOGFILE=/opt/stack/logs/stack.sh.log
VERBOSE=True
LOG_COLOR=False
SCREEN_LOGDIR=/opt/stack/logs</programlisting>
    </example>
  </section>
  <section xml:id="swift_middleware_example">
    <title>Customizing Object Storage (Swift) Middleware</title>

    <para>OpenStack Object Storage, known as swift when reading the code, is
    based on the Python <link
    xlink:href="http://pythonpaste.org/">Paste</link> framework. The best
    introduction to its architecture is <link
    xlink:href="http://pythonpaste.org/do-it-yourself-framework.html">A
    Do-It-Yourself Framework</link>. Because of the swift project's use of
    this framework, you are able to add features to a project by placing some
    custom code in a project's pipeline without having to change any of the
    core code.<indexterm class="singular">
        <primary>Paste framework</primary>
      </indexterm><indexterm class="singular">
        <primary>Python</primary>
      </indexterm><indexterm class="singular">
        <primary>swift</primary>

        <secondary>swift middleware</secondary>
      </indexterm><indexterm class="singular">
        <primary>Object Storage</primary>

        <secondary>customization of</secondary>
      </indexterm><indexterm class="singular">
        <primary>customization</primary>

        <secondary>Object Storage</secondary>
      </indexterm><indexterm class="singular">
        <primary>DevStack</primary>

        <secondary>customizing Object Storage (swift)</secondary>
      </indexterm></para>

    <para>Imagine a scenario where you have public access to one of your
    containers, but what you really want is to restrict access to it to a set
    of IP addresses based on a whitelist. In this example, we'll create a
    piece of middleware for swift that allows access to a container from only
    a set of IP addresses, as determined by the container's metadata items.
    Only those IP addresses that you explicitly whitelist using the
    container's metadata will be able to access the container.</para>
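    <para>Before wiring anything into swift, it can help to see the whitelist
    decision in isolation. The following sketch is plain Python, not swift
    code; the <code>container_meta</code> dictionary is a hypothetical
    stand-in for the metadata that swift stores for a container:</para>

    <programlisting language="python"><?db-font-size 75%?># Illustrative sketch of the whitelist check the middleware will perform.
def is_allowed(remote_ip, container_meta, local_ip='127.0.0.1'):
    """Return True if remote_ip matches any 'allow*' metadata value."""
    allow_ips = set(
        value for key, value in container_meta.items()
        if key.startswith('allow')
    )
    # The proxy host itself is always allowed.
    allow_ips.add(local_ip)
    return remote_ip in allow_ips

container_meta = {'allow-dev': '192.168.0.20', 'owner': 'ops'}
print(is_allowed('192.168.0.20', container_meta))  # True
print(is_allowed('10.0.0.9', container_meta))      # False</programlisting>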
    <warning>
      <para>This example is for illustrative purposes only. It should not be
      used as a container IP whitelist solution without further development
      and extensive security testing.<indexterm class="singular">
          <primary>security issues</primary>

          <secondary>middleware example</secondary>
        </indexterm></para>
    </warning>

    <para>When you join the screen session that <code>stack.sh</code> starts
    with <code>screen -r stack</code>, you see a screen for each service
    running, which can be a few or several, depending on how many services you
    configured DevStack to run.</para>
    <para>The asterisk * indicates which screen window you are viewing. This
    example shows we are viewing the key (for keystone) screen window:</para>

    <screen><computeroutput>0$ shell 1$ key* 2$ horizon 3$ s-proxy 4$ s-object 5$ s-container 6$ s-account</computeroutput></screen>

    <para>The purpose of each screen window is as follows:</para>

    <variablelist>
      <varlistentry>
        <term><code>shell</code></term>

        <listitem>
          <para>A shell where you can get some work done</para>
        </listitem>
      </varlistentry>

      <varlistentry>
        <term><code>key*</code></term>

        <listitem>
          <para>The keystone service</para>
        </listitem>
      </varlistentry>

      <varlistentry>
        <term><code>horizon</code></term>

        <listitem>
          <para>The horizon dashboard web application</para>
        </listitem>
      </varlistentry>

      <varlistentry>
        <term><code>s-{name}</code></term>

        <listitem>
          <para>The swift services</para>
        </listitem>
      </varlistentry>
    </variablelist>
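    <para>The middleware you are about to write follows the standard WSGI
    wrapping pattern that Paste builds its pipelines from: each filter holds a
    reference to the next application and forwards requests to it. This
    minimal sketch is generic WSGI, not swift code, and the path and names in
    it are illustrative:</para>

    <programlisting language="python"><?db-font-size 75%?># Generic WSGI middleware pattern (illustrative, not swift code).
def simple_app(env, start_response):
    """The innermost application; swift's proxy server plays this role."""
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'hello']

class LoggingMiddleware(object):
    """Records each request path, then passes the request down the chain."""
    def __init__(self, app):
        self.app = app
        self.paths = []

    def __call__(self, env, start_response):
        self.paths.append(env.get('PATH_INFO'))
        return self.app(env, start_response)

statuses = []
def start_response(status, headers):
    statuses.append(status)

app = LoggingMiddleware(simple_app)
body = app({'PATH_INFO': '/v1/AUTH_test/c'}, start_response)
print(body)       # [b'hello']
print(app.paths)  # ['/v1/AUTH_test/c']</programlisting>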
    <procedure>
      <title>To create the middleware and plug it in through Paste
      configuration:</title>

      <para>All of the code for OpenStack lives in <code>/opt/stack</code>. Go
      to the swift directory in the <code>shell</code> screen and edit your
      middleware module.</para>

      <step>
        <para>Change to the directory where Object Storage is
        installed:</para>

        <screen><prompt>$</prompt> <userinput>cd /opt/stack/swift</userinput></screen>
      </step>

      <step>
        <para>Create the <literal>ip_whitelist.py</literal> Python source code
        file:</para>

        <screen><prompt>$</prompt> <userinput>vim swift/common/middleware/ip_whitelist.py</userinput></screen>
      </step>
      <step>
        <para>Copy the code in <xref linkend="ip_whitelist" /> into
        <filename>ip_whitelist.py</filename>. The following code is a
        middleware example that restricts access to a container based on IP
        address as explained at the beginning of the section. Middleware
        passes the request on to another application. This example uses the
        swift "swob" library to wrap Web Server Gateway Interface (WSGI)
        requests and responses into objects for swift to interact with. When
        you're done, save and close the file.</para>

        <example xml:id="ip_whitelist">
          <title>ip_whitelist.py</title>

          <programlisting language="python"><?db-font-size 75%?># vim: tabstop=4 shiftwidth=4 softtabstop=4
# Copyright (c) 2014 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import socket

from swift.common.utils import get_logger
from swift.proxy.controllers.base import get_container_info
from swift.common.swob import Request, Response

class IPWhitelistMiddleware(object):
    """
    IP Whitelist Middleware

    Middleware that allows access to a container from only a set of IP
    addresses as determined by the container's metadata items that start
    with the prefix 'allow'. E.g. allow-dev=192.168.0.20
    """

    def __init__(self, app, conf, logger=None):
        self.app = app

        if logger:
            self.logger = logger
        else:
            self.logger = get_logger(conf, log_route='ip_whitelist')

        self.deny_message = conf.get('deny_message', "IP Denied")
        self.local_ip = socket.gethostbyname(socket.gethostname())

    def __call__(self, env, start_response):
        """
        WSGI entry point.
        Wraps env in swob.Request object and passes it down.

        :param env: WSGI environment dictionary
        :param start_response: WSGI callable
        """
        req = Request(env)

        try:
            version, account, container, obj = req.split_path(1, 4, True)
        except ValueError:
            return self.app(env, start_response)

        container_info = get_container_info(
            req.environ, self.app, swift_source='IPWhitelistMiddleware')

        remote_ip = env['REMOTE_ADDR']
        self.logger.debug("Remote IP: %(remote_ip)s",
                          {'remote_ip': remote_ip})

        meta = container_info['meta']
        allow = {k: v for k, v in meta.iteritems() if k.startswith('allow')}
        allow_ips = set(allow.values())
        allow_ips.add(self.local_ip)
        self.logger.debug("Allow IPs: %(allow_ips)s",
                          {'allow_ips': allow_ips})

        if remote_ip in allow_ips:
            return self.app(env, start_response)
        else:
            self.logger.debug(
                "IP %(remote_ip)s denied access to Account=%(account)s "
                "Container=%(container)s. Not in %(allow_ips)s", locals())
            return Response(
                status=403,
                body=self.deny_message,
                request=req)(env, start_response)


def filter_factory(global_conf, **local_conf):
    """
    paste.deploy app factory for creating WSGI proxy apps.
    """
    conf = global_conf.copy()
    conf.update(local_conf)

    def ip_whitelist(app):
        return IPWhitelistMiddleware(app, conf)
    return ip_whitelist</programlisting>
        </example>

        <para>There is a lot of useful information in <code>env</code> and
        <code>conf</code> that you can use to decide what to do with the
        request. To find out more about what properties are available, you can
        insert the following log statement into the <code>__init__</code>
        method:</para>

        <programlisting language="python"><?db-font-size 75%?>self.logger.debug("conf = %(conf)s", locals())</programlisting>

        <para>and the following log statement into the <code>__call__</code>
        method:</para>

        <programlisting language="python"><?db-font-size 75%?>self.logger.debug("env = %(env)s", locals())</programlisting>
      </step>
      <step>
        <para>To plug this middleware into the swift Paste pipeline, you edit
        one configuration file,
        <filename>/etc/swift/proxy-server.conf</filename>:</para>

        <screen><prompt>$</prompt> <userinput>vim /etc/swift/proxy-server.conf</userinput></screen>
      </step>

      <step>
        <para>Find the <code>[filter:ratelimit]</code> section in
        <filename>/etc/swift/proxy-server.conf</filename>, and copy in the
        following configuration section after it:</para>

        <programlisting language="ini"><?db-font-size 75%?>[filter:ip_whitelist]
paste.filter_factory = swift.common.middleware.ip_whitelist:filter_factory
# You can override the default log routing for this filter here:
# set log_name = ratelimit
# set log_facility = LOG_LOCAL0
# set log_level = INFO
# set log_headers = False
# set log_address = /dev/log
deny_message = You shall not pass!</programlisting>
      </step>

      <step>
        <para>Find the <code>[pipeline:main]</code> section in
        <filename>/etc/swift/proxy-server.conf</filename>, and add
        <code>ip_whitelist</code> after ratelimit to the list like so. When
        you're done, save and close the file:</para>

        <programlisting language="ini"><?db-font-size 75%?>[pipeline:main]
pipeline = catch_errors gatekeeper healthcheck proxy-logging cache bulk tempurl ratelimit ip_whitelist ...</programlisting>
      </step>
      <step>
        <para>Restart the <literal>swift proxy</literal> service to make swift
        use your middleware. Start by switching to the
        <literal>swift-proxy</literal> screen:</para>

        <substeps>
          <step>
            <para>Press <keycombo>
                <keycap>Ctrl</keycap>

                <keycap>A</keycap>
              </keycombo> followed by <keycap>3</keycap>.</para>
          </step>

          <step>
            <para>Press <keycombo>
                <keycap>Ctrl</keycap>

                <keycap>C</keycap>
              </keycombo> to kill the service.</para>
          </step>

          <step>
            <para>Press Up Arrow to bring up the last command.</para>
          </step>

          <step>
            <para>Press Enter to run it.</para>
          </step>
        </substeps>
      </step>
      <step>
        <para>Test your middleware with the <code>swift</code> CLI. Start
        by switching to the shell screen and finish by switching back to the
        <code>swift-proxy</code> screen to check the log output:</para>

        <substeps>
          <step>
            <para>Press <keycombo>
                <keycap>Ctrl</keycap>

                <keycap>A</keycap>
              </keycombo> followed by <keycap>0</keycap>.</para>
          </step>

          <step>
            <para>Make sure you're in the <literal>devstack</literal>
            directory:</para>

            <screen><prompt>$</prompt> <userinput>cd /root/devstack</userinput></screen>
          </step>

          <step>
            <para>Source openrc to set up your environment variables for the
            CLI:</para>

            <screen><prompt>$</prompt> <userinput>source openrc</userinput></screen>
          </step>

          <step>
            <para>Create a container called
            <literal>middleware-test</literal>:</para>

            <screen><prompt>$</prompt> <userinput>swift post middleware-test</userinput></screen>
          </step>

          <step>
            <para>Press <keycombo>
                <keycap>Ctrl</keycap>

                <keycap>A</keycap>
              </keycombo> followed by <keycap>3</keycap> to check the log
            output.</para>
          </step>
        </substeps>
      </step>
      <step>
        <para>Among the log statements you'll see the lines:</para>

        <screen><computeroutput>proxy-server Remote IP: my.instance.ip.address (txn: ...)
proxy-server Allow IPs: set(['my.instance.ip.address']) (txn: ...)</computeroutput></screen>

        <para>These two statements are produced by our middleware and show
        that the request was sent from our DevStack instance and was
        allowed.</para>
      </step>

      <step xml:id="test_middleware_step">
        <para>Test the middleware from outside DevStack on a remote machine
        that has access to your DevStack instance:</para>

        <substeps>
          <step>
            <para>Install the <code>keystone</code> and <code>swift</code>
            clients on your local machine:</para>

            <screen><prompt>#</prompt> <userinput>pip install python-keystoneclient python-swiftclient</userinput></screen>
          </step>

          <step>
            <para>Attempt to list the objects in the
            <literal>middleware-test</literal> container:</para>

            <screen><prompt>$</prompt> <userinput>swift --os-auth-url=http://my.instance.ip.address:5000/v2.0/ \
--os-region-name=RegionOne --os-username=demo:demo \
--os-password=devstack list middleware-test</userinput>
<computeroutput>Container GET failed: http://my.instance.ip.address:8080/v1/AUTH_.../
middleware-test?format=json 403 Forbidden   You shall not pass!</computeroutput></screen>
          </step>
        </substeps>
      </step>
      <step>
        <para>Press <keycombo>
            <keycap>Ctrl</keycap>

            <keycap>A</keycap>
          </keycombo> followed by <keycap>3</keycap> to check the log output.
        Look at the swift log statements again, and among the log statements,
        you'll see the lines:</para>

        <screen><computeroutput>proxy-server Authorizing from an overriding middleware (i.e: tempurl) (txn: ...)
proxy-server ... IPWhitelistMiddleware
proxy-server Remote IP: my.local.ip.address (txn: ...)
proxy-server Allow IPs: set(['my.instance.ip.address']) (txn: ...)
proxy-server IP my.local.ip.address denied access to Account=AUTH_... \
Container=None. Not in set(['my.instance.ip.address']) (txn: ...)</computeroutput></screen>

        <para>Here we can see that the request was denied because the remote
        IP address wasn't in the set of allowed IPs.</para>
      </step>

      <step>
        <para>Back in your DevStack instance on the shell screen, add some
        metadata to your container to allow the request from the remote
        machine:</para>

        <substeps>
          <step>
            <para>Press <keycombo>
                <keycap>Ctrl</keycap>

                <keycap>A</keycap>
              </keycombo> followed by <keycap>0</keycap>.</para>
          </step>

          <step>
            <para>Add metadata to the container to allow the IP:</para>

            <screen><prompt>$</prompt> <userinput>swift post --meta allow-dev:my.local.ip.address middleware-test</userinput></screen>
          </step>

          <step>
            <para>Now try the command from Step 10 again and it succeeds.
            There are no objects in the container, so there is nothing to
            list; however, there is also no error to report.</para>
          </step>
        </substeps>
      </step>
    </procedure>
    <warning>
      <para>Functional testing like this is not a replacement for proper unit
      and integration testing, but it serves to get you started.<indexterm
          class="singular">
          <primary>testing</primary>

          <secondary>functional testing</secondary>
        </indexterm><indexterm class="singular">
          <primary>functional testing</primary>
        </indexterm></para>
    </warning>

    <para>You can follow a similar pattern in other projects that use the
    Python Paste framework. Simply create a middleware module and plug it in
    through configuration. The middleware runs in sequence as part of that
    project's pipeline and can call out to other services as necessary. No
    project core code is touched. Look for a <code>pipeline</code> value in
    the project's <code>conf</code> or <code>ini</code> configuration files in
    <code>/etc/&lt;project&gt;</code> to identify projects that use
    Paste.</para>
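    <para>As a quick way to survey which services use Paste, you can read each
    pipeline value programmatically. This sketch uses Python 3's
    <code>configparser</code> on an inline sample rather than Paste's own
    loader, and the pipeline contents shown are hypothetical:</para>

    <programlisting language="python"><?db-font-size 75%?># Read a Paste pipeline value from an ini-style config (illustrative only).
import configparser

SAMPLE = """
[pipeline:main]
pipeline = catch_errors healthcheck cache ip_whitelist proxy-server
"""

parser = configparser.ConfigParser()
parser.read_string(SAMPLE)
filters = parser['pipeline:main']['pipeline'].split()
print(filters)
# ['catch_errors', 'healthcheck', 'cache', 'ip_whitelist', 'proxy-server']</programlisting>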
    <para>When your middleware is done, we encourage you to open source it and
    let the community know on the OpenStack mailing list. Perhaps others need
    the same functionality. They can use your code, provide feedback, and
    possibly contribute. If enough support exists for it, perhaps you can
    propose that it be added to the official swift <link
    xlink:href="https://github.com/openstack/swift/tree/master/swift/common/middleware">middleware</link>.</para>
  </section>
  <section xml:id="nova_scheduler_example">
    <title>Customizing the OpenStack Compute (nova) Scheduler</title>

    <para>Many OpenStack projects allow for customization of specific features
    using a driver architecture. You can write a driver that conforms to a
    particular interface and plug it in through configuration. For example,
    you can easily plug in a new scheduler for Compute. The existing
    schedulers for Compute are feature rich and well documented at <link
    xlink:href="http://docs.openstack.org/kilo/config-reference/content/section_compute-scheduler.html">Scheduling</link>.
    However, depending on your users' use cases, the existing schedulers might
    not meet your requirements. You might need to create a new
    scheduler.<indexterm class="singular">
        <primary>customization</primary>

        <secondary>OpenStack Compute (nova) Scheduler</secondary>
      </indexterm><indexterm class="singular">
        <primary>schedulers</primary>

        <secondary>customization of</secondary>
      </indexterm><indexterm class="singular">
        <primary>DevStack</primary>

        <secondary>customizing OpenStack Compute (nova) scheduler</secondary>
      </indexterm></para>

    <para>To create a scheduler, you must inherit from the class
    <code>nova.scheduler.driver.Scheduler</code>. Of the five methods that you
    can override, you <emphasis>must</emphasis> override the two methods
    marked with an asterisk (*) below:</para>
    <itemizedlist>
      <listitem>
        <para><code>update_service_capabilities</code></para>
      </listitem>

      <listitem>
        <para><code>hosts_up</code></para>
      </listitem>

      <listitem>
        <para><code>group_hosts</code></para>
      </listitem>

      <listitem>
        <para>* <code>schedule_run_instance</code></para>
      </listitem>

      <listitem>
        <para>* <code>select_destinations</code></para>
      </listitem>
    </itemizedlist>
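    <para>In skeleton form, a custom driver only has to fill in the two
    required methods. This sketch uses a local stand-in base class so it can
    run outside nova; the real base class is
    <code>nova.scheduler.driver.Scheduler</code>, and the return values here
    are purely illustrative:</para>

    <programlisting language="python"><?db-font-size 75%?># Skeleton of the driver interface (stand-in classes, illustrative only).
class Scheduler(object):
    """Local stand-in for nova.scheduler.driver.Scheduler."""
    def update_service_capabilities(self, *args, **kwargs):
        pass          # optional override
    def hosts_up(self, context, topic):
        return []     # optional override
    def group_hosts(self, context, group):
        return []     # optional override
    def schedule_run_instance(self, *args, **kwargs):
        raise NotImplementedError()   # must override
    def select_destinations(self, *args, **kwargs):
        raise NotImplementedError()   # must override

class MyScheduler(Scheduler):
    """A trivial driver that satisfies the required interface."""
    def schedule_run_instance(self, *args, **kwargs):
        return 'scheduled'
    def select_destinations(self, *args, **kwargs):
        return [{'host': 'host1'}]</programlisting>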
    <para>To demonstrate customizing OpenStack, we'll create an example of a
    Compute scheduler that randomly places an instance on a subset of hosts,
    depending on the originating IP address of the request and the prefix of
    the hostname. Such an example could be useful when you have a group of
    users on a subnet and you want all of their instances to start within some
    subset of your hosts.</para>
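    <para>The selection logic itself is small enough to sketch on its own: map
    the requester's IP prefix to a hostname prefix, then choose at random from
    the hosts that match. This mirrors the selection logic of the full driver
    example; the IP ranges and hostnames here are illustrative:</para>

    <programlisting language="python"><?db-font-size 75%?># Sketch of the IP-to-host-subset mapping (illustrative values).
import random

def hostname_prefix_for(remote_ip):
    """Map a requester's IP prefix to a group of compute hosts."""
    if remote_ip.startswith('10.1'):
        return 'doc'
    elif remote_ip.startswith('10.2'):
        return 'ops'
    return 'dev'

def pick_host(remote_ip, hosts):
    """Choose a random host whose name matches the requester's group."""
    prefix = hostname_prefix_for(remote_ip)
    candidates = [h for h in hosts if h.startswith(prefix)]
    return random.choice(candidates) if candidates else None

hosts = ['doc-1', 'doc-2', 'ops-1', 'dev-1']
print(pick_host('10.1.0.5', hosts))  # 'doc-1' or 'doc-2'</programlisting>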
    <warning>
      <para>This example is for illustrative purposes only. It should not be
      used as a scheduler for Compute without further development and <phrase
      role="keep-together">testing</phrase>.<indexterm class="singular">
          <primary>security issues</primary>

          <secondary>scheduler example</secondary>
        </indexterm></para>
    </warning>

    <para>When you join the screen session that <code>stack.sh</code> starts
    with <code>screen -r stack</code>, you are greeted with many screen
    windows:</para>

    <screen>0$ shell*  1$ key  2$ horizon  ...  9$ n-api  ...  14$ n-sch ...</screen>
    <variablelist>
      <varlistentry>
        <term><code>shell</code></term>

        <listitem>
          <para>A shell where you can get some work done</para>
        </listitem>
      </varlistentry>

      <varlistentry>
        <term><code>key</code></term>

        <listitem>
          <para>The keystone service</para>
        </listitem>
      </varlistentry>

      <varlistentry>
        <term><code>horizon</code></term>

        <listitem>
          <para>The horizon dashboard web application</para>
        </listitem>
      </varlistentry>

      <varlistentry>
        <term><code>n-{name}</code></term>

        <listitem>
          <para>The nova services</para>
        </listitem>
      </varlistentry>

      <varlistentry>
        <term><code>n-sch</code></term>

        <listitem>
          <para>The nova scheduler service</para>
        </listitem>
      </varlistentry>
    </variablelist>
<procedure>
|
|
<title>To create the scheduler and plug it in through
|
|
configuration:</title>
|
|
|
|
<step>
|
|
<para>The code for OpenStack lives in <code>/opt/stack</code>, so go
|
|
to the <literal>nova</literal> directory and edit your scheduler
|
|
module. Change to the directory where <literal>nova</literal> is
|
|
installed:</para>
|
|
|
|
<screen><prompt>$</prompt> <userinput>cd /opt/stack/nova</userinput></screen>
|
|
</step>
|
|
|
|
<step>
|
|
<para>Create the <filename>ip_scheduler.py</filename> Python source
|
|
code file:</para>
|
|
|
|
<screen><prompt>$</prompt> <userinput>vim nova/scheduler/ip_scheduler.py</userinput></screen>
|
|
</step>
|
|
|
|
<step>
|
|
<para>The code in <xref linkend="ip_scheduler" /> is a driver that
|
|
will schedule servers to hosts based on IP address as explained at the
|
|
beginning of the section. Copy the code into
|
|
<filename>ip_scheduler.py</filename>. When you're done, save and close
|
|
the file.</para>
|
|
|
|
<example xml:id="ip_scheduler">
|
|
<title>ip_scheduler.py</title>
|
|
|
|
          <programlisting language="python"><?db-font-size 65%?># vim: tabstop=4 shiftwidth=4 softtabstop=4
# Copyright (c) 2014 OpenStack Foundation
# All Rights Reserved.
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

"""
IP Scheduler implementation
"""

import random

from oslo.config import cfg

from nova.compute import rpcapi as compute_rpcapi
from nova import exception
from nova.openstack.common import log as logging
from nova.openstack.common.gettextutils import _
from nova.scheduler import driver

CONF = cfg.CONF
CONF.import_opt('compute_topic', 'nova.compute.rpcapi')
LOG = logging.getLogger(__name__)

class IPScheduler(driver.Scheduler):
    """
    Implements Scheduler as a random node selector based on
    IP address and hostname prefix.
    """

    def __init__(self, *args, **kwargs):
        super(IPScheduler, self).__init__(*args, **kwargs)
        self.compute_rpcapi = compute_rpcapi.ComputeAPI()

    def _filter_hosts(self, request_spec, hosts, filter_properties,
                      hostname_prefix):
        """Filter a list of hosts based on hostname prefix."""

        hosts = [host for host in hosts if host.startswith(hostname_prefix)]
        return hosts

    def _schedule(self, context, topic, request_spec, filter_properties):
        """Picks a host that is up at random."""

        elevated = context.elevated()
        hosts = self.hosts_up(elevated, topic)
        if not hosts:
            msg = _("Is the appropriate service running?")
            raise exception.NoValidHost(reason=msg)

        remote_ip = context.remote_address

        if remote_ip.startswith('10.1'):
            hostname_prefix = 'doc'
        elif remote_ip.startswith('10.2'):
            hostname_prefix = 'ops'
        else:
            hostname_prefix = 'dev'

        hosts = self._filter_hosts(request_spec, hosts, filter_properties,
                                   hostname_prefix)
        if not hosts:
            msg = _("Could not find another compute")
            raise exception.NoValidHost(reason=msg)

        host = random.choice(hosts)
        LOG.debug("Request from %(remote_ip)s scheduled to %(host)s" % locals())

        return host

    def select_destinations(self, context, request_spec, filter_properties):
        """Selects random destinations."""
        num_instances = request_spec['num_instances']
        # NOTE(timello): Returns a list of dicts with 'host', 'nodename' and
        # 'limits' as keys for compatibility with filter_scheduler.
        dests = []
        for i in range(num_instances):
            host = self._schedule(context, CONF.compute_topic,
                                  request_spec, filter_properties)
            host_state = dict(host=host, nodename=None, limits=None)
            dests.append(host_state)

        if len(dests) < num_instances:
            raise exception.NoValidHost(reason='')
        return dests

    def schedule_run_instance(self, context, request_spec,
                              admin_password, injected_files,
                              requested_networks, is_first_time,
                              filter_properties, legacy_bdm_in_spec):
        """Create and run an instance or instances."""
        instance_uuids = request_spec.get('instance_uuids')
        for num, instance_uuid in enumerate(instance_uuids):
            request_spec['instance_properties']['launch_index'] = num
            try:
                host = self._schedule(context, CONF.compute_topic,
                                      request_spec, filter_properties)
                updated_instance = driver.instance_update_db(context,
                                                             instance_uuid)
                self.compute_rpcapi.run_instance(context,
                        instance=updated_instance, host=host,
                        requested_networks=requested_networks,
                        injected_files=injected_files,
                        admin_password=admin_password,
                        is_first_time=is_first_time,
                        request_spec=request_spec,
                        filter_properties=filter_properties,
                        legacy_bdm_in_spec=legacy_bdm_in_spec)
            except Exception as ex:
                # NOTE(vish): we don't reraise the exception here to make sure
                #             that all instances in the request get set to
                #             error properly
                driver.handle_schedule_error(context, ex, instance_uuid,
                                             request_spec)</programlisting>
        </example>

        <para>There is a lot of useful information in <code>context</code>,
        <code>request_spec</code>, and <code>filter_properties</code> that you
        can use to decide where to schedule the instance. To find out more
        about what properties are available, you can insert the following log
        statements into the <code>schedule_run_instance</code> method of the
        scheduler above:</para>

        <programlisting language="python"><?db-font-size 65%?>LOG.debug("context = %(context)s" % {'context': context.__dict__})
LOG.debug("request_spec = %(request_spec)s" % locals())
LOG.debug("filter_properties = %(filter_properties)s" % locals())</programlisting>
      </step>

      <step>
        <para>To plug this scheduler into nova, edit one configuration file,
        <filename>/etc/nova/nova.conf</filename>:</para>

        <screen><prompt>$</prompt> <userinput>vim /etc/nova/nova.conf</userinput></screen>
      </step>

      <step>
        <para>Find the <code>scheduler_driver</code> config and change it like
        so:</para>

        <programlisting language="ini"><?db-font-size 65%?>scheduler_driver=nova.scheduler.ip_scheduler.IPScheduler</programlisting>
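        <para>For a sense of what this option implies, the dotted path names a
        module and a class, and the class is imported and instantiated when the
        scheduler service starts. The following sketch is a hedged
        illustration, not nova's actual loader: the
        <code>load_driver</code> helper is hypothetical and uses the standard
        library's <code>importlib</code> as a stand-in for nova's own import
        utilities, demonstrated on a stdlib class since nova itself is not
        importable outside an OpenStack environment:</para>

        <programlisting language="python"><?db-font-size 65%?>import importlib

def load_driver(dotted_path):
    """Import the class named by a dotted path like 'pkg.module.Class'."""
    module_name, class_name = dotted_path.rsplit('.', 1)
    module = importlib.import_module(module_name)
    return getattr(module, class_name)

# Stand-in for 'nova.scheduler.ip_scheduler.IPScheduler':
cls = load_driver('collections.OrderedDict')
print(cls.__name__)</programlisting>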
      </step>

      <step>
        <para>Restart the nova scheduler service to make nova use your
        scheduler. Start by switching to the <code>n-sch</code> screen:</para>

        <substeps>
          <step>
            <para>Press <keycombo>
                <keycap>Ctrl</keycap>

                <keycap>A</keycap>
              </keycombo> followed by <keycap>9</keycap>.</para>
          </step>

          <step>
            <para>Press <keycombo>
                <keycap>Ctrl</keycap>

                <keycap>A</keycap>
              </keycombo> followed by <keycap>N</keycap> until you reach the
            <code>n-sch</code> screen.</para>
          </step>

          <step>
            <para>Press <keycombo>
                <keycap>Ctrl</keycap>

                <keycap>C</keycap>
              </keycombo> to kill the service.</para>
          </step>

          <step>
            <para>Press Up Arrow to bring up the last command.</para>
          </step>

          <step>
            <para>Press Enter to run it.</para>
          </step>
        </substeps>
      </step>

      <step>
        <para>Test your scheduler with the nova CLI. Start by switching
        to the <code>shell</code> screen and finish by switching back to the
        <code>n-sch</code> screen to check the log output:</para>

        <substeps>
          <step>
            <para>Press <keycombo>
                <keycap>Ctrl</keycap>

                <keycap>A</keycap>
              </keycombo> followed by <keycap>0</keycap>.</para>
          </step>

          <step>
            <para>Make sure you're in the <filename>devstack</filename>
            directory:</para>

            <screen><prompt>$</prompt> <userinput>cd /root/devstack</userinput></screen>
          </step>

          <step>
            <para>Source <filename>openrc</filename> to set up your
            environment variables for the CLI:</para>

            <screen><prompt>$</prompt> <userinput>source openrc</userinput></screen>
          </step>

          <step>
            <para>Put the image ID for the only installed image into an
            environment variable:</para>

            <screen><prompt>$</prompt> <userinput>IMAGE_ID=`nova image-list | egrep cirros | egrep -v "kernel|ramdisk" | awk '{print $2}'`</userinput></screen>
          </step>

          <step>
            <para>Boot a test server:</para>

            <screen><prompt>$</prompt> <userinput>nova boot --flavor 1 --image $IMAGE_ID scheduler-test</userinput></screen>
          </step>
        </substeps>
      </step>

      <step>
        <para>Switch back to the <code>n-sch</code> screen. Among the log
        statements, you'll see the line:</para>

        <screen><computeroutput>2014-01-23 19:57:47.262 DEBUG nova.scheduler.ip_scheduler \
[req-... demo demo] Request from 162.242.221.84 \
scheduled to devstack-havana \
_schedule /opt/stack/nova/nova/scheduler/ip_scheduler.py:76</computeroutput></screen>
      </step>
    </procedure>

    <warning>
      <para>Functional testing like this is not a replacement for proper unit
      and integration testing, but it serves to get you started.</para>
    </warning>
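    <para>As a first step toward such tests, the scheduling decision in the
    example can be pulled out into plain functions and checked without a
    running nova. The sketch below restates the IP-to-prefix mapping and the
    host filter from <filename>ip_scheduler.py</filename> as standalone
    helpers; the function names are ours, not nova's:</para>

    <programlisting language="python"><?db-font-size 65%?>def prefix_for_ip(remote_ip):
    """Map a requester IP to a hostname prefix, as in IPScheduler._schedule."""
    if remote_ip.startswith('10.1'):
        return 'doc'
    elif remote_ip.startswith('10.2'):
        return 'ops'
    return 'dev'

def filter_hosts(hosts, hostname_prefix):
    """Keep only hosts whose names start with the prefix."""
    return [host for host in hosts if host.startswith(hostname_prefix)]

print(prefix_for_ip('10.1.0.5'))                     # doc
print(filter_hosts(['doc1', 'ops1', 'doc2'], 'doc'))  # ['doc1', 'doc2']</programlisting>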

    <para>A similar pattern can be followed in other projects that use the
    driver architecture. Simply create a module and class that conform to the
    driver interface and plug it in through configuration. Your code runs when
    that feature is used and can call out to other services as necessary. No
    project core code is touched. Look for a "driver" value in the project's
    <filename>.conf</filename> configuration files in
    <code>/etc/&lt;project&gt;</code> to identify projects that use a driver
    architecture.</para>
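    <para>A quick <command>grep</command> surfaces these options. The snippet
    below is a hedged example: option names and paths vary by project and
    release, and a sample fragment is written to a temporary file so the
    commands are self-contained rather than assuming a real
    <filename>/etc/nova/nova.conf</filename>:</para>

    <programlisting language="shell"><?db-font-size 65%?># Write a stand-in config file, then list its "driver" options.
conf=$(mktemp)
printf '[DEFAULT]\nscheduler_driver=nova.scheduler.ip_scheduler.IPScheduler\n' | tee "$conf"
grep 'driver' "$conf"</programlisting>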

    <para>When your scheduler is done, we encourage you to open source it and
    let the community know on the OpenStack mailing list. Perhaps others need
    the same functionality. They can use your code, provide feedback, and
    possibly contribute. If enough support exists for it, perhaps you can
    propose that it be added to the official Compute <link
    xlink:href="https://github.com/openstack/nova/tree/master/nova/scheduler">schedulers</link>.</para>
  </section>

  <section xml:id="ops_dashboard">
    <title>Customizing the Dashboard (Horizon)</title>

    <para>The dashboard is based on the Python <link
    xlink:href="https://www.djangoproject.com/">Django</link> web application
    framework. The best guide to customizing it has already been written and
    can be found at <link xlink:href="http://docs.openstack.org/developer/horizon/topics/tutorial.html">Building on
    Horizon</link>.<indexterm class="singular">
      <primary>Django</primary>
    </indexterm><indexterm class="singular">
      <primary>Python</primary>
    </indexterm><indexterm class="singular">
      <primary>dashboard</primary>
    </indexterm><indexterm class="singular">
      <primary>DevStack</primary>

      <secondary>customizing dashboard</secondary>
    </indexterm><indexterm class="singular">
      <primary>customization</primary>

      <secondary>dashboard</secondary>
    </indexterm></para>
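    <para>As a taste of what that tutorial covers, Horizon discovers extra
    dashboards through small "enabled" files dropped into
    <filename>openstack_dashboard/local/enabled/</filename>. The fragment
    below is a hedged sketch: <code>mydashboard</code> is a placeholder for
    your own Django app, and the exact settings available depend on your
    Horizon release:</para>

    <programlisting language="python"><?db-font-size 65%?># _50_mydashboard.py: register a custom dashboard with Horizon.
# 'mydashboard' is a placeholder name, not a real Horizon module.
DASHBOARD = 'mydashboard'
ADD_INSTALLED_APPS = ['mydashboard']
DISABLED = False</programlisting>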
  </section>

  <section xml:id="customize_conclusion">
    <title>Conclusion</title>

    <para>When operating an OpenStack cloud, you may discover that your users
    can be quite demanding. If OpenStack doesn't do what your users need, it
    may be up to you to fulfill those requirements. This chapter provided you
    with some options for customization and gave you the tools you need to get
    started.</para>
  </section>
</chapter>