<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE chapter [
<!-- Some useful entities borrowed from HTML -->
<!ENTITY ndash "&#x2013;">
<!ENTITY mdash "&#x2014;">
<!ENTITY hellip "&#x2026;">
<!ENTITY plusmn "&#xB1;">
]>
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="network_design">
<?dbhtml stop-chunking?>
<title>Network Design</title>
<para>OpenStack provides a rich networking environment, and this
chapter details the requirements and options to deliberate
when designing your cloud.</para>
<para>If this is the first time you are deploying a cloud
infrastructure in your organisation, after reading this
section, your first conversations should be with your
networking team. Network usage in a running cloud is vastly
different from traditional network deployments, and has the
potential to be disruptive at both a connectivity and a policy
level.</para>
<para>For example, you must plan the number of IP addresses that
you need for both your guest instances as well as management
infrastructure. Additionally, you must research and discuss
cloud network connectivity through proxy servers and
firewalls.</para>
<section xml:id="mgmt_network">
<title>Management Network</title>
<para>A management network, typically consisting of a separate
switch and separate NICs (Network Interface Cards), is a
recommended option. This
segregation prevents system administration and monitoring
system access from being disrupted by traffic generated by
the guests.</para>
<para>Consider creating other private networks for
communication between internal components of OpenStack,
such as the Message Queue and OpenStack Compute. Using a
Virtual Local Area Network (VLAN)
works well for these scenarios because it provides
a method for creating multiple virtual networks on a
physical network.</para>
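<para>As an illustration, on a Linux host a tagged
VLAN interface for such a management network might be
configured as follows; the interface name, VLAN ID,
and addresses are hypothetical:</para>
<programlisting><?db-font-size 55%?># /etc/network/interfaces - tagged management VLAN (illustrative)
# requires the vlan package (8021q module)
auto eth0.10
iface eth0.10 inet static
    address 10.10.0.5
    netmask 255.255.255.0</programlisting>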
</section>
<section xml:id="public_addressing">
<title>Public Addressing Options</title>
<para>There are two main types of IP addresses for guest
virtual machines: Fixed IPs and Floating IPs. Fixed IPs
are assigned to instances on boot, whereas Floating IP
addresses can change their association between instances
by action of the user. Both types of IP addresses can
either be public or private, depending on your use
case.</para>
<para>Fixed IP addresses are required, whereas it is possible
to run OpenStack without Floating IPs. One of the most
common use cases for Floating IPs is to provide public IP
addresses to a private cloud, where there are a limited
number of IP addresses available. Another is for a public
cloud user to have a "static" IP address that can be
reassigned when an instance is upgraded or moved.</para>
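<para>As a brief sketch with the nova client, a user
can allocate a Floating IP from a pool and associate
it with a running instance; the instance name and
address shown are hypothetical:</para>
<programlisting><?db-font-size 55%?># allocate a Floating IP from the default pool
$ nova floating-ip-create
# attach it to an instance named "webserver"
$ nova add-floating-ip webserver 203.0.113.10</programlisting>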
<para>Fixed IP addresses can be private for private clouds, or
public for public clouds. When an instance terminates, its
Fixed IP is lost. It is worth noting that newer users of
cloud computing may find their ephemeral nature
frustrating.</para>
</section>
<section xml:id="ip_address_planning">
<title>IP Address Planning</title>
<para>An OpenStack installation can potentially have many
subnets (ranges of IP addresses), and different types of
services in each. An IP
address plan can assist with a shared understanding of
network partition purposes and scalability. Control
services can have public and private IP addresses, and as
noted above there are a couple of options for instances'
public addresses.</para>
<para>An IP address plan might be broken down into the
following sections:</para>
<informaltable rules="all">
<tbody>
<tr>
<td><para><emphasis role="bold">subnet router</emphasis></para></td>
<td><para>Packets leaving the subnet go via this
address, which could be a dedicated router
or a nova-network service.</para></td>
</tr>
<tr>
<td><para><emphasis role="bold">control services public
interfaces</emphasis></para></td>
<td><para>Public access to
<code>swift-proxy</code>,
<code>nova-api</code>,
<code>glance-api</code> and horizon
comes to these addresses, which could be on
one side of a load balancer, or pointing
at individual machines.</para></td>
</tr>
<tr>
<td><para><emphasis role="bold">Object Storage cluster internal
communications</emphasis></para></td>
<td><para>Traffic amongst object/account/container
servers and between these and the proxy
server's internal interface uses this
private network.</para></td>
</tr>
<tr>
<td><para><emphasis role="bold">compute and storage
communications</emphasis></para></td>
<td><para>If ephemeral or block storage is
external to the compute node, this network
is used.</para></td>
</tr>
<tr>
<td><para><emphasis role="bold">out-of-band remote
management</emphasis></para></td>
<td><para>If a dedicated remote access controller
chip is included in servers, often these
are on a separate network.</para></td>
</tr>
<tr>
<td><para><emphasis role="bold">in-band remote management</emphasis></para></td>
<td><para>Often, an extra interface (for
example, a 1 Gbps Ethernet port) on compute or storage nodes is
used for system administrators or
monitoring tools to access the host
instead of going through the public
interface.</para></td>
</tr>
<tr>
<td><para><emphasis role="bold">spare space for future
growth</emphasis></para></td>
<td><para>Adding more public-facing control
services or guest instance IPs should
always be part of your plan.</para></td>
</tr>
</tbody>
</informaltable>
<para>For example, take a deployment which has both OpenStack
Compute and Object Storage, with private ranges
172.22.42.0/24 and 172.22.87.0/26 available. One way to
segregate the space might be:</para>
<programlisting><?db-font-size 55%?>172.22.42.0/24
172.22.42.1 - 172.22.42.3 - subnet routers
172.22.42.4 - 172.22.42.20 - spare for networks
172.22.42.21 - 172.22.42.104 - Compute node remote access controllers (inc spare)
172.22.42.105 - 172.22.42.188 - Compute node management interfaces (inc spare)
172.22.42.189 - 172.22.42.208 - Swift proxy remote access controllers (inc spare)
172.22.42.209 - 172.22.42.228 - Swift proxy management interfaces (inc spare)
172.22.42.229 - 172.22.42.252 - Swift storage servers remote access controllers (inc spare)
172.22.42.253 - 172.22.42.254 - spare
172.22.87.0/26
172.22.87.1 - 172.22.87.3 - subnet routers
172.22.87.4 - 172.22.87.24 - Swift proxy server internal interfaces (inc spare)
172.22.87.25 - 172.22.87.62 - Swift object server internal interfaces (inc spare)</programlisting>
<para>A similar approach can be taken with public IP
addresses, taking note that large, flat ranges are
preferred for use with guest instance IPs. Note that
for some OpenStack networking options, such as
multi-host FlatDHCP, each nova-compute host is
assigned a public IP address from the guest instance
public range.</para>
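<para>As a hedged example, with nova-network a fixed
range for guests might be created using
<code>nova-manage</code>; the exact flags vary
between releases, and the range shown is
illustrative:</para>
<programlisting><?db-font-size 55%?># create a fixed (private) network for guest instances (illustrative)
$ nova-manage network create --label=private \
  --fixed_range_v4=10.0.0.0/24 --num_networks=1 \
  --network_size=256</programlisting>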
</section>
<?hard-pagebreak?>
<section xml:id="network_topology">
<title>Network Topology</title>
<para>OpenStack Compute provides several network managers,
each with their own strengths and weaknesses. The
selection of a network manager changes your network
topology, so the choice should be made carefully.</para>
<informaltable rules="all">
<thead>
<tr valign="top">
<th>Type</th>
<th>Strengths</th>
<th>Weaknesses</th>
</tr>
</thead>
<tbody>
<tr valign="top">
<td><para>Flat</para></td>
<td><para>Extremely simple.</para><para>No DHCP
broadcasts.</para></td>
<td><para>Requires file injection into the
instance.</para><para>Limited to certain
distributions of Linux.</para><para>
Difficult to configure and is not
recommended.</para></td>
</tr>
<tr valign="top">
<td><para>FlatDHCP</para></td>
<td><para>Relatively simple to set up.</para><para>
Standard networking.</para><para>Works
with all operating systems.</para></td>
<td><para>Requires its own DHCP broadcast
domain.</para></td>
</tr>
<tr valign="top">
<td><para>VlanManager</para></td>
<td><para>Each tenant is isolated to its own
VLAN.</para></td>
<td><para>More complex to set up.</para><para>
Requires its own DHCP broadcast
domain.</para><para>Requires many VLANs
to be trunked onto a single
port.</para><para>Limited by the standard
802.1q maximum of 4,096 VLAN IDs.</para><para>Switches must
support 802.1q VLAN tagging.</para></td>
</tr>
<tr valign="top">
<td><para>FlatDHCP Multi-host HA</para></td>
<td><para>Networking failure is isolated to the
VMs running on the hypervisor
affected.</para><para>DHCP traffic can be
isolated within an individual
host.</para><para>Network traffic is
distributed to the compute
nodes.</para></td>
<td><para>More complex to set up.</para><para>By
default, compute nodes need public IP
addresses.</para><para>Options must be
carefully configured for live migration to
work with networking.</para></td>
</tr>
</tbody>
</informaltable>
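<para>The network manager is selected with the
<code>network_manager</code> option in
<code>nova.conf</code>. A minimal sketch for FlatDHCP,
assuming <code>eth0</code> faces the public network and
<code>eth1</code> carries guest traffic (the interface
names are illustrative):</para>
<programlisting><?db-font-size 55%?># nova.conf - select the FlatDHCP network manager (illustrative)
network_manager=nova.network.manager.FlatDHCPManager
public_interface=eth0
flat_interface=eth1</programlisting>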
<section xml:id="vlans">
<title>VLANs</title>
<para>VLAN configuration can be as simple or as
complicated as desired. The use of VLANs has the
benefit of allowing each project its own subnet and
broadcast segregation from other projects. To allow
OpenStack to efficiently use VLANs, you must allocate
a VLAN range (one for each project) and turn each
compute node switch port into a trunk port.</para>
<para>For example, if you estimate that your cloud must
support a maximum of 100 projects, pick a free VLAN range
that your network infrastructure is currently not
using (for example, VLAN 200&ndash;299). You must configure
OpenStack with this range as well as configure your
switch ports to allow VLAN traffic from that
range.</para>
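<para>Continuing the example above, a hedged sketch of
the corresponding <code>nova.conf</code> settings
(option names are from the nova-network era; confirm
them against your release):</para>
<programlisting><?db-font-size 55%?># nova.conf - VlanManager with VLANs starting at 200 (illustrative)
network_manager=nova.network.manager.VlanManager
vlan_interface=eth1
vlan_start=200</programlisting>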
</section>
<?hard-pagebreak?>
<section xml:id="multi_nic">
<title>Multi-NIC</title>
<para>OpenStack Compute has the ability to assign multiple
NICs to instances on a per-project basis. This is
generally an advanced feature and not an everyday
request, though it can easily be enabled on a
per-request basis. Be aware, however, that a second NIC
uses up an entire subnet or VLAN, decrementing your
total number of supported projects by one.</para>
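<para>As a purely illustrative sketch, an additional
per-project network (and therefore an additional NIC on
that project's new instances) might be created as
follows; the flags, range, VLAN, and placeholder
project ID are hypothetical:</para>
<programlisting><?db-font-size 55%?># create a second network scoped to a single project (illustrative)
$ nova-manage network create --label=project-net-2 \
  --fixed_range_v4=10.1.0.0/24 --vlan=201 \
  --project_id=$PROJECT_ID</programlisting>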
</section>
<section xml:id="multi_host_single_host_networks">
<title>Multi-host and Single-host Networking</title>
<para>The nova-network service has the ability to operate
in a multi-host or single-host mode. Multi-host is
when each compute node runs a copy of nova-network and
the instances on that compute node use the compute
node as a gateway to the Internet. The compute nodes
also host the Floating IPs and Security Groups for
instances on that node. Single-host is when a central
server, for example, the cloud controller, runs the
<code>nova-network</code> service. All compute
nodes forward traffic from the instances to the cloud
controller. The cloud controller then forwards traffic
to the Internet. The cloud controller hosts the
Floating IPs and Security Groups for all instances on
all compute nodes in the cloud.</para>
<para>There are benefits to both modes. Single-host has
the downside of a single point of failure. If the
cloud controller is not available, instances cannot
communicate on the network. This is not true with
multi-host, but multi-host requires that each compute
node has a public IP address to communicate on the
Internet. If you are not able to obtain a significant
block of public IP addresses, multi-host might not be
an option.</para>
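<para>A minimal sketch of enabling multi-host mode in
<code>nova.conf</code>; the option name is from the
nova-network era, so confirm it against your
release:</para>
<programlisting><?db-font-size 55%?># nova.conf on each compute node (illustrative)
multi_host=True
# each compute node must also run the nova-network service</programlisting>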
</section>
</section>
<section xml:id="services_for_networking">
<title>Services for Networking</title>
<para>OpenStack, like any networked application, has a
number of standard considerations to apply, such as DNS
and NTP.</para>
<section xml:id="ntp">
<title>NTP</title>
<para>Time synchronisation is a critical element to ensure
continued operation of OpenStack components. Correct
time is necessary to avoid errors in instance
scheduling, replication of objects in the object
store, and even matching log timestamps for
debugging.</para>
<para>All servers running OpenStack components should be
able to access an appropriate NTP server. You may
decide to set one up locally, or use the public pools
available from <link xlink:href="http://www.pool.ntp.org/"
>http://www.pool.ntp.org/</link>.</para>
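<para>For example, a minimal <code>ntp.conf</code>
pointing at the public pool might contain:</para>
<programlisting><?db-font-size 55%?># /etc/ntp.conf - synchronise against public pool servers
server 0.pool.ntp.org iburst
server 1.pool.ntp.org iburst
server 2.pool.ntp.org iburst</programlisting>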
</section>
<section xml:id="dns">
<title>DNS</title>
<para>OpenStack does not currently provide DNS
services, aside from the dnsmasq daemon which
resides on <code>nova-network</code> hosts. You
could consider providing a dynamic DNS service to
allow instances to update a DNS entry with new IP
addresses. You can also consider making a generic
forward and reverse DNS mapping for instances' IP
addresses, such as
vm-203-0-113-123.example.com.</para>
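<para>As an illustration, a BIND zone file can produce
such generic forward mappings in bulk with a
<code>$GENERATE</code> directive; the names and range
below are hypothetical:</para>
<programlisting><?db-font-size 55%?>; zone file fragment for example.com - one A record per host
$GENERATE 1-254 vm-203-0-113-$ A 203.0.113.$</programlisting>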
</section>
</section>
</chapter>