<?xml version="1.0" encoding="UTF-8"?>
<chapter version="5.0" xml:id="network_design"
xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:ns5="http://www.w3.org/2000/svg"
xmlns:ns4="http://www.w3.org/1998/Math/MathML"
xmlns:ns3="http://www.w3.org/1999/xhtml"
xmlns:ns="http://docbook.org/ns/docbook">
<?dbhtml stop-chunking?>
<title>Network Design</title>
<para>OpenStack provides a rich networking environment, and this chapter
details the requirements and options to deliberate when designing your
cloud.<indexterm class="singular">
<primary>network design</primary>
<secondary>first steps</secondary>
</indexterm><indexterm class="singular">
<primary>design considerations</primary>
<secondary>network design</secondary>
</indexterm></para>
<warning>
<para>If this is the first time you are deploying a cloud infrastructure
in your organization, after reading this section, your first conversations
should be with your networking team. Network usage in a running cloud is
vastly different from traditional network deployments and has the
potential to be disruptive at both a connectivity and a policy
level.<indexterm class="singular">
<primary>cloud computing</primary>
<secondary>vs. traditional deployments</secondary>
</indexterm></para>
</warning>
<para>For example, you must plan the number of IP addresses that you need
for both your guest instances as well as management infrastructure.
Additionally, you must research and discuss cloud network connectivity
through proxy servers and firewalls.</para>
<para>In this chapter, we'll give some examples of network implementations
to consider and provide information about some of the network layouts that
OpenStack uses. Finally, we have some brief notes on the networking services
that are essential for stable operation.</para>
<section xml:id="mgmt_network">
<title>Management Network</title>
<para>A <glossterm>management network</glossterm> (a separate network for
use by your cloud operators) typically consists of a separate switch and
separate NICs (network interface cards), and is a recommended option. This
segregation prevents system administration and the monitoring of system
access from being disrupted by traffic generated by guests.<indexterm
class="singular">
<primary>NICs (network interface cards)</primary>
</indexterm><indexterm class="singular">
<primary>management network</primary>
</indexterm><indexterm class="singular">
<primary>network design</primary>
<secondary>management network</secondary>
</indexterm></para>
<para>Consider creating other private networks for communication between
internal components of OpenStack, such as the message queue and OpenStack
Compute. Using a virtual local area network (VLAN) works well for these
scenarios because it provides a method for creating multiple virtual
networks on a physical network.</para>
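<para>As a minimal sketch, the following <code>/etc/network/interfaces</code>
fragment (Debian/Ubuntu syntax; the VLAN IDs, interface names, and addresses
are hypothetical) shows how a controller node might carry management and
internal OpenStack traffic on tagged subinterfaces of a single physical
NIC:</para>
<programlisting># management network on VLAN 10
auto eth1.10
iface eth1.10 inet static
    address 10.0.10.5
    netmask 255.255.255.0
    vlan-raw-device eth1

# internal OpenStack traffic (for example, the message queue) on VLAN 20
auto eth1.20
iface eth1.20 inet static
    address 10.0.20.5
    netmask 255.255.255.0
    vlan-raw-device eth1</programlisting>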
</section>
<section xml:id="public_addressing">
<title>Public Addressing Options</title>
<para>There are two main types of IP addresses for guest virtual machines:
fixed IPs and floating IPs. Fixed IPs are assigned to instances on boot,
whereas floating IP addresses can change their association between
instances by action of the user. Both types of IP addresses can be either
public or private, depending on your use case.<indexterm class="singular">
<primary>IP addresses</primary>
<secondary>public addressing options</secondary>
</indexterm><indexterm class="singular">
<primary>network design</primary>
<secondary>public addressing options</secondary>
</indexterm></para>
<para>Fixed IP addresses are required, whereas it is possible to run
OpenStack without floating IPs. One of the most common use cases for
floating IPs is to provide public IP addresses to a private cloud, where
there are a limited number of IP addresses available. Another is for a
public cloud user to have a "static" IP address that can be reassigned
when an instance is upgraded or moved.<indexterm class="singular">
<primary>IP addresses</primary>
<secondary>static</secondary>
</indexterm><indexterm class="singular">
<primary>static IP addresses</primary>
</indexterm></para>
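<para>For example, a user of the nova client can allocate a floating IP
address from a pool and associate it with a running instance; the pool name,
instance name, and address below are placeholders:</para>
<programlisting>$ nova floating-ip-create public
$ nova add-floating-ip my-instance 203.0.113.25</programlisting>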
<para>Fixed IP addresses can be private for private clouds, or public for
public clouds. When an instance terminates, its fixed IP is lost. It is
worth noting that newer users of cloud computing may find their ephemeral
nature frustrating.<indexterm class="singular">
<primary>IP addresses</primary>
<secondary>fixed</secondary>
</indexterm><indexterm class="singular">
<primary>fixed IP addresses</primary>
</indexterm></para>
</section>
<section xml:id="ip_address_planning">
<title>IP Address Planning</title>
<para>An OpenStack installation can potentially have many subnets (ranges
of IP addresses) and different types of services in each. An IP address
plan can assist with a shared understanding of network partition purposes
and scalability. Control services can have public and private IP
addresses, and as noted above, there are a couple of options for an
instance's public addresses.<indexterm class="singular">
<primary>IP addresses</primary>
<secondary>address planning</secondary>
</indexterm><indexterm class="singular">
<primary>network design</primary>
<secondary>IP address planning</secondary>
</indexterm></para>
<para>An IP address plan might be broken down into the following
sections:<indexterm class="singular">
<primary>IP addresses</primary>
<secondary>sections of</secondary>
</indexterm></para>
<variablelist>
<varlistentry>
<term>Subnet router</term>
<listitem>
<para>Packets leaving the subnet go via this address, which could be
a dedicated router or a <literal>nova-network</literal>
service.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>Control services public interfaces</term>
<listitem>
<para>Public access to <code>swift-proxy</code>,
<code>nova-api</code>, <code>glance-api</code>, and horizon comes to
these addresses, which could sit on one side of a load balancer or
point at individual machines.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>Object Storage cluster internal communications</term>
<listitem>
<para>Traffic among object/account/container servers and between
these and the proxy server's internal interface uses this private
network.<indexterm class="singular">
<primary>containers</primary>
<secondary>container servers</secondary>
</indexterm><indexterm class="singular">
<primary>objects</primary>
<secondary>object servers</secondary>
</indexterm><indexterm class="singular">
<primary>account server</primary>
</indexterm></para>
</listitem>
</varlistentry>
<varlistentry>
<term>Compute and storage communications</term>
<listitem>
<para>If ephemeral or block storage is external to the compute node,
this network is used.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>Out-of-band remote management</term>
<listitem>
<para>If a dedicated remote access controller chip is included in
servers, often these are on a separate network.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>In-band remote management</term>
<listitem>
<para>Often, an extra interface (such as a 1 GbE port) on compute or
storage nodes is used by system administrators or monitoring tools to
access the host instead of going through the public
interface.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>Spare space for future growth</term>
<listitem>
<para>Adding more public-facing control services or guest instance
IPs should always be part of your plan.</para>
</listitem>
</varlistentry>
</variablelist>
<para>For example, take a deployment that has both OpenStack Compute and
Object Storage, with private ranges 172.22.42.0/24 and 172.22.87.0/26
available. One way to segregate the space might be as follows:</para>
<programlisting><?db-font-size 55%?>172.22.42.0/24:
172.22.42.1 - 172.22.42.3 - subnet routers
172.22.42.4 - 172.22.42.20 - spare for networks
172.22.42.21 - 172.22.42.104 - Compute node remote access controllers
(inc spare)
172.22.42.105 - 172.22.42.188 - Compute node management interfaces (inc spare)
172.22.42.189 - 172.22.42.208 - Swift proxy remote access controllers
(inc spare)
172.22.42.209 - 172.22.42.228 - Swift proxy management interfaces (inc spare)
172.22.42.229 - 172.22.42.252 - Swift storage servers remote access controllers
(inc spare)
172.22.42.253 - 172.22.42.254 - spare
172.22.87.0/26:
172.22.87.1 - 172.22.87.3 - subnet routers
172.22.87.4 - 172.22.87.24 - Swift proxy server internal interfaces
(inc spare)
172.22.87.25 - 172.22.87.63 - Swift object server internal interfaces
(inc spare)</programlisting>
<para>A similar approach can be taken with public IP addresses, taking
note that large, flat ranges are preferred for use with guest instance
IPs. Take into account that for some OpenStack networking options, a
public IP address from the guest instance public range is also assigned
to the <literal>nova-compute</literal> host.</para>
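<para>As a purely hypothetical sketch using the documentation range
203.0.113.0/24, a public plan might reserve a small block for control
services and leave the rest as one flat range for instance
addresses:</para>
<programlisting>203.0.113.0/24:
203.0.113.1 - 203.0.113.3 - subnet routers
203.0.113.4 - 203.0.113.20 - control services public interfaces
(API endpoints, horizon, load balancer VIPs, inc spare)
203.0.113.21 - 203.0.113.254 - floating/guest instance public IPs</programlisting>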
</section>
<section xml:id="network_topology">
<title>Network Topology</title>
<para>OpenStack Compute with <literal>nova-network</literal> provides
predefined network deployment models, each with its own strengths and
weaknesses. The selection of a network manager changes your network
topology, so the choice should be made carefully. You also have a choice
between the tried-and-true legacy <literal>nova-network</literal> settings
or the <phrase role="keep-together">neutron</phrase> project for OpenStack
Networking. Both offer networking for launched instances with different
implementations and requirements.<indexterm class="singular">
<primary>networks</primary>
<secondary>deployment options</secondary>
</indexterm><indexterm class="singular">
<primary>networks</primary>
<secondary>network managers</secondary>
</indexterm><indexterm class="singular">
<primary>network design</primary>
<secondary>network topology</secondary>
<tertiary>deployment options</tertiary>
</indexterm></para>
<para>For OpenStack Networking with the neutron project, typical
configurations are documented with the idea that any setup you can
configure with real hardware you can re-create with a software-defined
equivalent. Each tenant can contain typical network elements such as
routers, and services such as DHCP.</para>
<para><xref linkend="network_deployment_options" /> describes the
networking deployment options for both legacy
<literal>nova-network</literal> options and an equivalent neutron
configuration.<indexterm class="singular">
<primary>provisioning/deployment</primary>
<secondary>network deployment options</secondary>
</indexterm></para>
<table rules="all" width="500" xml:id="network_deployment_options">
<caption>Networking deployment options</caption>
<col width="19%" />
<col width="23%" />
<col width="24%" />
<col width="34%" />
<thead>
<tr valign="top">
<th>Network deployment model</th>
<th>Strengths</th>
<th>Weaknesses</th>
<th>Neutron equivalent</th>
</tr>
</thead>
<tbody>
<tr valign="top">
<td><para>Flat</para></td>
<td><para>Extremely simple topology.</para> <para>No DHCP
overhead.</para></td>
<td><para>Requires file injection into the instance to configure
network interfaces.</para></td>
<td>Configure a single bridge as the integration bridge (br-int) and
connect it to a physical network interface with the Modular Layer 2
(ML2) plug-in, which uses Open vSwitch by default.</td>
</tr>
<tr valign="top">
<td><para>FlatDHCP</para></td>
<td><para>Relatively simple to deploy.</para> <para>Standard
networking.</para> <para>Works with all guest operating
systems.</para></td>
<td><para>Requires its own DHCP broadcast domain.</para></td>
<td>Configure DHCP agents and routing agents. Network Address
Translation (NAT) performed outside of compute nodes, typically on
one or more network nodes.</td>
</tr>
<tr valign="top">
<td><para>VlanManager</para></td>
<td><para>Each tenant is isolated to its own VLANs.</para></td>
<td><para>More complex to set up.</para> <para>Requires its own DHCP
broadcast domain.</para> <para>Requires many VLANs to be trunked
onto a single port.</para> <para>Standard VLAN number
limitation.</para> <para>Switches must support 802.1q VLAN
tagging.</para></td>
<td><para>Isolated tenant networks implement some form of isolation
of layer 2 traffic between distinct networks. VLAN tagging is a key
concept, where traffic is “tagged” with an ordinal identifier for
the VLAN (see the configuration sketch after this table). Isolated
network implementations may or may not include additional services
like DHCP, NAT, and routing.</para></td>
</tr>
<tr valign="top">
<td><para>FlatDHCP&#160;Multi-host with high availability
(HA)</para></td>
<td><para>Networking failure is isolated to the VMs running on the
affected hypervisor.</para> <para>DHCP traffic can be isolated
within an individual host.</para> <para>Network traffic is
distributed to the compute nodes.</para></td>
<td><para>More complex to set up.</para> <para>Compute nodes
typically need IP addresses accessible by external networks.</para>
<para>Options must be carefully configured for live migration to
work with networking services.</para></td>
<td><para>Configure neutron with multiple DHCP and layer-3 agents.
Network nodes are not able to fail over to each other, so the
controller runs networking services, such as DHCP. Compute nodes run
the ML2 plug-in with support for agents such as Open vSwitch or
Linux Bridge.</para></td>
</tr>
</tbody>
</table>
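<para>As a rough illustration of the neutron equivalents listed above,
VLAN-isolated tenant networks with the ML2 plug-in and the Open vSwitch
mechanism driver might be configured along these lines; the physical
network label, bridge name, and VLAN range are assumptions for this
sketch, not required values:</para>
<programlisting># /etc/neutron/plugins/ml2/ml2_conf.ini (sketch)
[ml2]
type_drivers = flat,vlan
tenant_network_types = vlan
mechanism_drivers = openvswitch

[ml2_type_vlan]
# physnet1 is a label that each node maps to a provider bridge
network_vlan_ranges = physnet1:200:299

[ovs]
bridge_mappings = physnet1:br-eth1</programlisting>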
<para>Both <literal>nova-network</literal> and neutron services provide
similar capabilities, such as VLAN segmentation between VMs. You can also
provide multiple NICs on VMs with either service. Further discussion
follows.</para>
<section xml:id="vlans">
<title>VLAN Configuration Within OpenStack VMs</title>
<para>VLAN configuration can be as simple or as complicated as desired.
The use of VLANs has the benefit of allowing each project its own subnet
and broadcast segregation from other projects. To allow OpenStack to
efficiently use VLANs, you must allocate a VLAN range (one for each
project) and turn each compute node switch port into a trunk
port.<indexterm class="singular">
<primary>networks</primary>
<secondary>VLAN</secondary>
</indexterm><indexterm class="singular">
<primary>VLAN network</primary>
</indexterm><indexterm class="singular">
<primary>network design</primary>
<secondary>network topology</secondary>
<tertiary>VLAN with OpenStack VMs</tertiary>
</indexterm></para>
<para>For example, if you estimate that your cloud must support a
maximum of 100 projects, pick a free VLAN range that your network
infrastructure is currently not using (such as VLAN 200–299). You must
configure OpenStack with this range and also configure your switch ports
to allow VLAN traffic from that range.</para>
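<para>For example, with <literal>nova-network</literal> and
<literal>VlanManager</literal>, the VLAN range and trunk interface are set
in <code>nova.conf</code>, and the matching switch ports are configured as
trunks. The interface names below are examples, and the switch commands use
Cisco IOS-style syntax purely as an illustration:</para>
<programlisting># nova.conf (nova-network with VlanManager)
network_manager = nova.network.manager.VlanManager
vlan_start = 200
vlan_interface = eth1

! switch port facing a compute node (IOS-style example)
interface GigabitEthernet1/0/10
 description compute-node-01
 switchport mode trunk
 switchport trunk allowed vlan 200-299</programlisting>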
</section>
<section xml:id="multi_nic">
<title>Multi-NIC Provisioning</title>
<para>OpenStack Networking with <literal>neutron</literal> and
OpenStack Compute with <literal>nova-network</literal> have the ability
to assign multiple NICs to instances. For
<literal>nova-network</literal>, this can be done on a per-request
basis, with each additional NIC using up an entire subnet or VLAN,
reducing the total number of supported
projects.<indexterm class="singular">
<primary>MultiNic</primary>
</indexterm><indexterm class="singular">
<primary>network design</primary>
<secondary>network topology</secondary>
<tertiary>multi-NIC provisioning</tertiary>
</indexterm></para>
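<para>For example, with neutron a user can attach an instance to more than
one network at boot time by passing multiple <code>--nic</code> arguments;
the image, flavor, and network UUIDs below are placeholders:</para>
<programlisting>$ nova boot --image IMAGE_NAME --flavor m1.small \
    --nic net-id=FIRST_NET_UUID \
    --nic net-id=SECOND_NET_UUID \
    multi-nic-instance</programlisting>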
</section>
<section xml:id="multi_host_single_host_networks">
<title>Multi-Host and Single-Host Networking</title>
<para>The <literal>nova-network</literal> service has the ability to
operate in a multi-host or single-host mode. Multi-host is when each
compute node runs a copy of <literal>nova-network</literal> and the
instances on that compute node use the compute node as a gateway to the
Internet. The compute nodes also host the floating IPs and security
groups for instances on that node. Single-host is when a central
server—for example, the cloud controller—runs the
<code>nova-network</code> service. All compute nodes forward traffic
from the instances to the cloud controller. The cloud controller then
forwards traffic to the Internet. The cloud controller hosts the
floating IPs and security groups for all instances on all compute nodes
in the cloud.<indexterm class="singular">
<primary>single-host networking</primary>
</indexterm><indexterm class="singular">
<primary>networks</primary>
<secondary>multi-host</secondary>
</indexterm><indexterm class="singular">
<primary>multi-host networking</primary>
</indexterm><indexterm class="singular">
<primary>network design</primary>
<secondary>network topology</secondary>
<tertiary>multi- vs. single-host networking</tertiary>
</indexterm></para>
<para>There are benefits to both modes. Single-host has the downside of
a single point of failure. If the cloud controller is not available,
instances cannot communicate on the network. This is not true with
multi-host, but multi-host requires that each compute node has a public
IP address to communicate on the Internet. If you are not able to obtain
a significant block of public IP addresses, multi-host might not be an
option.</para>
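<para>As a sketch, enabling multi-host FlatDHCP networking means running
<literal>nova-network</literal> (and the metadata API) on every compute
node and setting options such as the following in <code>nova.conf</code>;
the interface names are examples only:</para>
<programlisting>network_manager = nova.network.manager.FlatDHCPManager
multi_host = True
public_interface = eth0
flat_interface = eth1</programlisting>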
</section>
</section>
<section xml:id="services_for_networking">
<title>Services for Networking</title>
<para>OpenStack, like any network application, has a number of standard
considerations to apply, such as NTP and DNS.<indexterm class="singular">
<primary>network design</primary>
<secondary>services for networking</secondary>
</indexterm></para>
<section xml:id="ntp">
<title>NTP</title>
<para>Time synchronization is a critical element to ensure continued
operation of OpenStack components. Correct time is necessary to avoid
errors in instance scheduling, replication of objects in the object
store, and even matching log timestamps for debugging.<indexterm
class="singular">
<primary>networks</primary>
<secondary>Network Time Protocol (NTP)</secondary>
</indexterm></para>
<para>All servers running OpenStack components should be able to access
an appropriate NTP server. You may decide to set up one locally or use
the public pools available from the <link
xlink:href="http://www.pool.ntp.org/en/"> Network Time Protocol
project</link>.</para>
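<para>For example, on an Ubuntu-based deployment, installing the
<code>ntp</code> package and pointing it at a local server or the public
pool is usually sufficient; the server name below is a placeholder.
Running <code>ntpq -p</code> afterward verifies that peers are
reachable:</para>
<programlisting># apt-get install ntp
# grep ^server /etc/ntp.conf
server ntp.example.com iburst
server 0.pool.ntp.org iburst
# ntpq -p</programlisting>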
</section>
<section xml:id="dns">
<title>DNS</title>
<para>OpenStack does not currently provide DNS services, aside from the
dnsmasq daemon, which resides on <code>nova-network</code> hosts. You
could consider providing a dynamic DNS service to allow instances to
update a DNS entry with new IP addresses. You can also consider making a
generic forward and reverse DNS mapping for instances' IP addresses,
such as vm-203-0-113-123.example.com.<indexterm class="singular">
<primary>DNS (Domain Name Server, Service or System)</primary>
<secondary>DNS service choices</secondary>
</indexterm></para>
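<para>For example, if you serve these records with BIND (an assumption
here; any DNS server with a comparable bulk-record feature works), the
<code>$GENERATE</code> directive can create generic forward and reverse
entries for an entire /24 of instance addresses:</para>
<programlisting>; forward zone for example.com
$GENERATE 1-254 vm-203-0-113-$ A 203.0.113.$

; reverse zone for 113.0.203.in-addr.arpa
$GENERATE 1-254 $ PTR vm-203-0-113-$.example.com.</programlisting>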
</section>
</section>
<section xml:id="ops-network-conclusion">
<title>Conclusion</title>
<para>Armed with your IP address layout and numbers and knowledge about
the topologies and services you can use, it's now time to prepare the
network for your installation. Be sure to also check out the <link
xlink:href="http://docs.openstack.org/sec/"
xlink:title="OpenStack Security Guide"><emphasis>OpenStack Security
Guide</emphasis></link> for tips on securing your network. We wish you a
good relationship with your networking team!</para>
</section>
</chapter>