Fix whitespace
Fix issues found by test.py --check-niceness --force.

Change-Id: I3956292b60572e9c23e3caf9dc16ea6e8ab1ae58
parent 403304de90
commit 7bf0e5ff70
@@ -23,7 +23,7 @@
 automated: Cobbler deployed the OS on the bare metal,
 bootstrapped it, and Puppet took over from there. I had
 run the deployment scenario so many times in practice and
-took for granted that everything was working. </para>
+took for granted that everything was working.</para>
 <para>On my last day in Kelowna, I was in a conference call
 from my hotel. In the background, I was fooling around on
 the new cloud. I launched an instance and logged in.
@@ -42,7 +42,7 @@
 the unfortunate conclusion that this cloud did indeed have
 a problem. Even worse, my time was up in Kelowna and I had
 to return back to Calgary.</para>
-<para> Where do you even begin troubleshooting something like
+<para>Where do you even begin troubleshooting something like
 this? An instance just randomly locks when a command is
 issued. Is it the image? Nope — it happens on all images.
 Is it the compute node? Nope — all nodes. Is the instance
@@ -105,7 +105,7 @@
 the public internet, it should no longer have a VLAN.
 False. Uh oh. It looked as though the VLAN part of the
 packet was not being removed.</para>
-<para>That made no sense. </para>
+<para>That made no sense.</para>
 <para>While bouncing this idea around in our heads, I was
 randomly typing commands on the compute node:
 <screen><userinput><prompt>$</prompt> ip a</userinput>
@@ -153,7 +153,7 @@
 <para>A few nights later, it happened again.</para>
 <para>We reviewed both sets of logs. The one thing that stood
 out the most was DHCP. At the time, OpenStack, by default,
-set DHCP leases for one minute (it's now two minutes). 
+set DHCP leases for one minute (it's now two minutes).
 This means that every instance
 contacts the cloud controller (DHCP server) to renew its
 fixed IP. For some reason, this instance could not renew
@@ -376,7 +376,7 @@
 <para>This past Valentine's Day, I received an alert that a
 compute node was no longer available in the cloud
 — meaning,
-<screen><prompt>$</prompt><userinput> nova-manage service list</userinput></screen>
+<screen><prompt>$</prompt><userinput>nova-manage service list</userinput></screen>
 showed this particular node with a status of
 <code>XXX</code>.</para>
 <para>I logged into the cloud controller and was able to both
@@ -431,7 +431,7 @@
 coming from the two compute nodes and immediately shut the
 ports down to prevent spanning tree loops:
 <screen><computeroutput>Feb 15 01:40:18 SW-1 Stp: %SPANTREE-4-BLOCK_BPDUGUARD: Received BPDU packet on Port-Channel35 with BPDU guard enabled. Disabling interface. (source mac fa:16:3e:24:e7:22)
-Feb 15 01:40:18 SW-1 Ebra: %ETH-4-ERRDISABLE: bpduguard error detected on Port-Channel35. 
+Feb 15 01:40:18 SW-1 Ebra: %ETH-4-ERRDISABLE: bpduguard error detected on Port-Channel35.
 Feb 15 01:40:18 SW-1 Mlag: %MLAG-4-INTF_INACTIVE_LOCAL: Local interface Port-Channel35 is link down. MLAG 35 is inactive.
 Feb 15 01:40:18 SW-1 Ebra: %LINEPROTO-5-UPDOWN: Line protocol on Interface Port-Channel35 (Server35), changed state to down
 Feb 15 01:40:19 SW-1 Stp: %SPANTREE-6-INTERFACE_DEL: Interface Port-Channel35 has been removed from instance MST0
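An aside on the one-minute DHCP lease called out in the hunk above: the renewal load it implies on the cloud controller is easy to estimate. This is only a sketch, and it assumes clients renew at half the lease time (the usual DHCP T1 timer); the function name is mine, not anything from OpenStack.

```python
def dhcp_renewals_per_second(instances: int, lease_seconds: int) -> float:
    """Estimate DHCP renewal requests/sec hitting the cloud controller.

    Assumes each instance renews at half its lease time (the usual
    DHCP T1 timer), so a one-minute lease means a renewal attempt
    from every instance roughly every 30 seconds.
    """
    renew_interval = lease_seconds / 2  # T1 timer, an assumption here
    return instances / renew_interval

# 1000 instances on the old one-minute default lease:
print(dhcp_renewals_per_second(1000, 60))   # ~33 requests/sec
# the newer two-minute default halves that:
print(dhcp_renewals_per_second(1000, 120))  # ~17 requests/sec
```

The point of the calculation is that lease length directly scales controller load, which is why a missed renewal (as in the story above) is both rare and quickly fatal to an instance's connectivity.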
@@ -1,5 +1,5 @@
 <?xml version="1.0" encoding="UTF-8"?>
-<!DOCTYPE book [ 
+<!DOCTYPE book [
 <!-- Some useful entities borrowed from HTML -->
 <!ENTITY ndash "–">
 <!ENTITY mdash "—">
@@ -5,8 +5,6 @@
 <!ENTITY mdash "—">
 <!ENTITY hellip "…">
 <!ENTITY plusmn "±">
-
-
 ]>
 <chapter xmlns="http://docbook.org/ns/docbook"
 xmlns:xi="http://www.w3.org/2001/XInclude"
@@ -18,9 +16,9 @@
 which allows all services to be distributed widely. However,
 to simplify this guide we have decided to discuss services of
 a more central nature using the concept of a single
-<emphasis>cloud controller</emphasis>. </para>
+<emphasis>cloud controller</emphasis>.</para>
 <para>For more details about the overall architecture, see the
-<xref linkend="example_architecture"/>. </para>
+<xref linkend="example_architecture"/>.</para>
 <para>As described in this guide, the cloud controller is a single
 node that hosts the databases, message queue service,
 authentication and authorization service, image management
@@ -29,7 +27,7 @@
 <para>The cloud controller provides the central management system
 for multi-node OpenStack deployments. Typically the cloud
 controller manages authentication and sends messaging to all
-the systems through a message queue. </para>
+the systems through a message queue.</para>
 <para>For our example, the cloud controller has a collection of
 <code>nova-*</code> components that represent the global
 state of the cloud, talks to services such as authentication,
@@ -44,7 +42,7 @@
 <title>Hardware Considerations</title>
 <para>A cloud controller's hardware can be the same as a
 compute node, though you may want to further specify based
-on the size and type of cloud that you run. </para>
+on the size and type of cloud that you run.</para>
 <para>It's also possible to use virtual machines for all or
 some of the services that the cloud controller manages,
 such as the message queuing. In this guide, we assume that
@@ -273,12 +271,12 @@
 EC2 compatibility APIs, or just the OpenStack APIs. One
 issue you might encounter when running both APIs is an
 inconsistent experience when referring to images and
-instances. </para>
+instances.</para>
 <para>For example, the EC2 API refers to instances using IDs
 that contain hexadecimal whereas the OpenStack API uses
 names and digits. Similarly, the EC2 API tends to rely on
 DNS aliases for contacting virtual machines, as opposed to
-OpenStack which typically lists IP addresses. </para>
+OpenStack which typically lists IP addresses.</para>
 <para>If OpenStack is not set up in the right way, it is
 simple to have scenarios where users are unable to contact
 their instances due to only having an incorrect DNS alias.
@@ -320,7 +318,7 @@
 <emphasis>flavors</emphasis>) into different sized
 physical nova-compute nodes is a challenging problem -
 researched generically in Computer Science as a packing
-problem. </para>
+problem.</para>
 <para>You can use various techniques to handle this problem
 though solving this problem is out of the scope of this
 book. To support your scheduling choices, OpenStack
@@ -359,7 +357,7 @@
 store the images as files.</para>
 </listitem>
 <listitem>
-<para>S3. Allows you to fetch images from Amazon S3. </para>
+<para>S3. Allows you to fetch images from Amazon S3.</para>
 </listitem>
 <listitem>
 <para>HTTP. Allows you to fetch images from a web
@@ -381,7 +379,6 @@
 API servers (including their admin endpoints) over the
 network.</para>
 </section>
-
 <section xml:id="authentication">
 <title>Authentication and Authorization</title>
 <para>The concepts supporting OpenStack's authentication and
@@ -389,7 +386,7 @@
 used systems of a similar nature. Users have credentials
 they can use to authenticate, and they can be a member of
 one or more groups (known as projects or tenants
-interchangeably). </para>
+interchangeably).</para>
 <para>For example, a cloud administrator might be able to list
 all instances in the cloud, whereas a user can only see
 those in their current group. Resources quotas, such as
@@ -438,7 +435,7 @@
 networking where the cloud controller is the network
 gateway for all instances, then the Cloud Controller must
 support the total amount of traffic that travels between
-your cloud and the public Internet. </para>
+your cloud and the public Internet.</para>
 <para>We recommend that you use a fast NIC, such as 10 GB. You
 can also choose to use two 10 GB NICs and bond them
 together. While you might not be able to get a full bonded
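On the hexadecimal-versus-decimal ID mismatch discussed in the file above: the EC2-style `i-xxxxxxxx` identifier is just an instance number rendered in hex. A minimal illustrative conversion follows; the helper name is hypothetical and this glosses over how a given deployment actually maps the two ID spaces.

```python
def ec2_id_to_int(ec2_id: str) -> int:
    """Convert an EC2-style instance ID (e.g. 'i-0000001d') to the
    plain integer that a decimal-style listing would show.

    The 'i-' prefix and zero-padded hex body follow the EC2
    convention; everything after the dash is parsed as base 16.
    """
    return int(ec2_id.split("-", 1)[1], 16)

print(ec2_id_to_int("i-0000001d"))  # 29
```

Keeping a small helper like this handy removes most of the confusion when cross-referencing output from the two APIs.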
@@ -5,8 +5,6 @@
 <!ENTITY mdash "—">
 <!ENTITY hellip "…">
 <!ENTITY plusmn "±">
-
-
 ]>
 <chapter xmlns="http://docbook.org/ns/docbook"
 xmlns:xi="http://www.w3.org/2001/XInclude"
@@ -33,7 +31,7 @@
 depends upon your use case. We recommend you do
 performance testing with your local workload with both
 hyper-threading on and off to determine what is more
-appropriate in your case. </para>
+appropriate in your case.</para>
 </section>
 <?hard-pagebreak?>
 <section xml:id="hypervisor_choice">
@@ -62,7 +60,7 @@
 hypervisor is your current usage or experience. Aside from
 that, there are practical concerns to do with feature
 parity, documentation, and the level of community
-experience. </para>
+experience.</para>
 <para>For example, KVM is the most widely adopted hypervisor
 in the OpenStack community. Besides KVM, more deployments
 exist running Xen, LXC, VMWare and Hyper-V than the others
@@ -93,7 +91,7 @@
 instantiated instance runs. There are three main
 approaches to providing this temporary-style storage, and
 it is important to understand the implications of the
-choice. </para>
+choice.</para>
 <para>They are:</para>
 <itemizedlist role="compact">
 <listitem>
@@ -147,7 +145,7 @@
 long as you don't have any instances currently running
 on a compute host, you can take it offline or wipe it
 completely without having any effect on the rest of
-your cloud. </para>
+your cloud.</para>
 <para>However, if you are more restricted in the number of
 physical hosts you have available for creating your
 cloud and you want to be able to dedicate as many of
@@ -198,7 +196,7 @@
 distributed file system ties the disks from each
 compute node into a single mount. The main advantage
 of this option is that it scales to external storage
-when you require additional storage. </para>
+when you require additional storage.</para>
 <para>However, this option has several downsides:</para>
 <itemizedlist role="compact">
 <listitem>
@@ -271,7 +269,7 @@
 ability to seamlessly move instances from one physical
 host to another, a necessity for performing upgrades
 that require reboots of the compute hosts, but only
-works well with shared storage. </para>
+works well with shared storage.</para>
 <para>Live migration can be also done with non-shared storage, using a feature known as
 <emphasis>KVM live block migration</emphasis>. While an earlier implementation
 of block-based migration in KVM and QEMU was considered unreliable, there is a
@@ -283,7 +281,7 @@
 <title>Choice of File System</title>
 <para>If you want to support shared storage live
 migration, you'll need to configure a distributed file
-system. </para>
+system.</para>
 <para>Possible options include:</para>
 <itemizedlist role="compact">
 <listitem>
@@ -330,7 +328,7 @@
 that the scheduler allocates instances to a physical node
 as long as the total amount of RAM associated with the
 instances is less than 1.5 times the amount of RAM
-available on the physical node. </para>
+available on the physical node.</para>
 <para>For example, if a physical node has 48 GB of RAM, the
 scheduler allocates instances to that node until the sum
 of the RAM associated with the instances reaches 72 GB
@@ -344,7 +342,7 @@
 <para>Logging is detailed more fully in <xref
 linkend="logging"/>. However it is an important design
 consideration to take into account before commencing
-operations of your cloud. </para>
+operations of your cloud.</para>
 <para>OpenStack produces a great deal of useful logging
 information, however, in order for it to be useful for
 operations purposes you should consider having a central
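The 1.5 RAM allocation ratio in the overcommit hunk above is simple multiplication; a minimal sketch of what the scheduler's capacity check amounts to (function name is mine, for illustration):

```python
def schedulable_ram_gb(physical_ram_gb: float, allocation_ratio: float = 1.5) -> float:
    """RAM the scheduler will hand out on one node, given an
    overcommit factor in the style of nova's RAM allocation ratio."""
    return physical_ram_gb * allocation_ratio

# The example from the hunk: a 48 GB node accepts instances
# until their combined flavor RAM reaches 72 GB.
print(schedulable_ram_gb(48))  # 72.0
```

Note this is scheduled (flavor) RAM, not RAM actually resident; whether a 1.5 ratio is safe depends entirely on how fully your instances use their allocation.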
@@ -343,7 +343,7 @@
 >OpenStack Install and Deploy Manual - Ubuntu</link>
 (http://docs.openstack.org/havana/install-guide/install/apt/),
 which contains a step-by-step guide on how to manually install
-the OpenStack packages and dependencies on your cloud. </para>
+the OpenStack packages and dependencies on your cloud.</para>
 <para>While it is important for an operator to be familiar with
 the steps involved in deploying OpenStack, we also strongly
 encourage you to evaluate configuration management tools such
@@ -5,8 +5,6 @@
 <!ENTITY mdash "—">
 <!ENTITY hellip "…">
 <!ENTITY plusmn "±">
-
-
 ]>
 <chapter xmlns="http://docbook.org/ns/docbook"
 xmlns:xi="http://www.w3.org/2001/XInclude"
@@ -176,10 +174,10 @@
 <tbody>
 <tr valign="top">
 <td><para>Flat</para></td>
-<td><para>Extremely simple.</para><para> No DHCP
+<td><para>Extremely simple.</para><para>No DHCP
 broadcasts.</para></td>
 <td><para>Requires file injection into the
-instance.</para><para> Limited to certain
+instance.</para><para>Limited to certain
 distributions of Linux.</para><para>
 Difficult to configure and is not
 recommended.</para></td>
@@ -187,7 +185,7 @@
 <tr valign="top">
 <td><para>FlatDHCP</para></td>
 <td><para>Relatively simple to setup.</para><para>
-Standard networking.</para><para> Works
+Standard networking.</para><para>Works
 with all operating systems.</para></td>
 <td><para>Requires its own DHCP broadcast
 domain.</para></td>
@@ -198,24 +196,24 @@
 VLANs.</para></td>
 <td><para>More complex to set up.</para><para>
 Requires its own DHCP broadcast
-domain.</para><para> Requires many VLANs
+domain.</para><para>Requires many VLANs
 to be trunked onto a single
-port.</para><para> Standard VLAN number
-limitation.</para><para> Switches must
+port.</para><para>Standard VLAN number
+limitation.</para><para>Switches must
 support 802.1q VLAN tagging.</para></td>
 </tr>
 <tr valign="top">
 <td><para>FlatDHCP Multi-host HA</para></td>
 <td><para>Networking failure is isolated to the
 VMs running on the hypervisor
-affected.</para><para> DHCP traffic can be
+affected.</para><para>DHCP traffic can be
 isolated within an individual
-host.</para><para> Network traffic is
+host.</para><para>Network traffic is
 distributed to the compute
 nodes.</para></td>
-<td><para>More complex to set up.</para><para> By
+<td><para>More complex to set up.</para><para>By
 default, compute nodes need public IP
-addresses.</para><para> Options must be
+addresses.</para><para>Options must be
 carefully configured for live migration to
 work with networking.</para></td>
 </tr>
@@ -5,8 +5,6 @@
 <!ENTITY mdash "—">
 <!ENTITY hellip "…">
 <!ENTITY plusmn "±">
-
-
 ]>
 <chapter xmlns="http://docbook.org/ns/docbook"
 xmlns:xi="http://www.w3.org/2001/XInclude"
@@ -17,7 +15,7 @@
 <para>A critical part of a cloud's scalability is the amount of
 effort that it takes to run your cloud. To minimize the
 operational cost of running your cloud, set up and use an
-automated deployment and configuration infrastructure. </para>
+automated deployment and configuration infrastructure.</para>
 <para>This infrastructure includes systems to automatically
 install the operating system's initial configuration and later
 coordinate the configuration of all services automatically and
@@ -162,7 +160,7 @@
 management tools ensures that components of the cloud
 systems are in particular states, in addition to
 simplifying deployment, and configuration change
-propagation. </para>
+propagation.</para>
 <para>These tools also make it possible to test and roll back
 changes, as they are fully repeatable. Conveniently, a
 large body of work has been done by the OpenStack
@@ -181,7 +179,7 @@
 to the servers running the cloud, and many don't
 necessarily enjoy visiting the data center. OpenStack
 should be entirely remotely configurable, but sometimes
-not everything goes according to plan. </para>
+not everything goes according to plan.</para>
 <para>In this instance, having an out-of-band access into
 nodes running OpenStack components, is a boon. The IPMI
 protocol is the de-facto standard here, and acquiring
@@ -5,8 +5,6 @@
 <!ENTITY mdash "—">
 <!ENTITY hellip "…">
 <!ENTITY plusmn "±">
-
-
 ]>
 <chapter xmlns="http://docbook.org/ns/docbook"
 xmlns:xi="http://www.w3.org/2001/XInclude"
@@ -104,7 +102,7 @@
 <para>However, you need more than the core count alone to
 estimate the load that the API services, database servers,
 and queue servers are likely to encounter. You must also
-consider the usage patterns of your cloud. </para>
+consider the usage patterns of your cloud.</para>
 <para>As a specific example, compare a cloud that supports a
 managed web hosting platform with one running integration
 tests for a development project that creates one VM per
@@ -113,7 +111,6 @@
 constant heavy load on the cloud controller. You must
 consider your average VM lifetime, as a larger number
 generally means less load on the cloud controller.</para>
-
 <para>Aside from the creation and termination of VMs, you must
 consider the impact of users accessing the service
 — particularly on nova-api and its associated database.
@@ -249,7 +246,7 @@
 <itemizedlist>
 <listitem>
 <para>A different API endpoint for
-every region. </para>
+every region.</para>
 </listitem>
 <listitem>
 <para>Each region has a full nova
@@ -335,7 +332,7 @@
 <section xml:id="availability_zones">
 <title>Availability Zones and Host Aggregates</title>
 <para>You can use availability zones, host aggregates, or
-both to partition a nova deployment. </para>
+both to partition a nova deployment.</para>
 <para>Availability zones are implemented through and
 configured in a similar way to host aggregates.</para>
 <para>However, you use an availability zone and a host
@@ -347,7 +344,7 @@
 and provides a form of physical isolation and
 redundancy from other availability zones, such
 as by using separate power supply or network
-equipment. </para>
+equipment.</para>
 <para>You define the availability zone in which a
 specified Compute host resides locally on each
 server. An availability zone is commonly used
@@ -357,7 +354,7 @@
 power source, you can put servers in those
 racks in their own availability zone.
 Availability zones can also help separate
-different classes of hardware. </para>
+different classes of hardware.</para>
 <para>When users provision resources, they can
 specify from which availability zone they
 would like their instance to be built. This
@@ -384,7 +381,7 @@
 provide information for use with the
 nova-scheduler. For example, you might use a
 host aggregate to group a set of hosts that
-share specific flavors or images. </para>
+share specific flavors or images.</para>
 </listitem></itemizedlist>
 <note><para>Previously, all services had an availability zone. Currently,
 only the nova-compute service has its own
@@ -394,11 +391,11 @@
 appear in their own internal availability zone
 (CONF.internal_service_availability_zone): <itemizedlist>
 <listitem>
-<para>nova host-list (os-hosts) </para>
+<para>nova host-list (os-hosts)</para>
 </listitem>
 <listitem>
 <para>euca-describe-availability-zones
-verbose </para>
+verbose</para>
 </listitem>
 <listitem>
 <para>nova-manage service list</para>
@@ -408,7 +405,7 @@
 (non-verbose).</para>
 <para>CONF.node_availability_zone has been renamed to
 CONF.default_availability_zone and is only used by
-the nova-api and nova-scheduler services. </para>
+the nova-api and nova-scheduler services.</para>
 <para>CONF.node_availability_zone still works but is
 deprecated.</para></note>
 </section>
@@ -10,8 +10,7 @@
 <imagedata fileref="figures/Check_mark_23x20_02.svg"
 format="SVG" scale="60"/>
 </imageobject>
-</inlinemediaobject>'>
-
+</inlinemediaobject>'>
 ]>
 <chapter xmlns="http://docbook.org/ns/docbook"
 xmlns:xi="http://www.w3.org/2001/XInclude"
@@ -118,13 +117,13 @@ format="SVG" scale="60"/>
 storage system, as an alternative to storing the
 images on a file system.</para>
 </section>
-
+
 <section xml:id="block_storage">
 <title>Block Storage</title>
 <para>Block storage (sometimes referred to as volume
 storage) exposes a block device to the user. Users
 interact with block storage by attaching volumes to
-their running VM instances. </para>
+their running VM instances.</para>
 <para>These volumes are persistent: they can be detached
 from one instance and re-attached to another, and the
 data remains intact. Block storage is implemented in
@@ -157,7 +156,7 @@ format="SVG" scale="60"/>
 solution before, have encountered this form of
 networked storage. In the Unix world, the most common
 form of this is NFS. In the Windows world, the most
-common form is called CIFS (previously, SMB). </para>
+common form is called CIFS (previously, SMB).</para>
 <para>OpenStack clouds do not present file-level storage
 to end users. However, it is important to consider
 file-level storage for storing instances under
@@ -210,7 +209,7 @@ format="SVG" scale="60"/>
 </itemizedlist>
 <para>To deploy your storage by using entirely commodity
 hardware, you can use a number of open-source packages, as
-shown in the following table: </para>
+shown in the following table:</para>
 <informaltable rules="all">
 <thead>
 <tr>
@@ -294,7 +293,7 @@ format="SVG" scale="60"/>
 xlink:title="OpenStack wiki"
 xlink:href="https://wiki.openstack.org/wiki/CinderSupportMatrix"
 >OpenStack wiki</link>
-(https://wiki.openstack.org/wiki/CinderSupportMatrix). </para>
+(https://wiki.openstack.org/wiki/CinderSupportMatrix).</para>
 <para>Also, you need to decide whether you want to support
 object storage in your cloud. The two common use cases for
 providing object storage in a compute cloud are:</para>
@@ -312,7 +311,7 @@ format="SVG" scale="60"/>
 <title>Commodity Storage Back-end Technologies</title>
 <para>This section provides a high-level overview of the
 differences among the different commodity storage
-back-end technologies. </para>
+back-end technologies.</para>
 
 <itemizedlist role="compact">
 <listitem>
@@ -330,7 +329,7 @@ format="SVG" scale="60"/>
 Dashboard interface), and better support for
 multiple data center deployment through
 support of asynchronous eventual consistency
-replication. </para>
+replication.</para>
 <para>Therefore, if you eventually plan on
 distributing your storage cluster across
 multiple data centers, if you need unified
@@ -347,7 +346,7 @@ format="SVG" scale="60"/>
 across commodity storage nodes. Ceph was
 originally developed by one of the founders of
 DreamHost and is currently used in production
-there. </para>
+there.</para>
 <para>Ceph was designed to expose different types
 of storage interfaces to the end-user: it
 supports object storage, block storage, and
@@ -358,13 +357,13 @@ format="SVG" scale="60"/>
 back-end for Cinder block storage, as well as
 back-end storage for Glance images. Ceph
 supports "thin provisioning", implemented
-using copy-on-write. </para>
+using copy-on-write.</para>
 <para>This can be useful when booting from volume
 because a new volume can be provisioned very
 quickly. Ceph also supports keystone-based
 authentication (as of version 0.56), so it can
 be a seamless swap in for the default
-OpenStack Swift implementation. </para>
+OpenStack Swift implementation.</para>
 <para>Ceph's advantages are that it gives the
 administrator more fine-grained control over
 data distribution and replication strategies,
@@ -377,7 +376,7 @@ format="SVG" scale="60"/>
 xlink:href="http://ceph.com/docs/master/faq/"
 >not yet recommended</link>
 (http://ceph.com/docs/master/faq/) for use in
-production deployment by the Ceph project. </para>
+production deployment by the Ceph project.</para>
 <para>If you wish to manage your object and block
 storage within a single system, or if you wish
 to support fast boot-from-volume, you should
@@ -391,7 +390,7 @@ format="SVG" scale="60"/>
 storage into one unified file and object
 storage solution, which is called Gluster UFO.
 Gluster UFO uses a customizes version of Swift
-that uses Gluster as the back-end. </para>
+that uses Gluster as the back-end.</para>
 <para>The main advantage of using Gluster UFO over
 regular Swift is if you also want to support a
 distributed file system, either to support
@@ -408,7 +407,7 @@ format="SVG" scale="60"/>
 physical disks to expose logical volumes to
 the operating system. The LVM (Logical Volume
 Manager) back-end implements block storage as
-LVM logical partitions. </para>
+LVM logical partitions.</para>
 <para>On each host that will house block storage,
 an administrator must initially create a
 volume group dedicated to Block Storage
@@ -435,7 +434,7 @@ format="SVG" scale="60"/>
 manager (LVM) and file system (such as, ext3,
 ext4, xfs, btrfs). ZFS has a number of
 advantages over ext4, including improved data
-integrity checking. </para>
+integrity checking.</para>
 <para>The ZFS back-end for OpenStack Block Storage
 only supports Solaris-based systems such as
 Illumos. While there is a Linux port of ZFS,
@@ -446,7 +445,7 @@ format="SVG" scale="60"/>
 hosts on its own, you need to add a
 replication solution on top of ZFS if your
 cloud needs to be able to handle storage node
-failures. </para>
+failures.</para>
 <para>We don't recommend ZFS unless you have
 previous experience with deploying it, since
 the ZFS back-end for Block Storage requires a
@@ -526,7 +525,7 @@ format="SVG" scale="60"/>
 traffic, which is predominantly "Do you have the
 object?"/"Yes I have the object!." Of course, if the
 answer to the aforementioned question is negative or times
-out, replication of the object begins. </para>
+out, replication of the object begins.</para>
 <para>Consider the scenario where an entire server fails, and
 24 TB of data needs to be transferred "immediately" to
 remain at three copies - this can put significant load on
@@ -545,7 +544,7 @@ format="SVG" scale="60"/>
 <para>The remaining point on bandwidth is the public facing
 portion. swift-proxy is stateless, which means that you
 can easily add more and use http load-balancing methods to
-share bandwidth and availability between them. </para>
+share bandwidth and availability between them.</para>
 <para>More proxies means more bandwidth, if your storage can
 keep up.</para>
 </section>
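The "24 TB transferred immediately" scenario in the Swift replication discussion above is easy to put rough numbers on. This is only a back-of-the-envelope sketch: it assumes replication traffic is limited solely by one NIC's line rate, ignoring protocol overhead, disk throughput, and the fact that replication fans out across many nodes.

```python
def rereplication_hours(data_tb: float, nic_gbps: float) -> float:
    """Hours to move data_tb terabytes over a nic_gbps link,
    ignoring protocol overhead and disk limits (best case)."""
    bits = data_tb * 1e12 * 8          # TB -> bits (decimal TB)
    seconds = bits / (nic_gbps * 1e9)  # bits / (bits per second)
    return seconds / 3600

print(round(rereplication_hours(24, 10), 1))  # 5.3 hours on a 10 Gbps link
```

Even under these generous assumptions, a full-server failure keeps the replication network busy for hours, which is why the chapter stresses provisioning back-end bandwidth separately from the public-facing proxy bandwidth.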
@@ -5,8 +5,6 @@
 <!ENTITY mdash "—">
 <!ENTITY hellip "…">
 <!ENTITY plusmn "±">
-
-
 ]>
 <chapter xmlns="http://docbook.org/ns/docbook"
 xmlns:xi="http://www.w3.org/2001/XInclude"
@@ -71,7 +69,7 @@
 backup_dir="/var/lib/backups/mysql"
 filename="${backup_dir}/mysql-`hostname`-`eval date +%Y%m%d`.sql.gz"
 # Dump the entire MySQL database
-/usr/bin/mysqldump --opt --all-databases | gzip > $filename 
+/usr/bin/mysqldump --opt --all-databases | gzip > $filename
 # Delete backups older than 7 days
 find $backup_dir -ctime +7 -type f -delete</programlisting>
 <para>This script dumps the entire MySQL database and delete
@@ -79,7 +77,7 @@ find $backup_dir -ctime +7 -type f -delete</programlisting>
 </section>
 <section xml:id="file_system_backups">
 <title>File System Backups</title>
-<para>This section discusses which files and directories should be backed up regularly, organized by service. </para>
+<para>This section discusses which files and directories should be backed up regularly, organized by service.</para>
 <section xml:id="compute">
 <title>Compute</title>
 <para>The <code>/etc/nova</code> directory on both the
@@ -5,8 +5,6 @@
 <!ENTITY mdash "—">
 <!ENTITY hellip "…">
 <!ENTITY plusmn "±">
-
-
 ]>
 <chapter xmlns="http://docbook.org/ns/docbook"
 xmlns:xi="http://www.w3.org/2001/XInclude"
@@ -319,10 +317,10 @@ HORIZON_BRANCH=stable/folsom</programlisting>
 what you really want to restrict it to is a set of IPs
 based on a whitelist.</para>
 <warning>
-<para> This example is for illustrative purposes only. It
+<para>This example is for illustrative purposes only. It
 should not be used as a container IP whitelist
 solution without further development and extensive
-security testing. </para>
+security testing.</para>
 </warning>
 <para>When you join the screen session that
 <code>stack.sh</code> starts with <code>screen -r
@@ -644,7 +642,7 @@ proxy-server IP 198.51.100.12 denied access to Account=AUTH_... Container=None.
 <code>pipeline</code> value in the project's
 <code>conf</code> or <code>ini</code> configuration
 files in <code>/etc/<project></code> to identify
-projects that use Paste. </para>
+projects that use Paste.</para>
 <para>When your middleware is done, we encourage you to open
 source it and let the community know on the OpenStack
 mailing list. Perhaps others need the same functionality.
@@ -710,7 +708,7 @@ proxy-server IP 198.51.100.12 denied access to Account=AUTH_... Container=None.
 <note>
 <para>This example is for illustrative purposes only. It
 should not be used as a scheduler for Nova without
-further development and testing. </para>
+further development and testing.</para>
 </note>
 <para>When you join the screen session that
 <code>stack.sh</code> starts with <code>screen -r
@ -5,8 +5,6 @@
<!ENTITY mdash "—">
<!ENTITY hellip "…">
<!ENTITY plusmn "±">


]>
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
@ -33,7 +31,7 @@
of from the Ubuntu or Fedora packages. The clients are
under heavy development and it is very likely at any given
time the version of the packages distributed by your
operating system vendor are out of date. </para>
operating system vendor are out of date.</para>
<para>The "pip" utility is used to manage package installation
from the PyPI archive and is available in the "python-pip"
package in most Linux distributions. Each OpenStack
@ -153,25 +151,25 @@
a file called <code>openrc.sh</code>, which looks
something like this:</para>
<programlisting><?db-font-size 60%?>#!/bin/bash


# With the addition of Keystone, to use an openstack cloud you should
# authenticate against keystone, which returns a **Token** and **Service
# Catalog**. The catalog contains the endpoint for all services the
# user/tenant has access to - including nova, glance, keystone, swift.
#
# *NOTE*: Using the 2.0 *auth api* does not mean that compute api is 2.0.
# *NOTE*: Using the 2.0 *auth api* does not mean that compute api is 2.0.
# We use the 1.1 *compute api*
export OS_AUTH_URL=http://203.0.113.10:5000/v2.0


# With the addition of Keystone we have standardized on the term **tenant**
# as the entity that owns the resources.
export OS_TENANT_ID=98333aba48e756fa8f629c83a818ad57
export OS_TENANT_NAME="test-project"


# In addition to the owning entity (tenant), openstack stores the entity
# performing the action as the **user**.
export OS_USERNAME=test-user


# With Keystone you pass the keystone password.
echo "Please enter your OpenStack Password: "
read -s OS_PASSWORD_INPUT
@ -202,7 +200,7 @@ export OS_PASSWORD=$OS_PASSWORD_INPUT</programlisting>
and pk.pem. The <code>ec2rc.sh</code> is similar to
this:</para>
<programlisting><?db-font-size 50%?>#!/bin/bash


NOVARC=$(readlink -f "${BASH_SOURCE:-${0}}" 2>/dev/null) ||\
NOVARC=$(python -c 'import os,sys; \
print os.path.abspath(os.path.realpath(sys.argv[1]))' "${BASH_SOURCE:-${0}}")
@ -214,7 +212,7 @@ export EC2_USER_ID=42 # nova does not use user id, but bundling requires it
export EC2_PRIVATE_KEY=${NOVA_KEY_DIR}/pk.pem
export EC2_CERT=${NOVA_KEY_DIR}/cert.pem
export NOVA_CERT=${NOVA_KEY_DIR}/cacert.pem
export EUCALYPTUS_CERT=${NOVA_CERT} # euca-bundle-image seems to require this
export EUCALYPTUS_CERT=${NOVA_CERT} # euca-bundle-image seems to require this

alias ec2-bundle-image="ec2-bundle-image --cert $EC2_CERT --privatekey \
$EC2_PRIVATE_KEY --user 42 --ec2cert $NOVA_CERT"
@ -243,7 +241,7 @@ $EC2_SECRET_KEY --url $S3_URL --ec2cert $NOVA_CERT"</programlisting>
>bug report</link>
(https://bugs.launchpad.net/python-novaclient/+bug/1020238)
which has been open, closed as invalid, and reopened
through a few cycles. </para>
through a few cycles.</para>
<para>The issue is that under some conditions the command
line tools try to use a Python keyring as a credential
cache and, under a subset of those conditions, another
@ -277,7 +275,7 @@ $EC2_SECRET_KEY --url $S3_URL --ec2cert $NOVA_CERT"</programlisting>

<para>The first thing you must do is authenticate with
the cloud using your credentials to get an
<glossterm>authentication token</glossterm>. </para>
<glossterm>authentication token</glossterm>.</para>
<para>Your credentials are a combination of username,
password, and tenant (project). You can extract
these values from the <code>openrc.sh</code>
@ -409,7 +407,7 @@ cloud.example.com nova</programlisting>
+-----+----------+----------+----------------------------+</programlisting>
<para>The output above shows that there are five services
configured.</para>
<para>To see the endpoint of each service, run: </para>
<para>To see the endpoint of each service, run:</para>
<programlisting><?db-font-size 60%?><prompt>$</prompt> keystone endpoint-list</programlisting>
<programlisting><?db-font-size 55%?>---+------------------------------------------+--
| publicurl |
@ -570,7 +568,7 @@ None 1.2.3.5 48a415e7-6f07-4d33-ad00-814e60b010ff no
<para>Sometimes a user and a group have a one-to-one
mapping. This happens for standard system accounts,
such as cinder, glance, nova, and swift, or when only
one user is ever part of a group. </para>
one user is ever part of a group.</para>
</note>
</section>
<section xml:id="running_instances">
@ -5,8 +5,6 @@
<!ENTITY mdash "—">
<!ENTITY hellip "…">
<!ENTITY plusmn "±">


]>
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
@ -128,7 +126,7 @@
framework conventions.</para>
<para>The first step in finding the source of an error is
typically to search for a CRITICAL, TRACE, or ERROR
message in the log starting at the bottom of the log file. </para>
message in the log starting at the bottom of the log file.</para>
<para>An example of a CRITICAL log message, with the
corresponding TRACE (Python traceback) immediately
following:</para>
@ -571,7 +569,7 @@ root 24121 0.0 0.0 11688 912 pts/5 S+ 13:07 0:00 grep nova-api</programlisting>
+-----------------------------------+------------+------------+---------------+</programlisting>
<para>The above was generated using a custom script which
can be found on GitHub
(https://github.com/cybera/novac/blob/dev/libexec/novac-quota-report). </para>
(https://github.com/cybera/novac/blob/dev/libexec/novac-quota-report).</para>
<note>
<para>This script is specific to a certain OpenStack
installation and must be modified to fit your
@ -5,8 +5,6 @@
<!ENTITY mdash "—">
<!ENTITY hellip "…">
<!ENTITY plusmn "±">


]>
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
@ -180,7 +178,7 @@
command to start each instance:</para>
<programlisting><?db-font-size 65%?># nova reboot <uuid></programlisting>
<note>
<para> Any time an instance shuts down unexpectedly,
<para>Any time an instance shuts down unexpectedly,
it might have problems on boot. For example, the
instance might require an <code>fsck</code> on the
root partition. If this happens, the user can use
@ -261,9 +259,9 @@
Id Name State
----------------------------------
1 instance-00000981 running
2 instance-000009f5 running
2 instance-000009f5 running
30 instance-0000274a running


root@compute-node:~# virsh suspend 30
Domain 30 suspended</programlisting>
</listitem>
@ -276,7 +274,7 @@ total 33M
-rw-r--r-- 1 libvirt-qemu kvm 33M Oct 15 22:06 disk
-rw-r--r-- 1 libvirt-qemu kvm 384K Oct 15 22:06 disk.local
-rw-rw-r-- 1 nova nova 1.7K Oct 15 11:30 libvirt.xml
root@compute-node:/var/lib/nova/instances/instance-0000274a# qemu-nbd -c /dev/nbd0 `pwd`/disk </programlisting>
root@compute-node:/var/lib/nova/instances/instance-0000274a# qemu-nbd -c /dev/nbd0 `pwd`/disk</programlisting>
</listitem>
<listitem>
<para>Mount the qemu-nbd device.</para>
@ -336,7 +334,7 @@ Id Name State
1 instance-00000981 running
2 instance-000009f5 running
30 instance-0000274a paused


root@compute-node:/var/lib/nova/instances/instance-0000274a# virsh resume 30
Domain 30 resumed</programlisting>
</listitem>
@ -348,9 +346,9 @@ Domain 30 resumed</programlisting>
<para>If the affected instances also had attached volumes,
first generate a list of instance and volume
UUIDs:</para>
<programlisting><?db-font-size 65%?>mysql> select nova.instances.uuid as instance_uuid, cinder.volumes.id as volume_uuid, cinder.volumes.status,
<programlisting><?db-font-size 65%?>mysql> select nova.instances.uuid as instance_uuid, cinder.volumes.id as volume_uuid, cinder.volumes.status,
cinder.volumes.attach_status, cinder.volumes.mountpoint, cinder.volumes.display_name from cinder.volumes
inner join nova.instances on cinder.volumes.instance_uuid=nova.instances.uuid
inner join nova.instances on cinder.volumes.instance_uuid=nova.instances.uuid
where nova.instances.host = 'c01.example.com';</programlisting>
<para>You should see a result like the following:</para>
<programlisting><?db-font-size 55%?>
@ -460,13 +458,13 @@ inner join nova.instances on cinder.volumes.instance_uuid=nova.instances.uuid
# swift-ring-builder object.builder remove <ip address of storage node>
# swift-ring-builder account.builder rebalance
# swift-ring-builder container.builder rebalance
# swift-ring-builder object.builder rebalance </programlisting>
# swift-ring-builder object.builder rebalance</programlisting>
<para>Next, redistribute the ring files to the other
nodes:</para>
<programlisting><?db-font-size 65%?># for i in s01.example.com s02.example.com s03.example.com
> do
> scp *.ring.gz $i:/etc/swift
> done </programlisting>
> done</programlisting>
<para>These actions effectively take the storage node out
of the storage cluster.</para>
<para>When the node is able to rejoin the cluster, just
@ -751,13 +749,13 @@ inner join nova.instances on cinder.volumes.instance_uuid=nova.instances.uuid
each OpenStack component accesses its corresponding
database. Look for either <code>sql_connection</code>
or simply <code>connection</code>:</para>
<programlisting><?db-font-size 65%?># grep -hE "connection ?=" /etc/nova/nova.conf /etc/glance/glance-*.conf
<programlisting><?db-font-size 65%?># grep -hE "connection ?=" /etc/nova/nova.conf /etc/glance/glance-*.conf
/etc/cinder/cinder.conf /etc/keystone/keystone.conf
sql_connection = mysql://nova:nova@cloud.alberta.sandbox.cybera.ca/nova
sql_connection = mysql://glance:password@cloud.example.com/glance
sql_connection = mysql://glance:password@cloud.example.com/glance
sql_connection=mysql://cinder:password@cloud.example.com/cinder
connection = mysql://keystone_admin:password@cloud.example.com/keystone</programlisting>
sql_connection = mysql://nova:nova@cloud.alberta.sandbox.cybera.ca/nova
sql_connection = mysql://glance:password@cloud.example.com/glance
sql_connection = mysql://glance:password@cloud.example.com/glance
sql_connection=mysql://cinder:password@cloud.example.com/cinder
connection = mysql://keystone_admin:password@cloud.example.com/keystone</programlisting>
<para>The connection strings take this format:</para>
<programlisting><?db-font-size 65%?>mysql:// <username> : <password> @ <hostname> / <database name></programlisting>
</section>
@ -957,7 +955,7 @@ inner join nova.instances on cinder.volumes.instance_uuid=nova.instances.uuid
that a certain instance was unable to be started. This
ended up being a red herring because the instance was
simply the first instance in alphabetical order, so it
was the first instance that nova-compute would touch. </para>
was the first instance that nova-compute would touch.</para>
<para>Further troubleshooting showed that libvirt was not
running at all. This made more sense. If libvirt
wasn't running, then no instance could be virtualized
@ -1081,7 +1079,7 @@ inner join nova.instances on cinder.volumes.instance_uuid=nova.instances.uuid
<section xml:id="uninstalling">
<?dbhtml stop-chunking?>
<title>Uninstalling</title>
<para>While we'd always recommend using your automated
<para>While we'd always recommend using your automated
deployment system to re-install systems from scratch,
sometimes you do need to remove OpenStack from a system
the hard way. Here's how:</para>
@ -1092,10 +1090,10 @@ inner join nova.instances on cinder.volumes.instance_uuid=nova.instances.uuid
</itemizedlist>
<para>These steps depend on your underlying distribution,
but in general you should be looking for 'purge' commands
in your package manager, like <literal> aptitude purge ~c $package</literal>.
in your package manager, like <literal>aptitude purge ~c $package</literal>.
Following this, you can look for orphaned files in the
directories referenced throughout this guide. For uninstalling
the database properly, refer to the manual appropriate for
the product in use. </para>
</section>
the product in use.</para>
</section>
</chapter>
@ -5,8 +5,6 @@
<!ENTITY mdash "—">
<!ENTITY hellip "…">
<!ENTITY plusmn "±">


]>
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
@ -32,10 +30,10 @@
difficulty, one good initial sanity check is to make sure
that your interfaces are up. For example:</para>
<programlisting>$ ip a | grep state
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master br100 state UP qlen 1000
4: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
4: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
6: br100: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP</programlisting>
<para>You can safely ignore the state of virbr0, which is a
default bridge created by libvirt and not used by
@ -58,7 +56,7 @@
<listitem>
<para>The instance generates a packet and places it on
the virtual NIC inside the instance, such as,
eth0. </para>
eth0.</para>
</listitem>
<listitem>
<para>The packet transfers to the virtual NIC of the
@ -72,9 +70,9 @@
bridge on the compute node, such as,
<code>br100.</code>
</para>
<para> If you run FlatDHCPManager, one bridge is on
<para>If you run FlatDHCPManager, one bridge is on
the compute node. If you run VlanManager, one
bridge exists for each VLAN. </para>
bridge exists for each VLAN.</para>
<para>To see which bridge the packet will use, run the
command:
<programlisting><prompt>$</prompt> brctl show</programlisting>
@ -160,23 +158,23 @@
addresses:</para>
<remark>DWC: Check formatting of the following:</remark>
<programlisting>
Instance
10.0.2.24
203.0.113.30
Compute Node
10.0.0.42
203.0.113.34
External Server
1.2.3.4
</programlisting>
Instance
10.0.2.24
203.0.113.30
Compute Node
10.0.0.42
203.0.113.34
External Server
1.2.3.4
</programlisting>
<para>Next, open a new shell to the instance and then ping the
external host where tcpdump is running. If the network
path to the external server and back is fully functional,
you see something like the following:</para>
<para>On the external server:</para>
<programlisting>12:51:42.020227 IP (tos 0x0, ttl 61, id 0, offset 0, flags [DF], proto ICMP (1), length 84)
203.0.113.30 > 1.2.3.4: ICMP echo request, id 24895, seq 1, length 64
12:51:42.020255 IP (tos 0x0, ttl 64, id 8137, offset 0, flags [none], proto ICMP (1), length 84)
203.0.113.30 > 1.2.3.4: ICMP echo request, id 24895, seq 1, length 64
12:51:42.020255 IP (tos 0x0, ttl 64, id 8137, offset 0, flags [none], proto ICMP (1), length 84)
1.2.3.4 > 203.0.113.30: ICMP echo reply, id 24895, seq 1, length 64</programlisting>
<para>On the Compute Node:</para>
<programlisting>12:51:42.019519 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto ICMP (1), length 84)
@ -221,19 +219,19 @@
networking information:</para>
<itemizedlist>
<listitem>
<para> fixed_ips: contains each possible IP address
<para>fixed_ips: contains each possible IP address
for the subnet(s) added to Nova. This table is
related to the instances table by way of the
fixed_ips.instance_uuid column.</para>
</listitem>
<listitem>
<para> floating_ips: contains each floating IP address
<para>floating_ips: contains each floating IP address
that was added to nova. This table is related to
the fixed_ips table by way of the
floating_ips.fixed_ip_id column.</para>
</listitem>
<listitem>
<para> instances: not entirely network specific, but
<para>instances: not entirely network specific, but
it contains information about the instance that is
utilizing the fixed_ip and optional
floating_ip.</para>
@ -307,12 +305,12 @@ wget: can't connect to remote host (169.254.169.254): Network is unreachable</pr
<para>Several minutes after nova-network is restarted, you
should see new dnsmasq processes running:</para>
<programlisting># ps aux | grep dnsmasq
nobody 3735 0.0 0.0 27540 1044 ? S 15:40 0:00 /usr/sbin/dnsmasq --strict-order --bind-interfaces --conf-file=
--domain=novalocal --pid-file=/var/lib/nova/networks/nova-br100.pid --listen-address=192.168.100.1
nobody 3735 0.0 0.0 27540 1044 ? S 15:40 0:00 /usr/sbin/dnsmasq --strict-order --bind-interfaces --conf-file=
--domain=novalocal --pid-file=/var/lib/nova/networks/nova-br100.pid --listen-address=192.168.100.1
--except-interface=lo --dhcp-range=set:'novanetwork',192.168.100.2,static,120s --dhcp-lease-max=256
--dhcp-hostsfile=/var/lib/nova/networks/nova-br100.conf --dhcp-script=/usr/bin/nova-dhcpbridge --leasefile-ro
root 3736 0.0 0.0 27512 444 ? S 15:40 0:00 /usr/sbin/dnsmasq --strict-order --bind-interfaces --conf-file=
--domain=novalocal --pid-file=/var/lib/nova/networks/nova-br100.pid --listen-address=192.168.100.1
root 3736 0.0 0.0 27512 444 ? S 15:40 0:00 /usr/sbin/dnsmasq --strict-order --bind-interfaces --conf-file=
--domain=novalocal --pid-file=/var/lib/nova/networks/nova-br100.pid --listen-address=192.168.100.1
--except-interface=lo --dhcp-range=set:'novanetwork',192.168.100.2,static,120s --dhcp-lease-max=256
--dhcp-hostsfile=/var/lib/nova/networks/nova-br100.conf --dhcp-script=/usr/bin/nova-dhcpbridge --leasefile-ro</programlisting>
<para>If your instances are still not able to obtain IP
@ -323,9 +321,9 @@ root 3736 0.0 0.0 27512 444 ? S 15:40 0:00 /usr/sbin/dnsmasq --strict-order --bi
see the dnsmasq output. If dnsmasq is seeing the request
properly and handing out an IP, the output looks
like:</para>
<programlisting>Feb 27 22:01:36 mynode dnsmasq-dhcp[2438]: DHCPDISCOVER(br100) fa:16:3e:56:0b:6f
Feb 27 22:01:36 mynode dnsmasq-dhcp[2438]: DHCPOFFER(br100) 192.168.100.3 fa:16:3e:56:0b:6f
Feb 27 22:01:36 mynode dnsmasq-dhcp[2438]: DHCPREQUEST(br100) 192.168.100.3 fa:16:3e:56:0b:6f
<programlisting>Feb 27 22:01:36 mynode dnsmasq-dhcp[2438]: DHCPDISCOVER(br100) fa:16:3e:56:0b:6f
Feb 27 22:01:36 mynode dnsmasq-dhcp[2438]: DHCPOFFER(br100) 192.168.100.3 fa:16:3e:56:0b:6f
Feb 27 22:01:36 mynode dnsmasq-dhcp[2438]: DHCPREQUEST(br100) 192.168.100.3 fa:16:3e:56:0b:6f
Feb 27 22:01:36 mynode dnsmasq-dhcp[2438]: DHCPACK(br100) 192.168.100.3 fa:16:3e:56:0b:6f test</programlisting>
<para>If you do not see the DHCPDISCOVER, a problem exists
with the packet getting from the instance to the machine
@ -352,11 +350,11 @@ Feb 27 22:01:36 mynode dnsmasq-dhcp[2438]: DHCPACK(br100) 192.168.100.3 fa:16:3e
--dhcp-lease-max=253 --dhcp-no-override
nobody 2438 0.0 0.0 27540 1096 ? S Feb26 0:00 /usr/sbin/dnsmasq --strict-order --bind-interfaces --conf-file=
--domain=novalocal --pid-file=/var/lib/nova/networks/nova-br100.pid --listen-address=192.168.100.1
--except-interface=lo --dhcp-range=set:'novanetwork',192.168.100.2,static,120s --dhcp-lease-max=256
--except-interface=lo --dhcp-range=set:'novanetwork',192.168.100.2,static,120s --dhcp-lease-max=256
--dhcp-hostsfile=/var/lib/nova/networks/nova-br100.conf --dhcp-script=/usr/bin/nova-dhcpbridge --leasefile-ro
root 2439 0.0 0.0 27512 472 ? S Feb26 0:00 /usr/sbin/dnsmasq --strict-order --bind-interfaces --conf-file=
--domain=novalocal --pid-file=/var/lib/nova/networks/nova-br100.pid --listen-address=192.168.100.1
--except-interface=lo --dhcp-range=set:'novanetwork',192.168.100.2,static,120s --dhcp-lease-max=256
root 2439 0.0 0.0 27512 472 ? S Feb26 0:00 /usr/sbin/dnsmasq --strict-order --bind-interfaces --conf-file=
--domain=novalocal --pid-file=/var/lib/nova/networks/nova-br100.pid --listen-address=192.168.100.1
--except-interface=lo --dhcp-range=set:'novanetwork',192.168.100.2,static,120s --dhcp-lease-max=256
--dhcp-hostsfile=/var/lib/nova/networks/nova-br100.conf --dhcp-script=/usr/bin/nova-dhcpbridge --leasefile-ro</programlisting>
<para>If the problem does not seem to be related to dnsmasq
itself, at this point, use tcpdump on the interfaces to
@ -387,8 +385,8 @@ root 2439 0.0 0.0 27512 472 ? S Feb26 0:00 /usr/sbin/dnsmasq --strict-order --bi
<para>When debugging DNS issues, start by making sure the host
where the dnsmasq process for that instance runs is able
to correctly resolve. If the host cannot resolve, then the
instances won't be able either. </para>
<para>A quick way to check if DNS is working is to
instances won't be able either.</para>
<para>A quick way to check if DNS is working is to
resolve a hostname inside your instance using the
<code>host</code> command. If DNS is working, you
should see:</para>
@ -692,7 +692,7 @@
purposefully show and administrative user where this value
is "admin".</para><important><para>The "admin" is global not per project so granting a user the
admin role in any project gives the administrative
rights across the whole cloud. </para></important>
rights across the whole cloud.</para></important>
<para>Typical use is to only create administrative users in a
single project, by convention the "admin" project which is
created by default during cloud setup. If your
@ -47,13 +47,13 @@
<link
xlink:href="http://csrc.nist.gov/publications/nistpubs/800-145/SP800-145.pdf"
>NIST Cloud Computing Definition</link>
(http://csrc.nist.gov/publications/nistpubs/800-145/SP800-145.pdf) </para>
(http://csrc.nist.gov/publications/nistpubs/800-145/SP800-145.pdf)</para>
<para>
<emphasis role="bold"> Python </emphasis>
</para>
<para>
<link xlink:href="http://www.diveintopython.net">Dive Into
Python</link> (http://www.diveintopython.net) </para>
Python</link> (http://www.diveintopython.net)</para>
<para>
<emphasis role="bold"> Networking </emphasis>
</para>
@ -61,33 +61,33 @@
<link
xlink:href="http://www.pearsonhighered.com/educator/product/TCPIP-Illustrated-Volume-1-The-Protocols/9780321336316.page"
>TCP/IP Illustrated</link>
(http://www.pearsonhighered.com/educator/product/TCPIP-Illustrated-Volume-1-The-Protocols/9780321336316.page) </para>
(http://www.pearsonhighered.com/educator/product/TCPIP-Illustrated-Volume-1-The-Protocols/9780321336316.page)</para>
<para>
<link xlink:href="http://nostarch.com/tcpip.htm">The TCP/IP
Guide</link> (http://nostarch.com/tcpip.htm) </para>
Guide</link> (http://nostarch.com/tcpip.htm)</para>
<para>
<link xlink:href="http://danielmiessler.com/study/tcpdump/">A
tcpdump Tutorial and Primer</link>
(http://danielmiessler.com/study/tcpdump/) </para>
(http://danielmiessler.com/study/tcpdump/)</para>
<para>
<emphasis role="bold"> Systems administration </emphasis>
</para>
<para>
<link xlink:href="http://www.admin.com/">UNIX and Linux
Systems Administration Handbook</link>
(http://www.admin.com/) </para>
(http://www.admin.com/)</para>
<para>
<emphasis role="bold"> Virtualization </emphasis>
</para>
<para>
<link xlink:href="http://nostarch.com/xen.htm">The Book of
Xen</link> (http://nostarch.com/xen.htm) </para>
Xen</link> (http://nostarch.com/xen.htm)</para>
<para>
<emphasis role="bold"> Configuration management </emphasis>
</para>
<para>
<link xlink:href="http://docs.puppetlabs.com/">Puppet Labs
Documentation</link> (http://docs.puppetlabs.com/) </para>
Documentation</link> (http://docs.puppetlabs.com/)</para>
<para>
<link xlink:href="http://www.apress.com/9781430230571">Pro
Puppet</link> (http://www.apress.com/9781430230571)
@ -5,8 +5,6 @@
<!ENTITY mdash "—">
<!ENTITY hellip "…">
<!ENTITY plusmn "±">


]>
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
@ -596,7 +594,7 @@
<section xml:id="contribute_to_docs">
<title>How to Contribute to the Documentation</title>
<para>OpenStack documentation efforts encompass operator and
administrator docs, API docs, and user docs. </para>
administrator docs, API docs, and user docs.</para>
<para>The genesis of this book was an in-person event, but now
that the book is in your hands we want you to contribute
to it. OpenStack documentation follows the coding
@ -5,8 +5,6 @@
<!ENTITY mdash "—">
<!ENTITY hellip "…">
<!ENTITY plusmn "±">


]>
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
@ -51,7 +49,7 @@
same thing is done when using the STDIN redirection
such as shown in the example.</para>
<para>Run the following command to view the properties of
existing images: </para>
existing images:</para>
<programlisting><?db-font-size 65%?>$ glance details</programlisting>
</section>
<section xml:id="delete_images">
@ -106,7 +104,6 @@
</section>

<section xml:id="flavors">

<title>Flavors</title>

<para>Virtual hardware templates are called "flavors" in OpenStack, defining sizes for RAM,
@ -251,7 +248,6 @@
modify a flavor by deleting an existing flavor and creating a new one with the same
name.</para>
</simplesect>

</section>
<?hard-pagebreak?>
<section xml:id="security_groups">
@ -307,12 +303,12 @@
|
||||
|
||||
<programlisting><prompt>$</prompt> nova secgroup-list-rules open</programlisting>
|
||||
|
||||
<programlisting><?db-font-size 65%?>+-------------+-----------+---------+-----------+--------------+
|
||||
| IP Protocol | From Port | To Port | IP Range | Source Group |
|
||||
+-------------+-----------+---------+-----------+--------------+
|
||||
| icmp | -1 | 255 | 0.0.0.0/0 | |
|
||||
| tcp | 1 | 65535 | 0.0.0.0/0 | |
|
||||
| udp | 1 | 65535 | 0.0.0.0/0 | |
|
||||
<programlisting><?db-font-size 65%?>+-------------+-----------+---------+-----------+--------------+
|
||||
| IP Protocol | From Port | To Port | IP Range | Source Group |
|
||||
+-------------+-----------+---------+-----------+--------------+
|
||||
| icmp | -1 | 255 | 0.0.0.0/0 | |
|
||||
| tcp | 1 | 65535 | 0.0.0.0/0 | |
|
||||
| udp | 1 | 65535 | 0.0.0.0/0 | |
|
||||
+-------------+-----------+---------+-----------+--------------+ </programlisting>
|
||||
<para>These rules are all "allow" type rules as the default is
|
||||
deny. The first column is the IP protocol (one of icmp,
|
||||
@ -378,20 +374,20 @@ $ nova secgroup-add-rule global_http tcp 80 80 0.0.0.0/0
|
||||
<para>The inverse operation is called secgroup-delete-rule,
|
||||
using the same format. Whole security groups can be
|
||||
removed with secgroup-delete.</para>
|
||||
<para> To create security group rules for a cluster of
|
||||
instances: </para>
|
||||
<para>To create security group rules for a cluster of
|
||||
instances:</para>
|
||||
<para>SourceGroups are a special dynamic way of defining the
|
||||
CIDR of allowed sources. The user specifies a SourceGroup
|
||||
(Security Group name), all the users' other Instances
|
||||
using the specified SourceGroup are selected dynamically.
|
||||
This alleviates the need for a individual rules to allow
|
||||
each new member of the cluster.usage: </para>
|
||||
each new member of the cluster.usage:</para>
|
||||
<para>usage: nova secgroup-add-group-rule <secgroup>
|
||||
<source-group> <ip-proto> <from-port>
|
||||
<to-port> </para>
|
||||
<programlisting><prompt>$</prompt> nova secgroup-add-group-rule cluster global-http tcp 22 22</programlisting>
|
||||
<para> The "cluster" rule allows ssh access from any other
|
||||
instance that uses the "global-http" group. </para>
|
||||
<para>The "cluster" rule allows ssh access from any other
|
||||
instance that uses the "global-http" group.</para>
|
||||
</section>
|
||||
<?hard-pagebreak?>
|
||||
<section xml:id="user_facing_block_storage">
|
||||
@ -466,7 +462,7 @@ Optional snapshot description. (Default=None)</programlisting>
|
||||
volume's UUID. First try the log files on the cloud
|
||||
controller and then try the storage node where they
|
||||
volume was attempted to be created:</para>
|
||||
<programlisting><?db-font-size 65%?><prompt>#</prompt> grep 903b85d0-bacc-4855-a261-10843fc2d65b /var/log/cinder/*.log </programlisting>
|
||||
<programlisting><?db-font-size 65%?><prompt>#</prompt> grep 903b85d0-bacc-4855-a261-10843fc2d65b /var/log/cinder/*.log</programlisting>
|
||||
</section>
|
||||
</section>
|
||||
<section xml:id="instances">
@@ -592,7 +588,7 @@ Optional snapshot description. (Default=None)</programlisting>
 <programlisting><?db-font-size 65%?><prompt>$</prompt> nova keypair-add --pub-key mykey.pub mykey</programlisting>
 <para>You must have the matching private key to access
 instances associated with this key.</para>
-<para> To associate a key with an instance on boot add
+<para>To associate a key with an instance on boot add
 --key_name mykey to your command line for
 example:</para>
 <programlisting><?db-font-size 65%?><prompt>$</prompt> nova boot --image ubuntu-cloudimage --flavor 1 --key_name mykey</programlisting>
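The `nova keypair-add --pub-key` form in this hunk uploads an existing public key; producing that key is an ordinary local `ssh-keygen` step. A sketch, with illustrative file names and no cloud involved:

```shell
# Generate a keypair locally; mykey.pub is what nova keypair-add would upload.
keydir=$(mktemp -d)
ssh-keygen -q -t rsa -N "" -f "$keydir/mykey"
# The private half stays with you; it is what you need later to
# ssh into instances booted with --key_name mykey.
ls -l "$keydir/mykey" "$keydir/mykey.pub"
```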
@@ -649,7 +645,7 @@ Optional snapshot description. (Default=None)</programlisting>
 system and then passed in at instance creation with
 the flag --user-data <user-data-file> for
 example:</para>
-<programlisting><?db-font-size 65%?><prompt>$</prompt> nova boot --image ubuntu-cloudimage --flavor 1 --user-data mydata.file </programlisting>
+<programlisting><?db-font-size 65%?><prompt>$</prompt> nova boot --image ubuntu-cloudimage --flavor 1 --user-data mydata.file</programlisting>
 <para>Arbitrary local files can also be placed into the
 instance file system at creation time using the --file
 <dst-path=src-path> option. You may store up to
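The `--user-data` flag in this hunk takes a local file. A sketch of preparing one — the cloud-config contents are illustrative and assume the image runs cloud-init, which the guide's ubuntu-cloudimage would:

```shell
# Write a minimal cloud-config user-data file (illustrative contents).
workdir=$(mktemp -d)
cat > "$workdir/mydata.file" <<'EOF'
#cloud-config
hostname: demo-instance
EOF
# It would then be passed at boot, as the hunk shows:
#   nova boot --image ubuntu-cloudimage --flavor 1 --user-data mydata.file
head -1 "$workdir/mydata.file"
```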
@@ -709,9 +705,9 @@ Optional snapshot description. (Default=None)</programlisting>
 <para>You can attach block storage to instances from the
 dashboard on the <guilabel>Volumes</guilabel> page. Click
 the <guibutton>Edit Attachments</guibutton> action next to
-the volume you wish to attach. </para>
+the volume you wish to attach.</para>
 <para>To perform this action from command line, run the
-following command: </para>
+following command:</para>
 <programlisting><prompt>$</prompt> nova volume-attach <server> <volume> </programlisting>
 <para>You can also specify block device mapping at instance
 boot time through the nova command-line client, as
@@ -797,7 +793,7 @@ Optional snapshot description. (Default=None)</programlisting>
 xlink:href="https://bugs.launchpad.net/nova/+bug/1163566"
 >1163566</link>
 (https://bugs.launchpad.net/nova/+bug/1163566) you must
-specify an image when booting from a volume in Horizon, 
+specify an image when booting from a volume in Horizon,
 even though this image is not used.</para>
 <para>To boot normally from an image and attach block storage,
 map to a device other than vda.</para>
@@ -25,7 +25,7 @@
 computing platform that meets the needs of public and private cloud
 providers regardless of size. OpenStack services control large pools
 of compute, storage, and networking resources throughout a data
-center. </para>
+center.</para>
 <para>Each service provides a REST API so that all these resources can
 be managed through a dashboard that gives administrators control
 while empowering users to provision resources through a web
@@ -71,7 +71,7 @@
 Linux machines for networking. You must install and maintain a
 MySQL database, and occasionally run SQL queries against it.
 </para>
-<para> One of the most complex aspects of an OpenStack cloud is the
+<para>One of the most complex aspects of an OpenStack cloud is the
 networking configuration. You should be familiar with concepts such
 as DHCP, Linux bridges, VLANs, and iptables. You must also have
 access to a network hardware expert who can configure the switches
@@ -121,7 +121,7 @@
 users, give them quotas to parcel out resources, and so on.</para>
 <para>Chapter 10: User-facing Operations: This chapter moves along to
 show you how to use OpenStack cloud resources and train your users
-as well. </para>
+as well.</para>
 <para>Chapter 11: Maintenance, Failures, and Debugging: This chapter
 goes into the common failures the authors have seen while running
 clouds in production, including troubleshooting.</para>
@@ -132,10 +132,10 @@
 debugging related services like DHCP and DNS.</para>
 <para>Chapter 13: Logging and Monitoring: This chapter shows you where
 OpenStack places logs and how to best to read and manage logs for
-monitoring purposes. </para>
+monitoring purposes.</para>
 <para>Chapter 14: Backup and Recovery: This chapter describes what you
 need to back up within OpenStack as well as best practices for
-recovering backups. </para>
+recovering backups.</para>
 <para>Chapter 15: Customize: When you need to get a specialized feature
 into OpenStack, this chapter describes how to use DevStack to write
 custom middleware or a custom scheduler to rebalance your
@@ -179,7 +179,7 @@
 non-trivial OpenStack cloud. After you read this guide,
 you'll know which questions to ask and how to organize
 your compute, networking, storage resources, and the
-associated software packages. </para>
+associated software packages.</para>
 </listitem>
 <listitem>
 <para>Perform the day-to-day tasks required to administer a
@@ -191,7 +191,7 @@
 the <link xlink:href="http://www.booksprints.net">Book Sprint
 site</link>. Your authors cobbled this book together in five
 days during February 2013, fueled by caffeine and the best take-out
-food that Austin, Texas could offer. </para>
+food that Austin, Texas could offer.</para>
 <para>On the first day we filled white boards with colorful sticky notes
 to start to shape this nebulous book about how to architect and
 operate clouds. <informalfigure>
@@ -310,7 +310,7 @@
 <literal>ops-guide</literal> tag to indicate that the
 bug is in this guide. You can assign the bug to yourself
 if you know how to fix it. Also, a member of the OpenStack
-doc-core team can triage the doc bug. </para>
+doc-core team can triage the doc bug.</para>
 </section>
 
 </preface>