diff --git a/doc/openstack-ops/app_crypt.xml b/doc/openstack-ops/app_crypt.xml
index 26d5876a..13617cc0 100644
--- a/doc/openstack-ops/app_crypt.xml
+++ b/doc/openstack-ops/app_crypt.xml
@@ -23,7 +23,7 @@
automated: Cobbler deployed the OS on the bare metal,
bootstrapped it, and Puppet took over from there. I had
run the deployment scenario so many times in practice and
- took for granted that everything was working.
+ took for granted that everything was working.
On my last day in Kelowna, I was in a conference call
from my hotel. In the background, I was fooling around on
the new cloud. I launched an instance and logged in.
@@ -42,7 +42,7 @@
the unfortunate conclusion that this cloud did indeed have
a problem. Even worse, my time was up in Kelowna and I had
to return back to Calgary.
- Where do you even begin troubleshooting something like
+ Where do you even begin troubleshooting something like
this? An instance just randomly locks when a command is
issued. Is it the image? Nope — it happens on all images.
Is it the compute node? Nope — all nodes. Is the instance
@@ -105,7 +105,7 @@
the public internet, it should no longer have a VLAN.
False. Uh oh. It looked as though the VLAN part of the
packet was not being removed.
- That made no sense.
+                    That made no sense. While bouncing this idea around in our heads, I was
randomly typing commands on the compute node:
$ ip a
@@ -153,7 +153,7 @@
                    A few nights later, it happened again. We reviewed both sets of logs. The one thing that stood
out the most was DHCP. At the time, OpenStack, by default,
- set DHCP leases for one minute (it's now two minutes).
+ set DHCP leases for one minute (it's now two minutes).
This means that every instance
contacts the cloud controller (DHCP server) to renew its
fixed IP. For some reason, this instance could not renew
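                    As a hedged aside, one way to confirm what lease interval a deployment like this is actually
                    handing out is to look at the running dnsmasq process and the nova-network option that drives
                    it (the 120-second value shown here is illustrative):
$ ps aux | grep dnsmasq | grep -o "dhcp-range=[^ ]*"
dhcp-range=set:'novanetwork',192.168.100.2,static,120s
$ grep dhcp_lease_time /etc/nova/nova.conf
dhcp_lease_time=120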
@@ -376,7 +376,7 @@
This past Valentine's Day, I received an alert that a
compute node was no longer available in the cloud
— meaning,
- $ nova-manage service list
+                    $ nova-manage service list
showed this particular node with a status of
                    XXX. I logged into the cloud controller and was able to both
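                    For reference, a sketch of what that listing looks like when a node drops out; the hostnames and
                    timestamps are invented, but a service with a stale heartbeat reports XXX in the State column
                    while healthy services report a smiley:
$ nova-manage service list
Binary           Host              Zone   Status    State  Updated_At
nova-compute     c01.example.com   nova   enabled   XXX    2013-02-15 01:40:22
nova-network     c01.example.com   nova   enabled   :-)    2013-02-15 02:11:05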
@@ -431,7 +431,7 @@
coming from the two compute nodes and immediately shut the
ports down to prevent spanning tree loops:
Feb 15 01:40:18 SW-1 Stp: %SPANTREE-4-BLOCK_BPDUGUARD: Received BPDU packet on Port-Channel35 with BPDU guard enabled. Disabling interface. (source mac fa:16:3e:24:e7:22)
-Feb 15 01:40:18 SW-1 Ebra: %ETH-4-ERRDISABLE: bpduguard error detected on Port-Channel35.
+Feb 15 01:40:18 SW-1 Ebra: %ETH-4-ERRDISABLE: bpduguard error detected on Port-Channel35.
Feb 15 01:40:18 SW-1 Mlag: %MLAG-4-INTF_INACTIVE_LOCAL: Local interface Port-Channel35 is link down. MLAG 35 is inactive.
Feb 15 01:40:18 SW-1 Ebra: %LINEPROTO-5-UPDOWN: Line protocol on Interface Port-Channel35 (Server35), changed state to down
Feb 15 01:40:19 SW-1 Stp: %SPANTREE-6-INTERFACE_DEL: Interface Port-Channel35 has been removed from instance MST0
diff --git a/doc/openstack-ops/bk_ops_guide.xml b/doc/openstack-ops/bk_ops_guide.xml
index 962ed1c9..1ccbb098 100644
--- a/doc/openstack-ops/bk_ops_guide.xml
+++ b/doc/openstack-ops/bk_ops_guide.xml
@@ -1,5 +1,5 @@
-
diff --git a/doc/openstack-ops/ch_arch_cloud_controller.xml b/doc/openstack-ops/ch_arch_cloud_controller.xml
index 37f1d4fd..6fffd082 100644
--- a/doc/openstack-ops/ch_arch_cloud_controller.xml
+++ b/doc/openstack-ops/ch_arch_cloud_controller.xml
@@ -5,8 +5,6 @@
-
-
]>
cloud controller.
+                    cloud controller. For more details about the overall architecture, see the
- .
+                    . As described in this guide, the cloud controller is a single
node that hosts the databases, message queue service,
authentication and authorization service, image management
@@ -29,7 +27,7 @@
The cloud controller provides the central management system
for multi-node OpenStack deployments. Typically the cloud
controller manages authentication and sends messaging to all
- the systems through a message queue.
+                    the systems through a message queue. For our example, the cloud controller has a collection of
nova-* components that represent the global
state of the cloud, talks to services such as authentication,
@@ -44,7 +42,7 @@
                    Hardware Considerations
                    A cloud controller's hardware can be the same as a
compute node, though you may want to further specify based
- on the size and type of cloud that you run.
+                    on the size and type of cloud that you run. It's also possible to use virtual machines for all or
some of the services that the cloud controller manages,
such as the message queuing. In this guide, we assume that
@@ -273,12 +271,12 @@
EC2 compatibility APIs, or just the OpenStack APIs. One
issue you might encounter when running both APIs is an
inconsistent experience when referring to images and
- instances.
+                    instances. For example, the EC2 API refers to instances using IDs
that contain hexadecimal whereas the OpenStack API uses
names and digits. Similarly, the EC2 API tends to rely on
DNS aliases for contacting virtual machines, as opposed to
- OpenStack which typically lists IP addresses.
+ OpenStack which typically lists IP addresses.
If OpenStack is not set up in the right way, it is
simple to have scenarios where users are unable to contact
their instances due to only having an incorrect DNS alias.
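                    A quick, hedged illustration of the mismatch; the UUID, reservation, and instance identifiers
                    below are made up, but the shape of the output is roughly what users see from each API:
$ nova list
+--------------------------------------+-------+--------+----------------------+
| ID                                   | Name  | Status | Networks             |
+--------------------------------------+-------+--------+----------------------+
| 4c6f2a74-6c33-4f93-9e5e-0f2a16076cbd | web01 | ACTIVE | novanetwork=10.1.0.2 |
+--------------------------------------+-------+--------+----------------------+
$ euca-describe-instances
RESERVATION   r-7a3bfe12   98333aba48e756fa8f629c83a818ad57   default
INSTANCE      i-0000015b   ami-00000003   ...   running   mykey   ...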
@@ -320,7 +318,7 @@
flavors) into different sized
physical nova-compute nodes is a challenging problem -
researched generically in Computer Science as a packing
- problem.
+ problem.
You can use various techniques to handle this problem
though solving this problem is out of the scope of this
book. To support your scheduling choices, OpenStack
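                    For context, a hedged sketch of the nova.conf settings involved; the filter list shown reflects
                    common defaults of this era and should be verified against your release:
scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler
scheduler_default_filters=AvailabilityZoneFilter,RamFilter,ComputeFilter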
@@ -359,7 +357,7 @@
store the images as files.
- S3. Allows you to fetch images from Amazon S3.
+                    S3. Allows you to fetch images from Amazon S3. HTTP. Allows you to fetch images from a web
@@ -381,7 +379,6 @@
API servers (including their admin endpoints) over the
network.
-
                    Authentication and Authorization
                    The concepts supporting OpenStack's authentication and
@@ -389,7 +386,7 @@
used systems of a similar nature. Users have credentials
they can use to authenticate, and they can be a member of
one or more groups (known as projects or tenants
- interchangeably).
+ interchangeably).
For example, a cloud administrator might be able to list
all instances in the cloud, whereas a user can only see
                    those in their current group. Resource quotas, such as
@@ -438,7 +435,7 @@
networking where the cloud controller is the network
gateway for all instances, then the Cloud Controller must
support the total amount of traffic that travels between
- your cloud and the public Internet.
+ your cloud and the public Internet.
We recommend that you use a fast NIC, such as 10 GB. You
can also choose to use two 10 GB NICs and bond them
together. While you might not be able to get a full bonded
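                    A minimal sketch of what bonding two NICs can look like on an Ubuntu controller, assuming the
                    ifenslave package is installed; the address, interface names, and the 802.3ad mode (which needs
                    matching switch configuration) are illustrative:
auto bond0
iface bond0 inet static
    address 203.0.113.10
    netmask 255.255.255.0
    bond-slaves eth2 eth3
    bond-mode 802.3ad
    bond-miimon 100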
diff --git a/doc/openstack-ops/ch_arch_compute_nodes.xml b/doc/openstack-ops/ch_arch_compute_nodes.xml
index f99912a4..e4c7f619 100644
--- a/doc/openstack-ops/ch_arch_compute_nodes.xml
+++ b/doc/openstack-ops/ch_arch_compute_nodes.xml
@@ -5,8 +5,6 @@
-
-
]>
+ appropriate in your case.
@@ -62,7 +60,7 @@
hypervisor is your current usage or experience. Aside from
that, there are practical concerns to do with feature
parity, documentation, and the level of community
- experience.
+ experience.
For example, KVM is the most widely adopted hypervisor
in the OpenStack community. Besides KVM, more deployments
exist running Xen, LXC, VMWare and Hyper-V than the others
@@ -93,7 +91,7 @@
instantiated instance runs. There are three main
approaches to providing this temporary-style storage, and
it is important to understand the implications of the
- choice.
+ choice.
They are:
@@ -147,7 +145,7 @@
long as you don't have any instances currently running
on a compute host, you can take it offline or wipe it
completely without having any effect on the rest of
- your cloud.
+ your cloud.
However, if you are more restricted in the number of
physical hosts you have available for creating your
cloud and you want to be able to dedicate as many of
@@ -198,7 +196,7 @@
distributed file system ties the disks from each
compute node into a single mount. The main advantage
of this option is that it scales to external storage
- when you require additional storage.
+ when you require additional storage.
However, this option has several downsides:
@@ -271,7 +269,7 @@
ability to seamlessly move instances from one physical
host to another, a necessity for performing upgrades
that require reboots of the compute hosts, but only
- works well with shared storage.
+ works well with shared storage.
Live migration can be also done with non-shared storage, using a feature known as
KVM live block migration. While an earlier implementation
of block-based migration in KVM and QEMU was considered unreliable, there is a
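                    A hedged sketch of both variants from the command line; the target hostname is illustrative and
                    the UUID placeholder follows the convention used elsewhere in this guide:
$ nova live-migration <uuid> c02.example.com                  # shared storage
$ nova live-migration --block-migrate <uuid> c02.example.com  # KVM block migration, no shared storage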
@@ -283,7 +281,7 @@
                    Choice of File System
                    If you want to support shared storage live
migration, you'll need to configure a distributed file
- system.
+                    system. Possible options include:
@@ -330,7 +328,7 @@
that the scheduler allocates instances to a physical node
as long as the total amount of RAM associated with the
instances is less than 1.5 times the amount of RAM
- available on the physical node.
+ available on the physical node.
For example, if a physical node has 48 GB of RAM, the
scheduler allocates instances to that node until the sum
of the RAM associated with the instances reaches 72 GB
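                    To make the arithmetic concrete, a sketch of the setting that produces this behaviour in
                    nova.conf (1.5 is the default ratio described above):
ram_allocation_ratio=1.5
# 48 GB physical RAM x 1.5 = 72 GB schedulable, e.g. nine 8 GB (m1.large) instances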
@@ -344,7 +342,7 @@
Logging is detailed more fully in . However it is an important design
consideration to take into account before commencing
- operations of your cloud.
+                    operations of your cloud. OpenStack produces a great deal of useful logging
                    information; however, for it to be useful for
                    operations purposes you should consider having a central
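                    A minimal rsyslog forwarding sketch for such a central log server, placed in /etc/rsyslog.d/ on
                    every node (the hostname is illustrative; a single @ forwards over UDP, @@ would use TCP):
*.* @logs.example.com:514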
diff --git a/doc/openstack-ops/ch_arch_example.xml b/doc/openstack-ops/ch_arch_example.xml
index cf19e140..2b90a144 100644
--- a/doc/openstack-ops/ch_arch_example.xml
+++ b/doc/openstack-ops/ch_arch_example.xml
@@ -343,7 +343,7 @@
>OpenStack Install and Deploy Manual - Ubuntu
(http://docs.openstack.org/havana/install-guide/install/apt/),
which contains a step-by-step guide on how to manually install
- the OpenStack packages and dependencies on your cloud.
+ the OpenStack packages and dependencies on your cloud.
While it is important for an operator to be familiar with
the steps involved in deploying OpenStack, we also strongly
encourage you to evaluate configuration management tools such
diff --git a/doc/openstack-ops/ch_arch_network_design.xml b/doc/openstack-ops/ch_arch_network_design.xml
index df19fcdd..71104ce8 100644
--- a/doc/openstack-ops/ch_arch_network_design.xml
+++ b/doc/openstack-ops/ch_arch_network_design.xml
@@ -5,8 +5,6 @@
-
-
]>
Flat
-
Extremely simple. No DHCP
+
                    Extremely simple. No DHCP
broadcasts.
Requires file injection into the
- instance. Limited to certain
+                    instance. Limited to certain
distributions of Linux.
Difficult to configure and is not
recommended.
@@ -187,7 +185,7 @@
FlatDHCP
Relatively simple to setup.
- Standard networking. Works
+                    Standard networking. Works
with all operating systems.
Requires its own DHCP broadcast
domain.
@@ -198,24 +196,24 @@
VLANs.
More complex to set up.
Requires its own DHCP broadcast
- domain. Requires many VLANs
+                    domain. Requires many VLANs
to be trunked onto a single
- port. Standard VLAN number
- limitation. Switches must
+                    port. Standard VLAN number
+                    limitation. Switches must
support 802.1q VLAN tagging.
FlatDHCP Multi-host HA
Networking failure is isolated to the
VMs running on the hypervisor
- affected. DHCP traffic can be
+                    affected. DHCP traffic can be
isolated within an individual
- host. Network traffic is
+                    host. Network traffic is
distributed to the compute
nodes.
-
More complex to set up. By
+
                    More complex to set up. By
default, compute nodes need public IP
- addresses. Options must be
+                    addresses. Options must be
carefully configured for live migration to
work with networking.
diff --git a/doc/openstack-ops/ch_arch_provision.xml b/doc/openstack-ops/ch_arch_provision.xml
index b72dcae7..1dd15b2b 100644
--- a/doc/openstack-ops/ch_arch_provision.xml
+++ b/doc/openstack-ops/ch_arch_provision.xml
@@ -5,8 +5,6 @@
-
-
]>
A critical part of a cloud's scalability is the amount of
effort that it takes to run your cloud. To minimize the
operational cost of running your cloud, set up and use an
- automated deployment and configuration infrastructure.
+ automated deployment and configuration infrastructure.
This infrastructure includes systems to automatically
install the operating system's initial configuration and later
coordinate the configuration of all services automatically and
@@ -162,7 +160,7 @@
management tools ensures that components of the cloud
systems are in particular states, in addition to
simplifying deployment, and configuration change
- propagation.
+ propagation.
These tools also make it possible to test and roll back
changes, as they are fully repeatable. Conveniently, a
large body of work has been done by the OpenStack
@@ -181,7 +179,7 @@
to the servers running the cloud, and many don't
necessarily enjoy visiting the data center. OpenStack
should be entirely remotely configurable, but sometimes
- not everything goes according to plan.
+ not everything goes according to plan.
                    In this instance, having out-of-band access to the
                    nodes running OpenStack components is a boon. The IPMI
protocol is the de-facto standard here, and acquiring
diff --git a/doc/openstack-ops/ch_arch_scaling.xml b/doc/openstack-ops/ch_arch_scaling.xml
index 565c75c8..2b4b4dae 100644
--- a/doc/openstack-ops/ch_arch_scaling.xml
+++ b/doc/openstack-ops/ch_arch_scaling.xml
@@ -5,8 +5,6 @@
-
-
]>
However, you need more than the core count alone to
estimate the load that the API services, database servers,
and queue servers are likely to encounter. You must also
- consider the usage patterns of your cloud.
+ consider the usage patterns of your cloud.
As a specific example, compare a cloud that supports a
managed web hosting platform with one running integration
tests for a development project that creates one VM per
@@ -113,7 +111,6 @@
constant heavy load on the cloud controller. You must
consider your average VM lifetime, as a larger number
generally means less load on the cloud controller.
-
Aside from the creation and termination of VMs, you must
consider the impact of users accessing the service
— particularly on nova-api and its associated database.
@@ -249,7 +246,7 @@
A different API endpoint for
- every region.
+                    every region. Each region has a full nova
@@ -335,7 +332,7 @@
                    Availability Zones and Host Aggregates
                    You can use availability zones, host aggregates, or
- both to partition a nova deployment.
+                    both to partition a nova deployment. Availability zones are implemented through and
                    configured in a similar way to host aggregates. However, you use an availability zone and a host
@@ -347,7 +344,7 @@
and provides a form of physical isolation and
redundancy from other availability zones, such
as by using separate power supply or network
- equipment.
+ equipment.
You define the availability zone in which a
specified Compute host resides locally on each
server. An availability zone is commonly used
@@ -357,7 +354,7 @@
power source, you can put servers in those
racks in their own availability zone.
Availability zones can also help separate
- different classes of hardware.
+ different classes of hardware.
When users provision resources, they can
specify from which availability zone they
would like their instance to be built. This
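                    As a hedged example of such a request (the zone name is invented; the image and flavor echo
                    names used elsewhere in this guide):
$ nova boot --image ubuntu-cloudimage --flavor 2 --availability-zone az-west web01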
@@ -384,7 +381,7 @@
provide information for use with the
nova-scheduler. For example, you might use a
host aggregate to group a set of hosts that
- share specific flavors or images.
+ share specific flavors or images.
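                    A sketch of wiring such an aggregate to a flavor; the names are invented, "1" is the aggregate id
                    returned by the create step, and steering on the extra spec assumes the
                    AggregateInstanceExtraSpecsFilter is enabled in the scheduler:
$ nova aggregate-create ssd-hosts
$ nova aggregate-add-host 1 c01.example.com
$ nova aggregate-set-metadata 1 ssd=true
$ nova flavor-key m1.ssd.large set ssd=true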
Previously, all services had an availability zone. Currently,
only the nova-compute service has its own
@@ -394,11 +391,11 @@
appear in their own internal availability zone
(CONF.internal_service_availability_zone):
- nova host-list (os-hosts)
+                    nova host-list (os-hosts)
                    euca-describe-availability-zones
- verbose
+                    verbose
                    nova-manage service list
@@ -408,7 +405,7 @@
(non-verbose).
CONF.node_availability_zone has been renamed to
CONF.default_availability_zone and is only used by
- the nova-api and nova-scheduler services.
+ the nova-api and nova-scheduler services.
CONF.node_availability_zone still works but is
deprecated.
diff --git a/doc/openstack-ops/ch_arch_storage.xml b/doc/openstack-ops/ch_arch_storage.xml
index 368804b7..5c3cab0c 100644
--- a/doc/openstack-ops/ch_arch_storage.xml
+++ b/doc/openstack-ops/ch_arch_storage.xml
@@ -10,8 +10,7 @@
-'>
-
+'>
]>
storage system, as an alternative to storing the
images on a file system.
-
+
                    Block Storage
                    Block storage (sometimes referred to as volume
storage) exposes a block device to the user. Users
interact with block storage by attaching volumes to
- their running VM instances.
+ their running VM instances.
These volumes are persistent: they can be detached
from one instance and re-attached to another, and the
data remains intact. Block storage is implemented in
@@ -157,7 +156,7 @@ format="SVG" scale="60"/>
solution before, have encountered this form of
networked storage. In the Unix world, the most common
form of this is NFS. In the Windows world, the most
- common form is called CIFS (previously, SMB).
+ common form is called CIFS (previously, SMB).
OpenStack clouds do not present file-level storage
to end users. However, it is important to consider
file-level storage for storing instances under
@@ -210,7 +209,7 @@ format="SVG" scale="60"/>
To deploy your storage by using entirely commodity
hardware, you can use a number of open-source packages, as
- shown in the following table:
+ shown in the following table:
@@ -294,7 +293,7 @@ format="SVG" scale="60"/>
xlink:title="OpenStack wiki"
xlink:href="https://wiki.openstack.org/wiki/CinderSupportMatrix"
>OpenStack wiki
- (https://wiki.openstack.org/wiki/CinderSupportMatrix).
+ (https://wiki.openstack.org/wiki/CinderSupportMatrix).
Also, you need to decide whether you want to support
object storage in your cloud. The two common use cases for
providing object storage in a compute cloud are:
@@ -312,7 +311,7 @@ format="SVG" scale="60"/>
                    Commodity Storage Back-end Technologies
                    This section provides a high-level overview of the
differences among the different commodity storage
- back-end technologies.
+ back-end technologies.
@@ -330,7 +329,7 @@ format="SVG" scale="60"/>
Dashboard interface), and better support for
multiple data center deployment through
support of asynchronous eventual consistency
- replication.
+ replication.
Therefore, if you eventually plan on
distributing your storage cluster across
multiple data centers, if you need unified
@@ -347,7 +346,7 @@ format="SVG" scale="60"/>
across commodity storage nodes. Ceph was
originally developed by one of the founders of
DreamHost and is currently used in production
- there.
+ there.
Ceph was designed to expose different types
of storage interfaces to the end-user: it
supports object storage, block storage, and
@@ -358,13 +357,13 @@ format="SVG" scale="60"/>
back-end for Cinder block storage, as well as
back-end storage for Glance images. Ceph
supports "thin provisioning", implemented
- using copy-on-write.
+ using copy-on-write.
This can be useful when booting from volume
because a new volume can be provisioned very
quickly. Ceph also supports keystone-based
authentication (as of version 0.56), so it can
be a seamless swap in for the default
- OpenStack Swift implementation.
+ OpenStack Swift implementation.
Ceph's advantages are that it gives the
administrator more fine-grained control over
data distribution and replication strategies,
@@ -377,7 +376,7 @@ format="SVG" scale="60"/>
xlink:href="http://ceph.com/docs/master/faq/"
>not yet recommended
(http://ceph.com/docs/master/faq/) for use in
- production deployment by the Ceph project.
+ production deployment by the Ceph project.
If you wish to manage your object and block
storage within a single system, or if you wish
to support fast boot-from-volume, you should
@@ -391,7 +390,7 @@ format="SVG" scale="60"/>
storage into one unified file and object
storage solution, which is called Gluster UFO.
                    Gluster UFO uses a customized version of Swift
- that uses Gluster as the back-end.
+ that uses Gluster as the back-end.
The main advantage of using Gluster UFO over
regular Swift is if you also want to support a
distributed file system, either to support
@@ -408,7 +407,7 @@ format="SVG" scale="60"/>
physical disks to expose logical volumes to
the operating system. The LVM (Logical Volume
Manager) back-end implements block storage as
- LVM logical partitions.
+ LVM logical partitions.
On each host that will house block storage,
an administrator must initially create a
volume group dedicated to Block Storage
@@ -435,7 +434,7 @@ format="SVG" scale="60"/>
manager (LVM) and file system (such as, ext3,
ext4, xfs, btrfs). ZFS has a number of
advantages over ext4, including improved data
- integrity checking.
+ integrity checking.
The ZFS back-end for OpenStack Block Storage
only supports Solaris-based systems such as
Illumos. While there is a Linux port of ZFS,
@@ -446,7 +445,7 @@ format="SVG" scale="60"/>
hosts on its own, you need to add a
replication solution on top of ZFS if your
cloud needs to be able to handle storage node
- failures.
+ failures.
We don't recommend ZFS unless you have
previous experience with deploying it, since
the ZFS back-end for Block Storage requires a
@@ -526,7 +525,7 @@ format="SVG" scale="60"/>
traffic, which is predominantly "Do you have the
object?"/"Yes I have the object!." Of course, if the
answer to the aforementioned question is negative or times
- out, replication of the object begins.
+ out, replication of the object begins.
Consider the scenario where an entire server fails, and
24 TB of data needs to be transferred "immediately" to
remain at three copies - this can put significant load on
@@ -545,7 +544,7 @@ format="SVG" scale="60"/>
The remaining point on bandwidth is the public facing
portion. swift-proxy is stateless, which means that you
can easily add more and use http load-balancing methods to
- share bandwidth and availability between them.
+ share bandwidth and availability between them.More proxies means more bandwidth, if your storage can
keep up.
diff --git a/doc/openstack-ops/ch_ops_backup_recovery.xml b/doc/openstack-ops/ch_ops_backup_recovery.xml
index 5f8a2b08..68afa5ee 100644
--- a/doc/openstack-ops/ch_ops_backup_recovery.xml
+++ b/doc/openstack-ops/ch_ops_backup_recovery.xml
@@ -5,8 +5,6 @@
-
-
]>
                    This script dumps the entire MySQL database and deletes
@@ -79,7 +77,7 @@ find $backup_dir -ctime +7 -type f -delete
File System Backups
- This section discusses which files and directories should be backed up regularly, organized by service.
+                    This section discusses which files and directories should be backed up regularly, organized by service.
                    Compute
                    The /etc/nova directory on both the
diff --git a/doc/openstack-ops/ch_ops_customize.xml b/doc/openstack-ops/ch_ops_customize.xml
index d6f7f27f..e519b372 100644
--- a/doc/openstack-ops/ch_ops_customize.xml
+++ b/doc/openstack-ops/ch_ops_customize.xml
@@ -5,8 +5,6 @@
-
-
]>
what you really want to restrict it to is a set of IPs
based on a whitelist.
- This example is for illustrative purposes only. It
+ This example is for illustrative purposes only. It
should not be used as a container IP whitelist
solution without further development and extensive
- security testing.
+                    security testing. When you join the screen session that
stack.sh starts with screen -r
@@ -644,7 +642,7 @@ proxy-server IP 198.51.100.12 denied access to Account=AUTH_... Container=None.
pipeline value in the project's
conf or ini configuration
files in /etc/<project> to identify
- projects that use Paste.
+                    projects that use Paste. When your middleware is done, we encourage you to open
source it and let the community know on the OpenStack
mailing list. Perhaps others need the same functionality.
@@ -710,7 +708,7 @@ proxy-server IP 198.51.100.12 denied access to Account=AUTH_... Container=None.
This example is for illustrative purposes only. It
should not be used as a scheduler for Nova without
- further development and testing.
+ further development and testing.When you join the screen session that
stack.sh starts with screen -r
diff --git a/doc/openstack-ops/ch_ops_lay_of_land.xml b/doc/openstack-ops/ch_ops_lay_of_land.xml
index 70e3998d..9f0148a8 100644
--- a/doc/openstack-ops/ch_ops_lay_of_land.xml
+++ b/doc/openstack-ops/ch_ops_lay_of_land.xml
@@ -5,8 +5,6 @@
-
-
]>
+                    operating system vendor are out of date. The "pip" utility is used to manage package installation
from the PyPI archive and is available in the "python-pip"
package in most Linux distributions. Each OpenStack
@@ -153,25 +151,25 @@
a file called openrc.sh, which looks
something like this:#!/bin/bash
-
+
# With the addition of Keystone, to use an openstack cloud you should
# authenticate against keystone, which returns a **Token** and **Service
# Catalog**. The catalog contains the endpoint for all services the
# user/tenant has access to - including nova, glance, keystone, swift.
#
-# *NOTE*: Using the 2.0 *auth api* does not mean that compute api is 2.0.
+# *NOTE*: Using the 2.0 *auth api* does not mean that compute api is 2.0.
# We use the 1.1 *compute api*
export OS_AUTH_URL=http://203.0.113.10:5000/v2.0
-
+
# With the addition of Keystone we have standardized on the term **tenant**
# as the entity that owns the resources.
export OS_TENANT_ID=98333aba48e756fa8f629c83a818ad57
export OS_TENANT_NAME="test-project"
-
+
# In addition to the owning entity (tenant), openstack stores the entity
# performing the action as the **user**.
export OS_USERNAME=test-user
-
+
# With Keystone you pass the keystone password.
echo "Please enter your OpenStack Password: "
read -s OS_PASSWORD_INPUT
@@ -202,7 +200,7 @@ export OS_PASSWORD=$OS_PASSWORD_INPUT
and pk.pem. The ec2rc.sh is similar to
this:
#!/bin/bash
-
+
NOVARC=$(readlink -f "${BASH_SOURCE:-${0}}" 2>/dev/null) ||\
NOVARC=$(python -c 'import os,sys; \
print os.path.abspath(os.path.realpath(sys.argv[1]))' "${BASH_SOURCE:-${0}}")
@@ -214,7 +212,7 @@ export EC2_USER_ID=42 # nova does not use user id, but bundling requires it
export EC2_PRIVATE_KEY=${NOVA_KEY_DIR}/pk.pem
export EC2_CERT=${NOVA_KEY_DIR}/cert.pem
export NOVA_CERT=${NOVA_KEY_DIR}/cacert.pem
-export EUCALYPTUS_CERT=${NOVA_CERT} # euca-bundle-image seems to require this
+export EUCALYPTUS_CERT=${NOVA_CERT} # euca-bundle-image seems to require this
alias ec2-bundle-image="ec2-bundle-image --cert $EC2_CERT --privatekey \
$EC2_PRIVATE_KEY --user 42 --ec2cert $NOVA_CERT"
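                    Typical usage of either rc file is simply to source it in the current shell before running the
                    clients; openrc.sh prompts for the password, after which commands such as these pick up the
                    exported variables:
$ source openrc.sh
Please enter your OpenStack Password:
$ nova list
$ source ec2rc.sh
$ euca-describe-instances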
@@ -243,7 +241,7 @@ $EC2_SECRET_KEY --url $S3_URL --ec2cert $NOVA_CERT"
>bug report
(https://bugs.launchpad.net/python-novaclient/+bug/1020238)
which has been open, closed as invalid, and reopened
- through a few cycles.
+ through a few cycles.
The issue is that under some conditions the command
line tools try to use a Python keyring as a credential
cache and, under a subset of those conditions, another
@@ -277,7 +275,7 @@ $EC2_SECRET_KEY --url $S3_URL --ec2cert $NOVA_CERT"
The first thing you must do is authenticate with
the cloud using your credentials to get an
- authentication token.
+                    authentication token. Your credentials are a combination of username,
password, and tenant (project). You can extract
these values from the openrc.sh
@@ -409,7 +407,7 @@ cloud.example.com nova
+-----+----------+----------+----------------------------+
The output above shows that there are five services
configured.
- To see the endpoint of each service, run:
+                    To see the endpoint of each service, run:
$ keystone endpoint-list
---+------------------------------------------+--
| publicurl |
@@ -570,7 +568,7 @@ None 1.2.3.5 48a415e7-6f07-4d33-ad00-814e60b010ff no
Sometimes a user and a group have a one-to-one
mapping. This happens for standard system accounts,
such as cinder, glance, nova, and swift, or when only
- one user is ever part of a group.
+ one user is ever part of a group.
diff --git a/doc/openstack-ops/ch_ops_log_monitor.xml b/doc/openstack-ops/ch_ops_log_monitor.xml
index b91e230b..bf99afd9 100644
--- a/doc/openstack-ops/ch_ops_log_monitor.xml
+++ b/doc/openstack-ops/ch_ops_log_monitor.xml
@@ -5,8 +5,6 @@
-
-
]>
The first step in finding the source of an error is
typically to search for a CRITICAL, TRACE, or ERROR
- message in the log starting at the bottom of the log file.
+ message in the log starting at the bottom of the log file.
An example of a CRITICAL log message, with the
corresponding TRACE (Python traceback) immediately
following:
@@ -571,7 +569,7 @@ root 24121 0.0 0.0 11688 912 pts/5 S+ 13:07 0:00 grep nova-api
+-----------------------------------+------------+------------+---------------+
The above was generated using a custom script which
can be found on GitHub
- (https://github.com/cybera/novac/blob/dev/libexec/novac-quota-report).
+ (https://github.com/cybera/novac/blob/dev/libexec/novac-quota-report).
This script is specific to a certain OpenStack
installation and must be modified to fit your
diff --git a/doc/openstack-ops/ch_ops_maintenance.xml b/doc/openstack-ops/ch_ops_maintenance.xml
index 3fca1dd3..bc2f5598 100644
--- a/doc/openstack-ops/ch_ops_maintenance.xml
+++ b/doc/openstack-ops/ch_ops_maintenance.xml
@@ -5,8 +5,6 @@
-
-
]>
# nova reboot <uuid>
- Any time an instance shuts down unexpectedly,
+ Any time an instance shuts down unexpectedly,
it might have problems on boot. For example, the
instance might require an fsck on the
root partition. If this happens, the user can use
@@ -261,9 +259,9 @@
Id Name State
----------------------------------
1 instance-00000981 running
-2 instance-000009f5 running
+2 instance-000009f5 running
30 instance-0000274a running
-
+
root@compute-node:~# virsh suspend 30
Domain 30 suspended
@@ -276,7 +274,7 @@ total 33M
-rw-r--r-- 1 libvirt-qemu kvm 33M Oct 15 22:06 disk
-rw-r--r-- 1 libvirt-qemu kvm 384K Oct 15 22:06 disk.local
-rw-rw-r-- 1 nova nova 1.7K Oct 15 11:30 libvirt.xml
-root@compute-node:/var/lib/nova/instances/instance-0000274a# qemu-nbd -c /dev/nbd0 `pwd`/disk
+root@compute-node:/var/lib/nova/instances/instance-0000274a# qemu-nbd -c /dev/nbd0 `pwd`/disk
Mount the qemu-nbd device.
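                    A hedged sketch of that step and of the cleanup afterwards; /dev/nbd0p1 assumes the nbd module
                    exposes partition devices (for example, loaded with max_part=16):
root@compute-node:/var/lib/nova/instances/instance-0000274a# mount /dev/nbd0p1 /mnt
# ... inspect or repair files under /mnt, then release the device ...
root@compute-node:/var/lib/nova/instances/instance-0000274a# umount /mnt
root@compute-node:/var/lib/nova/instances/instance-0000274a# qemu-nbd -d /dev/nbd0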
@@ -336,7 +334,7 @@ Id Name State
1 instance-00000981 running
2 instance-000009f5 running
30 instance-0000274a paused
-
+
root@compute-node:/var/lib/nova/instances/instance-0000274a# virsh resume 30
Domain 30 resumed
@@ -348,9 +346,9 @@ Domain 30 resumed
If the affected instances also had attached volumes,
first generate a list of instance and volume
UUIDs:
- mysql> select nova.instances.uuid as instance_uuid, cinder.volumes.id as volume_uuid, cinder.volumes.status,
+ mysql> select nova.instances.uuid as instance_uuid, cinder.volumes.id as volume_uuid, cinder.volumes.status,
cinder.volumes.attach_status, cinder.volumes.mountpoint, cinder.volumes.display_name from cinder.volumes
-inner join nova.instances on cinder.volumes.instance_uuid=nova.instances.uuid
+inner join nova.instances on cinder.volumes.instance_uuid=nova.instances.uuid
where nova.instances.host = 'c01.example.com';
                    You should see a result like the following:
@@ -460,13 +458,13 @@ inner join nova.instances on cinder.volumes.instance_uuid=nova.instances.uuid
# swift-ring-builder object.builder remove <ip address of storage node>
# swift-ring-builder account.builder rebalance
# swift-ring-builder container.builder rebalance
-# swift-ring-builder object.builder rebalance
+# swift-ring-builder object.builder rebalance
                    Next, redistribute the ring files to the other
                    nodes:
# for i in s01.example.com s02.example.com s03.example.com
> do
> scp *.ring.gz $i:/etc/swift
-> done
+> done
These actions effectively take the storage node out
                    of the storage cluster. When the node is able to rejoin the cluster, just
@@ -751,13 +749,13 @@ inner join nova.instances on cinder.volumes.instance_uuid=nova.instances.uuid
each OpenStack component accesses its corresponding
database. Look for either sql_connection
or simply connection:
- # grep -hE "connection ?=" /etc/nova/nova.conf /etc/glance/glance-*.conf
+ # grep -hE "connection ?=" /etc/nova/nova.conf /etc/glance/glance-*.conf
/etc/cinder/cinder.conf /etc/keystone/keystone.conf
- sql_connection = mysql://nova:nova@cloud.alberta.sandbox.cybera.ca/nova
- sql_connection = mysql://glance:password@cloud.example.com/glance
- sql_connection = mysql://glance:password@cloud.example.com/glance
- sql_connection=mysql://cinder:password@cloud.example.com/cinder
- connection = mysql://keystone_admin:password@cloud.example.com/keystone
+ sql_connection = mysql://nova:nova@cloud.alberta.sandbox.cybera.ca/nova
+ sql_connection = mysql://glance:password@cloud.example.com/glance
+ sql_connection = mysql://glance:password@cloud.example.com/glance
+ sql_connection=mysql://cinder:password@cloud.example.com/cinder
+        connection = mysql://keystone_admin:password@cloud.example.com/keystone
                    The connection strings take this format:
                    mysql:// <username> : <password> @ <hostname> / <database name>
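                    The same values can be reused to connect by hand when you need to inspect a table directly; a
                    hedged sketch using the nova credentials and an example hostname from the strings above (adjust
                    to whichever service you are inspecting, and treat the query as illustrative only):
# mysql -u nova -p -h cloud.example.com nova
mysql> select count(*) from instances where deleted = 0;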
@@ -957,7 +955,7 @@ inner join nova.instances on cinder.volumes.instance_uuid=nova.instances.uuid
that a certain instance was unable to be started. This
ended up being a red herring because the instance was
simply the first instance in alphabetical order, so it
- was the first instance that nova-compute would touch.
+ was the first instance that nova-compute would touch.
Further troubleshooting showed that libvirt was not
running at all. This made more sense. If libvirt
wasn't running, then no instance could be virtualized
@@ -1081,7 +1079,7 @@ inner join nova.instances on cinder.volumes.instance_uuid=nova.instances.uuid
Uninstalling
- While we'd always recommend using your automated
+ While we'd always recommend using your automated
deployment system to re-install systems from scratch,
sometimes you do need to remove OpenStack from a system
the hard way. Here's how:
@@ -1092,10 +1090,10 @@ inner join nova.instances on cinder.volumes.instance_uuid=nova.instances.uuid
These steps depend on your underlying distribution,
but in general you should be looking for 'purge' commands
- in your package manager, like aptitude purge ~c $package.
+ in your package manager, like aptitude purge ~c $package.
Following this, you can look for orphaned files in the
directories referenced throughout this guide. For uninstalling
the database properly, refer to the manual appropriate for
- the product in use.
-
+ the product in use.
+
diff --git a/doc/openstack-ops/ch_ops_network_troubleshooting.xml b/doc/openstack-ops/ch_ops_network_troubleshooting.xml
index 582c4702..89782c5b 100644
--- a/doc/openstack-ops/ch_ops_network_troubleshooting.xml
+++ b/doc/openstack-ops/ch_ops_network_troubleshooting.xml
@@ -5,8 +5,6 @@
-
-
]>
$ ip a | grep state
-1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
+1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master br100 state UP qlen 1000
-4: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
+4: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
6: br100: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
                    You can safely ignore the state of virbr0, which is a
default bridge created by libvirt and not used by
@@ -58,7 +56,7 @@
The instance generates a packet and places it on
the virtual NIC inside the instance, such as,
- eth0.
+ eth0.The packet transfers to the virtual NIC of the
@@ -72,9 +70,9 @@
bridge on the compute node, such as,
br100.
- If you run FlatDHCPManager, one bridge is on
+ If you run FlatDHCPManager, one bridge is on
the compute node. If you run VlanManager, one
- bridge exists for each VLAN.
+ bridge exists for each VLAN.To see which bridge the packet will use, run the
command:
$ brctl show
@@ -160,23 +158,23 @@
                    addresses:
                    DWC: Check formatting of the following:
- Instance
- 10.0.2.24
- 203.0.113.30
- Compute Node
- 10.0.0.42
- 203.0.113.34
- External Server
- 1.2.3.4
-
+ Instance
+ 10.0.2.24
+ 203.0.113.30
+ Compute Node
+ 10.0.0.42
+ 203.0.113.34
+ External Server
+ 1.2.3.4
+
Next, open a new shell to the instance and then ping the
external host where tcpdump is running. If the network
path to the external server and back is fully functional,
                    you see something like the following:
                    On the external server:
12:51:42.020227 IP (tos 0x0, ttl 61, id 0, offset 0, flags [DF], proto ICMP (1), length 84)
- 203.0.113.30 > 1.2.3.4: ICMP echo request, id 24895, seq 1, length 64
-12:51:42.020255 IP (tos 0x0, ttl 64, id 8137, offset 0, flags [none], proto ICMP (1), length 84)
+ 203.0.113.30 > 1.2.3.4: ICMP echo request, id 24895, seq 1, length 64
+12:51:42.020255 IP (tos 0x0, ttl 64, id 8137, offset 0, flags [none], proto ICMP (1), length 84)
 1.2.3.4 > 203.0.113.30: ICMP echo reply, id 24895, seq 1, length 64
                    On the Compute Node:
12:51:42.019519 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto ICMP (1), length 84)
@@ -221,19 +219,19 @@
networking information:
- fixed_ips: contains each possible IP address
+ fixed_ips: contains each possible IP address
for the subnet(s) added to Nova. This table is
related to the instances table by way of the
fixed_ips.instance_uuid column.
- floating_ips: contains each floating IP address
+ floating_ips: contains each floating IP address
that was added to nova. This table is related to
the fixed_ips table by way of the
floating_ips.fixed_ip_id column.
- instances: not entirely network specific, but
+ instances: not entirely network specific, but
it contains information about the instance that is
utilizing the fixed_ip and optional
floating_ip.
@@ -307,12 +305,12 @@ wget: can't connect to remote host (169.254.169.254): Network is unreachable
                    Several minutes after nova-network is restarted, you
                    should see new dnsmasq processes running:
# ps aux | grep dnsmasq
-nobody 3735 0.0 0.0 27540 1044 ? S 15:40 0:00 /usr/sbin/dnsmasq --strict-order --bind-interfaces --conf-file=
- --domain=novalocal --pid-file=/var/lib/nova/networks/nova-br100.pid --listen-address=192.168.100.1
+nobody 3735 0.0 0.0 27540 1044 ? S 15:40 0:00 /usr/sbin/dnsmasq --strict-order --bind-interfaces --conf-file=
+ --domain=novalocal --pid-file=/var/lib/nova/networks/nova-br100.pid --listen-address=192.168.100.1
--except-interface=lo --dhcp-range=set:'novanetwork',192.168.100.2,static,120s --dhcp-lease-max=256
--dhcp-hostsfile=/var/lib/nova/networks/nova-br100.conf --dhcp-script=/usr/bin/nova-dhcpbridge --leasefile-ro
-root 3736 0.0 0.0 27512 444 ? S 15:40 0:00 /usr/sbin/dnsmasq --strict-order --bind-interfaces --conf-file=
- --domain=novalocal --pid-file=/var/lib/nova/networks/nova-br100.pid --listen-address=192.168.100.1
+root 3736 0.0 0.0 27512 444 ? S 15:40 0:00 /usr/sbin/dnsmasq --strict-order --bind-interfaces --conf-file=
+ --domain=novalocal --pid-file=/var/lib/nova/networks/nova-br100.pid --listen-address=192.168.100.1
--except-interface=lo --dhcp-range=set:'novanetwork',192.168.100.2,static,120s --dhcp-lease-max=256
 --dhcp-hostsfile=/var/lib/nova/networks/nova-br100.conf --dhcp-script=/usr/bin/nova-dhcpbridge --leasefile-ro
                    If your instances are still not able to obtain IP
@@ -323,9 +321,9 @@ root 3736 0.0 0.0 27512 444 ? S 15:40 0:00 /usr/sbin/dnsmasq --strict-order --bi
see the dnsmasq output. If dnsmasq is seeing the request
properly and handing out an IP, the output looks
like:
- Feb 27 22:01:36 mynode dnsmasq-dhcp[2438]: DHCPDISCOVER(br100) fa:16:3e:56:0b:6f
-Feb 27 22:01:36 mynode dnsmasq-dhcp[2438]: DHCPOFFER(br100) 192.168.100.3 fa:16:3e:56:0b:6f
-Feb 27 22:01:36 mynode dnsmasq-dhcp[2438]: DHCPREQUEST(br100) 192.168.100.3 fa:16:3e:56:0b:6f
+ Feb 27 22:01:36 mynode dnsmasq-dhcp[2438]: DHCPDISCOVER(br100) fa:16:3e:56:0b:6f
+Feb 27 22:01:36 mynode dnsmasq-dhcp[2438]: DHCPOFFER(br100) 192.168.100.3 fa:16:3e:56:0b:6f
+Feb 27 22:01:36 mynode dnsmasq-dhcp[2438]: DHCPREQUEST(br100) 192.168.100.3 fa:16:3e:56:0b:6f
Feb 27 22:01:36 mynode dnsmasq-dhcp[2438]: DHCPACK(br100) 192.168.100.3 fa:16:3e:56:0b:6f test
                    If you do not see the DHCPDISCOVER, a problem exists
with the packet getting from the instance to the machine
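                    If the logs show nothing at all, a hedged next step is to watch the DHCP ports on the bridge
                    directly while the instance retries (the bridge name is illustrative):
# tcpdump -i br100 -n port 67 or port 68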
@@ -352,11 +350,11 @@ Feb 27 22:01:36 mynode dnsmasq-dhcp[2438]: DHCPACK(br100) 192.168.100.3 fa:16:3e
--dhcp-lease-max=253 --dhcp-no-override
nobody 2438 0.0 0.0 27540 1096 ? S Feb26 0:00 /usr/sbin/dnsmasq --strict-order --bind-interfaces --conf-file=
--domain=novalocal --pid-file=/var/lib/nova/networks/nova-br100.pid --listen-address=192.168.100.1
- --except-interface=lo --dhcp-range=set:'novanetwork',192.168.100.2,static,120s --dhcp-lease-max=256
+ --except-interface=lo --dhcp-range=set:'novanetwork',192.168.100.2,static,120s --dhcp-lease-max=256
--dhcp-hostsfile=/var/lib/nova/networks/nova-br100.conf --dhcp-script=/usr/bin/nova-dhcpbridge --leasefile-ro
-root 2439 0.0 0.0 27512 472 ? S Feb26 0:00 /usr/sbin/dnsmasq --strict-order --bind-interfaces --conf-file=
- --domain=novalocal --pid-file=/var/lib/nova/networks/nova-br100.pid --listen-address=192.168.100.1
- --except-interface=lo --dhcp-range=set:'novanetwork',192.168.100.2,static,120s --dhcp-lease-max=256
+root 2439 0.0 0.0 27512 472 ? S Feb26 0:00 /usr/sbin/dnsmasq --strict-order --bind-interfaces --conf-file=
+ --domain=novalocal --pid-file=/var/lib/nova/networks/nova-br100.pid --listen-address=192.168.100.1
+ --except-interface=lo --dhcp-range=set:'novanetwork',192.168.100.2,static,120s --dhcp-lease-max=256
 --dhcp-hostsfile=/var/lib/nova/networks/nova-br100.conf --dhcp-script=/usr/bin/nova-dhcpbridge --leasefile-ro
                    If the problem does not seem to be related to dnsmasq
itself, at this point, use tcpdump on the interfaces to
@@ -387,8 +385,8 @@ root 2439 0.0 0.0 27512 472 ? S Feb26 0:00 /usr/sbin/dnsmasq --strict-order --bi
When debugging DNS issues, start by making sure the host
where the dnsmasq process for that instance runs is able
to correctly resolve. If the host cannot resolve, then the
- instances won't be able either.
- A quick way to check if DNS is working is to
+                    instances won't be able to either.
+ A quick way to check if DNS is working is to
resolve a hostname inside your instance using the
host command. If DNS is working, you
should see:
diff --git a/doc/openstack-ops/ch_ops_projects_users.xml b/doc/openstack-ops/ch_ops_projects_users.xml
index 5576e6c7..c79d86b6 100644
--- a/doc/openstack-ops/ch_ops_projects_users.xml
+++ b/doc/openstack-ops/ch_ops_projects_users.xml
@@ -692,7 +692,7 @@
                    purposefully show an administrative user where this value
                    is "admin". The "admin" role is global, not per project, so granting a user the
                    admin role in any project gives them administrative
- rights across the whole cloud.
+ rights across the whole cloud.
Typical use is to only create administrative users in a
single project, by convention the "admin" project which is
created by default during cloud setup. If your
diff --git a/doc/openstack-ops/ch_ops_resources.xml b/doc/openstack-ops/ch_ops_resources.xml
index 69e598bd..65eb21e8 100644
--- a/doc/openstack-ops/ch_ops_resources.xml
+++ b/doc/openstack-ops/ch_ops_resources.xml
@@ -47,13 +47,13 @@
NIST Cloud Computing Definition
- (http://csrc.nist.gov/publications/nistpubs/800-145/SP800-145.pdf)
+ (http://csrc.nist.gov/publications/nistpubs/800-145/SP800-145.pdf)
Python
Dive Into
- Python (http://www.diveintopython.net)
+ Python (http://www.diveintopython.net)
Networking
@@ -61,33 +61,33 @@
TCP/IP Illustrated
- (http://www.pearsonhighered.com/educator/product/TCPIP-Illustrated-Volume-1-The-Protocols/9780321336316.page)
+ (http://www.pearsonhighered.com/educator/product/TCPIP-Illustrated-Volume-1-The-Protocols/9780321336316.page)
The TCP/IP
- Guide (http://nostarch.com/tcpip.htm)
+ Guide (http://nostarch.com/tcpip.htm)
A
tcpdump Tutorial and Primer
- (http://danielmiessler.com/study/tcpdump/)
+ (http://danielmiessler.com/study/tcpdump/)
Systems administration
UNIX and Linux
Systems Administration Handbook
- (http://www.admin.com/)
+ (http://www.admin.com/)
Virtualization
The Book of
- Xen (http://nostarch.com/xen.htm)
+ Xen (http://nostarch.com/xen.htm)
Configuration management
Puppet Labs
- Documentation (http://docs.puppetlabs.com/)
+ Documentation (http://docs.puppetlabs.com/)
Pro
Puppet (http://www.apress.com/9781430230571)
diff --git a/doc/openstack-ops/ch_ops_upstream.xml b/doc/openstack-ops/ch_ops_upstream.xml
index 1c321b3b..d0f074e7 100644
--- a/doc/openstack-ops/ch_ops_upstream.xml
+++ b/doc/openstack-ops/ch_ops_upstream.xml
@@ -5,8 +5,6 @@
-
-
]>
                    How to Contribute to the Documentation
                    OpenStack documentation efforts encompass operator and
- administrator docs, API docs, and user docs.
+ administrator docs, API docs, and user docs.The genesis of this book was an in-person event, but now
that the book is in your hands we want you to contribute
to it. OpenStack documentation follows the coding
diff --git a/doc/openstack-ops/ch_ops_user_facing.xml b/doc/openstack-ops/ch_ops_user_facing.xml
index 7db2be34..eb17ec2c 100644
--- a/doc/openstack-ops/ch_ops_user_facing.xml
+++ b/doc/openstack-ops/ch_ops_user_facing.xml
@@ -5,8 +5,6 @@
-
-
]>
Run the following command to view the properties of
- existing images:
+                    existing images:
$ glance details
@@ -106,7 +104,6 @@
-
                    Flavors
                    Virtual hardware templates are called "flavors" in OpenStack, defining sizes for RAM,
@@ -251,7 +248,6 @@
modify a flavor by deleting an existing flavor and creating a new one with the same
name.
-
@@ -307,12 +303,12 @@
$ nova secgroup-list-rules open
- +-------------+-----------+---------+-----------+--------------+
- | IP Protocol | From Port | To Port | IP Range | Source Group |
- +-------------+-----------+---------+-----------+--------------+
- | icmp | -1 | 255 | 0.0.0.0/0 | |
- | tcp | 1 | 65535 | 0.0.0.0/0 | |
- | udp | 1 | 65535 | 0.0.0.0/0 | |
+ +-------------+-----------+---------+-----------+--------------+
+ | IP Protocol | From Port | To Port | IP Range | Source Group |
+ +-------------+-----------+---------+-----------+--------------+
+ | icmp | -1 | 255 | 0.0.0.0/0 | |
+ | tcp | 1 | 65535 | 0.0.0.0/0 | |
+ | udp | 1 | 65535 | 0.0.0.0/0 | |
+-------------+-----------+---------+-----------+--------------+
                    These rules are all "allow" type rules as the default is
deny. The first column is the IP protocol (one of icmp,
@@ -378,20 +374,20 @@ $ nova secgroup-add-rule global_http tcp 80 80 0.0.0.0/0
The inverse operation is called secgroup-delete-rule,
using the same format. Whole security groups can be
removed with secgroup-delete.
- To create security group rules for a cluster of
- instances:
+ To create security group rules for a cluster of
+                    instances:
                    SourceGroups are a special dynamic way of defining the
CIDR of allowed sources. The user specifies a SourceGroup
                    (Security Group name), and all of the user's other Instances
                    using the specified SourceGroup are selected dynamically.
                    This alleviates the need for individual rules to allow
- each new member of the cluster.usage:
+                    each new member of the cluster.
                    usage: nova secgroup-add-group-rule <secgroup>
<source-group> <ip-proto> <from-port>
<to-port> $ nova secgroup-add-group-rule cluster global-http tcp 22 22
- The "cluster" rule allows ssh access from any other
- instance that uses the "global-http" group.
+ The "cluster" rule allows ssh access from any other
+ instance that uses the "global-http" group.
@@ -466,7 +462,7 @@ Optional snapshot description. (Default=None)
volume's UUID. First try the log files on the cloud
                    controller and then try the storage node where the
volume was attempted to be created:
- # grep 903b85d0-bacc-4855-a261-10843fc2d65b /var/log/cinder/*.log
+ # grep 903b85d0-bacc-4855-a261-10843fc2d65b /var/log/cinder/*.log
@@ -592,7 +588,7 @@ Optional snapshot description. (Default=None)
$ nova keypair-add --pub-key mykey.pub mykey
                    You must have the matching private key to access
instances associated with this key.
- To associate a key with an instance on boot add
+                    To associate a key with an instance on boot, add
--key_name mykey to your command line for
                    example:
$ nova boot --image ubuntu-cloudimage --flavor 1 --key_name mykey
@@ -649,7 +645,7 @@ Optional snapshot description. (Default=None)
system and then passed in at instance creation with
the flag --user-data <user-data-file> for
example:
- $ nova boot --image ubuntu-cloudimage --flavor 1 --user-data mydata.file
+                    $ nova boot --image ubuntu-cloudimage --flavor 1 --user-data mydata.file
                    Arbitrary local files can also be placed into the
instance file system at creation time using the --file
<dst-path=src-path> option. You may store up to
@@ -709,9 +705,9 @@ Optional snapshot description. (Default=None)
You can attach block storage to instances from the
dashboard on the Volumes page. Click
the Edit Attachments action next to
- the volume you wish to attach.
+                    the volume you wish to attach. To perform this action from command line, run the
- following command:
+ following command:
$ nova volume-attach <server> <volume>
                    You can also specify block device mapping at instance
boot time through the nova command-line client, as
@@ -797,7 +793,7 @@ Optional snapshot description. (Default=None)
xlink:href="https://bugs.launchpad.net/nova/+bug/1163566"
>1163566
(https://bugs.launchpad.net/nova/+bug/1163566) you must
- specify an image when booting from a volume in Horizon,
+ specify an image when booting from a volume in Horizon,
                    even though this image is not used. To boot normally from an image and attach block storage,
map to a device other than vda.
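                    A hedged sketch of such a boot, using the --block-device-mapping syntax of the nova client of
                    this era (device=id:type:size:delete-on-terminate); the names and the volume UUID placeholder
                    are illustrative, and vdb is chosen per the note above about avoiding vda:
$ nova boot --image ubuntu-cloudimage --flavor 2 \
  --block-device-mapping vdb=<volume-uuid>:::0 myInstanceWithVolume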
diff --git a/doc/openstack-ops/preface_ops.xml b/doc/openstack-ops/preface_ops.xml
index 27478022..2b44eb07 100644
--- a/doc/openstack-ops/preface_ops.xml
+++ b/doc/openstack-ops/preface_ops.xml
@@ -25,7 +25,7 @@
computing platform that meets the needs of public and private cloud
providers regardless of size. OpenStack services control large pools
of compute, storage, and networking resources throughout a data
- center.
+ center.
Each service provides a REST API so that all these resources can
be managed through a dashboard that gives administrators control
while empowering users to provision resources through a web
@@ -71,7 +71,7 @@
Linux machines for networking. You must install and maintain a
MySQL database, and occasionally run SQL queries against it.
- One of the most complex aspects of an OpenStack cloud is the
+ One of the most complex aspects of an OpenStack cloud is the
networking configuration. You should be familiar with concepts such
as DHCP, Linux bridges, VLANs, and iptables. You must also have
access to a network hardware expert who can configure the switches
@@ -121,7 +121,7 @@
                    users, give them quotas to parcel out resources, and so on.
                    Chapter 10: User-facing Operations: This chapter moves along to
show you how to use OpenStack cloud resources and train your users
- as well.
+                    as well.
                    Chapter 11: Maintenance, Failures, and Debugging: This chapter
goes into the common failures the authors have seen while running
clouds in production, including troubleshooting.
@@ -132,10 +132,10 @@
debugging related services like DHCP and DNS.
Chapter 13: Logging and Monitoring: This chapter shows you where
                    OpenStack places logs and how best to read and manage logs for
- monitoring purposes.
+ monitoring purposes.
Chapter 14: Backup and Recovery: This chapter describes what you
need to back up within OpenStack as well as best practices for
- recovering backups.
+ recovering backups.
Chapter 15: Customize: When you need to get a specialized feature
into OpenStack, this chapter describes how to use DevStack to write
custom middleware or a custom scheduler to rebalance your
@@ -179,7 +179,7 @@
non-trivial OpenStack cloud. After you read this guide,
you'll know which questions to ask and how to organize
your compute, networking, storage resources, and the
- associated software packages.
+ associated software packages.
Perform the day-to-day tasks required to administer a
@@ -191,7 +191,7 @@
the Book Sprint
site. Your authors cobbled this book together in five
days during February 2013, fueled by caffeine and the best take-out
- food that Austin, Texas could offer.
+ food that Austin, Texas could offer.
On the first day we filled white boards with colorful sticky notes
to start to shape this nebulous book about how to architect and
operate clouds.
@@ -310,7 +310,7 @@
ops-guide tag to indicate that the
bug is in this guide. You can assign the bug to yourself
if you know how to fix it. Also, a member of the OpenStack
- doc-core team can triage the doc bug.
+ doc-core team can triage the doc bug.