<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE chapter [
<!-- Some useful entities borrowed from HTML -->
<!ENTITY ndash "&#x2013;">
<!ENTITY mdash "&#x2014;">
<!ENTITY hellip "&#x2026;">
<!ENTITY plusmn "&#xB1;">
]>
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="user_facing_operations">
<title>User-facing Operations</title>
<para>This guide is for OpenStack operators and does not seek to be an exhaustive reference for
users, but as an operator, it is important that you have a basic understanding of how to use
the cloud facilities. This chapter looks at OpenStack from a basic user perspective, which
helps you understand your users' needs and determine, when you get a trouble ticket, whether
it is a user issue or a service issue. The main concepts covered are images, flavors,
security groups, block storage, and instances.</para>
<section xml:id="user_facing_images">
<title>Images</title>
<?dbhtml stop-chunking?>
<para>OpenStack images can often be thought of as "virtual machine templates." Images can
also be standard installation media like ISO images. Essentially, they contain bootable
file systems that are used to launch instances.</para>
<section xml:id="add_images">
<title>Adding Images</title>
<para>Several pre-made images exist and can easily be imported into the Image Service. A
common image to add is the CirrOS image, which is very small and used for testing
purposes. To add this image, simply do:</para>
<screen><prompt>$</prompt> <userinput>wget http://download.cirros-cloud.net/0.3.1/cirros-0.3.1-x86_64-disk.img</userinput>
<prompt>$</prompt> <userinput>glance image-create --name='cirros image' --is-public=true \
--container-format=bare --disk-format=qcow2 &lt; cirros-0.3.1-x86_64-disk.img</userinput></screen>
<para>The <code>glance image-create</code> command
provides a large set of options for the image you
are creating. For example, the <code>min-disk</code> option is
useful for images that require root disks of a certain
size (for example, large Windows images). To view
these options, do:</para>
<screen><prompt>$</prompt> <userinput>glance help image-create</userinput></screen>
<para>The <code>location</code> option is important to
note. It does not copy the entire image into Glance,
but references the original location where the image
can be found. Upon launching an instance of that
image, Glance accesses the image from the location
specified.</para>
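<para>For example, the following sketch registers an image by
reference (the URL here is hypothetical):</para>
<screen><prompt>$</prompt> <userinput>glance image-create --name='remote image' --is-public=true \
--container-format=bare --disk-format=qcow2 \
--location http://example.com/images/my-image.qcow2</userinput></screen>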
<para>The <code>copy-from</code> option copies the image
from the location specified into the
<code>/var/lib/glance/images</code> directory. The
same is done when using STDIN redirection,
as shown in the example above.</para>
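<para>As a sketch, again with a hypothetical URL:</para>
<screen><prompt>$</prompt> <userinput>glance image-create --name='copied image' --is-public=true \
--container-format=bare --disk-format=qcow2 \
--copy-from http://example.com/images/my-image.qcow2</userinput></screen>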
<para>Run the following command to view the properties of
existing images:</para>
<screen><prompt>$</prompt> <userinput>glance details</userinput></screen>
</section>
<section xml:id="sharing_images">
<title>Sharing Images Between Projects</title>
<para>In a multi-tenant cloud environment, users will sometimes want
to share their personal images or snapshots with other projects.
This can be done on the command line with the <command>glance</command> tool by
the owner of the image.</para>
<para>To share an image or snapshot with another project, do the
following:</para>
<procedure>
<step>
<para>Obtain the UUID of the image:</para>
<screen><prompt>$</prompt> <userinput>glance image-list</userinput></screen>
</step>
<step>
<para>Obtain the UUID of the project with which you want to
share your image. Unfortunately, non-admin users are
unable to use the <command>keystone</command> command to do this. The
easiest solution is to obtain the UUID either from an
administrator of the cloud or from a user located in the
project.</para>
</step>
<step>
<para>Once you have both pieces of information, run the
glance command:</para>
<screen><prompt>$</prompt> <userinput>glance member-create &lt;image-uuid&gt; &lt;project-uuid&gt;</userinput></screen>
<para>For example:</para>
<screen><prompt>$</prompt> <userinput>glance member-create 733d1c44-a2ea-414b-aca7-69decf20d810 \
771ed149ef7e4b2b88665cc1c98f77ca</userinput></screen>
<para>Project 771ed149ef7e4b2b88665cc1c98f77ca
will now have access to image
733d1c44-a2ea-414b-aca7-69decf20d810.</para>
</step>
</procedure>
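<para>To verify that the member was added, the image owner can
list the image's members:</para>
<screen><prompt>$</prompt> <userinput>glance member-list --image-id &lt;image-uuid&gt;</userinput></screen>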
</section>
<section xml:id="delete_images">
<title>Deleting Images</title>
<para>To delete an image, just execute:</para>
<screen><prompt>$</prompt> <userinput>glance image-delete &lt;image uuid&gt;</userinput></screen>
<note>
<para>Deleting an image does not affect instances or
snapshots that were based off the image.</para>
</note>
</section>
<section xml:id="other_cli">
<title>Other CLI Options</title>
<para>A full set of options can be found using:</para>
<screen><prompt>$</prompt> <userinput>glance help</userinput></screen>
<para>or the <link
xlink:href="http://docs.openstack.org/cli/quick-start/content/glance-cli-reference.html"
>OpenStack Image Service</link> CLI Guide.
(http://docs.openstack.org/cli/quick-start/content/glance-cli-reference.html)</para>
</section>
<section xml:id="image_service_and_database">
<title>The Image Service and the Database</title>
<para>The only thing that Glance does not store in a
database is the image itself. The Glance database has
two main tables:</para>
<itemizedlist role="compact">
<listitem>
<para>images</para>
</listitem>
<listitem>
<para>image_properties</para>
</listitem>
</itemizedlist>
<para>Working directly with the database and SQL queries
can provide you with custom lists and reports of
Glance images. Technically, you can update properties
about images through the database, although this is
not generally recommended.</para>
</section>
<section xml:id="sample_image_database">
<title>Example Image Service Database Queries</title>
<para>One interesting example is querying the table of
images along with the owner of each image. While this can
be done by simply displaying the unique ID of the owner,
this example goes one step further and displays the
readable name of the owner:</para>
<screen><prompt>mysql&gt;</prompt> <userinput>select glance.images.id, glance.images.name,
keystone.tenant.name, is_public from glance.images
inner join keystone.tenant on glance.images.owner=keystone.tenant.id;</userinput></screen>
<para>Another example is displaying all properties for a
certain image:</para>
<screen><prompt>mysql&gt;</prompt> <userinput>select name, value from
image_properties where id = &lt;image_id&gt;;</userinput></screen>
</section>
</section>
<section xml:id="flavors">
<title>Flavors</title>
<para>Virtual hardware templates are called "flavors" in OpenStack, defining sizes for RAM,
disk, number of cores, and so on. The default install provides five flavors. These are
configurable by admin users (the rights may also be delegated to other users by
redefining the access controls for <code>compute_extension:flavormanage</code> in
<code>/etc/nova/policy.json</code> on the <code>nova-api</code> server). To get the
list of available flavors on your system, run:</para>
<screen><prompt>$</prompt> <userinput>nova flavor-list</userinput></screen>
<screen><computeroutput>+----+-----------+-----------+------+-----------+-------+-------------+
| ID | Name      | Memory_MB | Disk | Ephemeral | VCPUs | extra_specs |
+----+-----------+-----------+------+-----------+-------+-------------+
| 1  | m1.tiny   | 512       | 1    | 0         | 1     | {}          |
| 2  | m1.small  | 2048      | 10   | 20        | 1     | {}          |
| 3  | m1.medium | 4096      | 10   | 40        | 2     | {}          |
| 4  | m1.large  | 8192      | 10   | 80        | 4     | {}          |
| 5  | m1.xlarge | 16384     | 10   | 160       | 8     | {}          |
+----+-----------+-----------+------+-----------+-------+-------------+</computeroutput></screen>
<para>(Some columns have been omitted for brevity.)</para>
<para>The <code>nova flavor-create</code> command allows authorized users to create new flavors. Additional flavor manipulation commands can be shown with the command: <screen><prompt>$</prompt> <userinput>nova help | grep flavor</userinput></screen>
</para>
<para>Flavors define a number of parameters, resulting in the
user having a choice of what type of virtual machine to run - just
like they would have if they were purchasing a physical server.
The following table contains the elements that can be set. Note
in particular <literal>extra_specs</literal>, which can be used to
define free-form characteristics, giving a lot of flexibility beyond
just the size of RAM, CPU, and disk.</para>
<informaltable rules="all">
<col width="25%"/>
<col width="75%"/>
<tbody>
<tr>
<td>
<para>
<emphasis role="bold">Column</emphasis>
</para>
</td>
<td>
<para>
<emphasis role="bold">Description</emphasis>
</para>
</td>
</tr>
<tr>
<td>
<para>ID</para>
</td>
<td>
<para>A unique numeric id.</para>
</td>
</tr>
<tr>
<td>
<para>Name</para>
</td>
<td>
<para>A descriptive name. A name such as xx.size_name is conventional but not
required, though some third-party tools may rely on it.</para>
</td>
</tr>
<tr>
<td>
<para>Memory_MB</para>
</td>
<td>
<para>Virtual machine memory in megabytes.</para>
</td>
</tr>
<tr>
<td>
<para>Disk</para>
</td>
<td>
<para>Virtual root disk size in gigabytes. This is an ephemeral disk that the
base image is copied into. You don't use it when you boot from a
persistent volume. The "0" size is a special case that uses the native
base image size as the size of the ephemeral root volume.</para>
</td>
</tr>
<tr>
<td>
<para>Ephemeral</para>
</td>
<td>
<para>Specifies the size of a secondary ephemeral data disk. This is an
empty, unformatted disk and exists only for the life of the
instance.</para>
</td>
</tr>
<tr>
<td>
<para>Swap</para>
</td>
<td>
<para>Optional swap space allocation for the instance.</para>
</td>
</tr>
<tr>
<td>
<para>VCPUs</para>
</td>
<td>
<para>Number of virtual CPUs presented to the instance.</para>
</td>
</tr>
<tr>
<td>
<para>RXTX_Factor</para>
</td>
<td>
<para>Optional property that allows created servers to have a different bandwidth
cap from that defined in the network they are attached to. This factor
is multiplied by the rxtx_base property of the network. The default value is
1.0 (that is, the same as the attached network).</para>
</td>
</tr>
<tr>
<td>
<para>Is_Public</para>
</td>
<td>
<para>Boolean value, whether flavor is available to all users or private to
the tenant it was created in. Defaults to True.</para>
</td>
</tr>
<tr>
<td>
<para>extra_specs</para>
</td>
<td>
<para>Additional optional restrictions on which compute nodes the flavor can
run on. This is implemented as key/value pairs that must match against
the corresponding key/value pairs on compute nodes. Can be used to
implement things like special resources (such as flavors that can only
run on compute nodes with GPU hardware); see the example following
this table.</para>
</td>
</tr>
</tbody>
</informaltable>
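<para>Extra specs are set on a flavor with the <code>nova
flavor-key</code> command. As a sketch, assuming a hypothetical
flavor named m1.gpu and a scheduler configured to match the key
against compute node capabilities:</para>
<screen><prompt>$</prompt> <userinput>nova flavor-key m1.gpu set gpu=true</userinput></screen>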
<section xml:id="private-flavors">
<title>Private Flavors</title>
<para>A user might need a custom flavor that is uniquely tuned for a
project they are working on. For example, the user might require
128 GB of memory. If you create a new flavor as described above,
the user would have access to the custom flavor, but so would all
other tenants in your cloud. Sometimes this sharing isn't desirable. In
this scenario, allowing all users to have access to a flavor
with 128 GB of memory might cause your cloud to reach full
capacity very quickly. To prevent this, you can restrict access
to the custom flavor using the <command>nova</command> command:</para>
<screen><prompt>$</prompt> <userinput>nova flavor-access-add &lt;flavor-id&gt; &lt;project-id&gt;</userinput></screen>
<para>To view a flavor's access list, do the following:</para>
<screen><prompt>$</prompt> <userinput>nova flavor-access-list &lt;flavor-id&gt;</userinput></screen>
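<para>As a complete sketch (the flavor name and ID here are
illustrative), the following creates the 128 GB flavor as
private from the start and then grants a project access to
it:</para>
<screen><prompt>$</prompt> <userinput>nova flavor-create --is-public false m1.megamem 100 131072 80 16</userinput>
<prompt>$</prompt> <userinput>nova flavor-access-add 100 &lt;project-id&gt;</userinput></screen>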
<note>
<title>Best practices</title>
<para>Once access to a flavor has been restricted, no other
projects besides the ones granted explicit access will be
able to see the flavor. This includes the admin project.
Make sure to add the admin project in addition to the
original project.</para>
<para>It's also helpful to allocate a specific numeric range for
custom and private flavors. On UNIX-based systems,
non-system accounts usually have a UID starting at 500. A
similar approach can be taken with custom flavors. This
helps you easily identify what flavors are custom, private,
and public for the entire cloud.</para>
</note>
</section>
<simplesect>
<title>How do I modify an existing flavor?</title>
<para>The OpenStack dashboard simulates the ability to modify a
flavor by deleting an existing flavor and creating a new one
with the same name.</para>
</simplesect>
</section>
<?hard-pagebreak?>
<section xml:id="security_groups">
<?dbhtml stop-chunking?>
<title>Security groups</title>
<para>A common new-user issue with OpenStack is failing to set an appropriate security group
when launching an instance. As a result, the user is unable to contact the instance on
the network.</para>
<para>Security groups are sets of IP filter rules that are
applied to an instance's networking. They are project
specific, and project members can edit the default rules
for their group and add new rule sets. All projects have
a "default" security group, which is applied to instances
that have no other security group defined. Unless changed,
this security group denies all incoming traffic.</para>
<section xml:id="general-security-group-config">
<title>General Security Groups Configuration</title>
<para>The <code>nova.conf</code> option
<code>allow_same_net_traffic</code> (which defaults to
true) globally controls whether the rules apply to hosts
which share a network. When set to true, hosts on the same
subnet are not filtered and are allowed to pass all types
of traffic between them. On a flat network, this allows
all instances from all projects unfiltered communication.
With VLAN networking, this allows access between instances
within the same project. If
<code>allow_same_net_traffic</code> is set to false,
security groups are enforced for all connections. In this
case, it is possible for projects to simulate
<code>allow_same_net_traffic</code> by configuring
their default security group to allow all traffic from
their subnet.</para>
<tip><para>As noted in the previous chapter, the number of rules per
security group is controlled by the
<code>quota_security_group_rules</code> quota, and the number of allowed
security groups per project is controlled by the
<code>quota_security_groups</code> quota.</para></tip>
</section>
<section xml:id="end-user-config-sec-group">
<title>End User Configuration of Security Groups</title>
<para>Security groups for the current project can be found on
the OpenStack dashboard under "Access &amp; Security". To see
details of an existing group, select the "edit" action for
that security group. Existing groups can also be modified
from this "edit" interface. There is a "Create
Security Group" button on the main "Access &amp; Security"
page for creating new groups. We discuss the terms used in
these fields when we explain the command line
equivalents.</para>
<para>From the command line you can get a list of security
groups for the project you're acting in using the nova
command:</para>
<screen><prompt>$</prompt> <userinput>nova secgroup-list</userinput>
<computeroutput>+---------+-------------+
| Name    | Description |
+---------+-------------+
| default | default     |
| open    | all ports   |
+---------+-------------+</computeroutput></screen>
<para>To view the details of the "open" security group:</para>
<screen><prompt>$</prompt> <userinput>nova secgroup-list-rules open</userinput>
<computeroutput>+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| icmp        | -1        | 255     | 0.0.0.0/0 |              |
| tcp         | 1         | 65535   | 0.0.0.0/0 |              |
| udp         | 1         | 65535   | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+</computeroutput></screen>
<para>These rules are all "allow" type rules as the default is
deny. The first column is the IP protocol (one of icmp,
tcp, or udp) the second and third columns specify the
affected port range. The fourth column specifies the IP
range in CIDR format. This example shows the full port
range for all protocols allowed from all IPs.</para>
<para>When adding a new security group you should pick a
descriptive but brief name. This name shows up in brief
descriptions of the instances that use it where the longer
description field often does not. Seeing that an instance
is using security group "http" is much easier to
understand than "bobs_group" or "secgrp1".</para>
<para>As an example, let's create a security group that allows
web traffic anywhere on the internet. We'll call this
"global_http" which is clear and reasonably concise,
encapsulating what is allowed and from where. From the
command line:</para>
<screen><prompt>$</prompt> <userinput>nova secgroup-create global_http "allow web traffic from the internet"</userinput>
<computeroutput>+-------------+-------------------------------------+
| Name        | Description                         |
+-------------+-------------------------------------+
| global_http | allow web traffic from the internet |
+-------------+-------------------------------------+</computeroutput></screen>
<para>This creates the empty security group. To make it do what
we want, we need to add some rules.</para>
<screen><prompt>$</prompt> <userinput>nova secgroup-add-rule &lt;secgroup&gt; &lt;ip-proto&gt; &lt;from-port&gt; &lt;to-port&gt; &lt;cidr&gt;</userinput>
<prompt>$</prompt> <userinput>nova secgroup-add-rule global_http tcp 80 80 0.0.0.0/0</userinput>
<computeroutput>+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| tcp         | 80        | 80      | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+</computeroutput></screen>
<para>Note that the arguments are positional, and the
"from-port" and "to-port" arguments specify the range of local
ports that connections are allowed to access, not the source and
destination ports of the connection. More complex rule
sets can be built up through multiple invocations of nova
secgroup-add-rule. For example, if you want to pass both
http and https traffic:</para>
<screen><prompt>$</prompt> <userinput>nova secgroup-add-rule global_http tcp 443 443 0.0.0.0/0</userinput>
<computeroutput>+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| tcp         | 443       | 443     | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+</computeroutput></screen>
<para>Despite only outputting the newly added rule, this
operation is additive:</para>
<screen><prompt>$</prompt> <userinput>nova secgroup-list-rules global_http</userinput>
<computeroutput>+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| tcp         | 80        | 80      | 0.0.0.0/0 |              |
| tcp         | 443       | 443     | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+</computeroutput></screen>
<para>The inverse operation is called secgroup-delete-rule,
using the same format. Whole security groups can be
removed with secgroup-delete.</para>
<para>To create security group rules for a cluster of
instances, use SourceGroups.</para>
<para>SourceGroups are a special dynamic way of defining the CIDR of allowed sources. The
user specifies a SourceGroup (security group name), and then all of the user's other instances using
the specified SourceGroup are selected dynamically. This dynamic
selection alleviates the need for
individual rules to allow each new member of the cluster.</para>
<para>Example usage: <code>nova secgroup-add-group-rule &lt;secgroup&gt; &lt;source-group&gt; &lt;ip-proto&gt; &lt;from-port&gt; &lt;to-port&gt;</code></para>
<screen><prompt>$</prompt> <userinput>nova secgroup-add-group-rule cluster global-http tcp 22 22</userinput></screen>
<para>The "cluster" rule allows ssh access from any other
instance that uses the "global-http" group.</para>
</section>
</section>
<?hard-pagebreak?>
<section xml:id="user_facing_block_storage">
<?dbhtml stop-chunking?>
<title>Block Storage</title>
<para>OpenStack volumes are persistent block storage devices
that may be attached to and detached from instances, but
can only be attached to one instance at a time. Similar to an
external hard drive, they do not provide shared storage in
the way a network file system or object store does. It is
left to the operating system in the instance to put a file
system on the block device and mount it, or not.</para>
<para>Similar to other removable disk technology, it is
important that the operating system is not trying to make use
of the disk before removing it. On Linux instances, this
typically involves unmounting any file systems mounted
from the volume. The OpenStack volume service cannot tell
if it is safe to remove volumes from an instance, so it
does what it is told. If a user tells the volume service
to detach a volume from an instance while it is being
written to, you can expect some level of file system
corruption as well as faults from whatever process within
the instance was using the device.</para>
<para>There is nothing OpenStack-specific in being aware of
the steps needed from within the instance operating
system to access block devices, potentially formatting
them for first use and being cautious when removing
devices. What is specific is how to create new volumes and
attach and detach them from instances. These operations
can all be done from the "Volumes" page of the dashboard
or using the cinder command line client.</para>
<para>To add new volumes, you only need a name and a volume
size in gigabytes. Either enter these into the "Create
Volume" web form or use the command line:</para>
<screen><prompt>$</prompt> <userinput>cinder create --display-name test-volume 10</userinput></screen>
<para>This creates a 10 GB volume named "test-volume." To list
existing volumes and the instances they are connected to
if any:</para>
<screen><prompt>$</prompt> <userinput>cinder list</userinput></screen>
<screen><computeroutput>+------------+-----------+--------------+------+-------------+-------------+
| ID         | Status    | Display Name | Size | Volume Type | Attached to |
+------------+-----------+--------------+------+-------------+-------------+
| 0821...19f | available | test-volume  | 10   | None        |             |
+------------+-----------+--------------+------+-------------+-------------+</computeroutput></screen>
<para>The Block Storage service also allows for creating
snapshots of volumes. Remember that this is a block-level
snapshot that is crash consistent, so it is best if the
volume is not connected to an instance when the snapshot
is taken, and second best if the volume is not in use on
the instance it is attached to. If the volume is under
heavy use, the snapshot may have an inconsistent file
system. In fact, by default, the volume service does not
take a snapshot of a volume that is attached to an instance,
though it can be forced. To take a volume snapshot, either
select "Create Snapshot" from the actions column next to
the volume name on the dashboard "Volumes" page, or use the
command line:</para>
<screen><computeroutput>usage: cinder snapshot-create [--force &lt;True|False&gt;]
                              [--display-name &lt;display-name&gt;]
                              [--display-description &lt;display-description&gt;]
                              &lt;volume-id&gt;

Add a new snapshot.

Positional arguments:
  &lt;volume-id&gt;           ID of the volume to snapshot

Optional arguments:
  --force &lt;True|False&gt;  Optional flag to indicate whether to snapshot a volume
                        even if its attached to an instance. (Default=False)
  --display-name &lt;display-name&gt;
                        Optional snapshot name. (Default=None)
  --display-description &lt;display-description&gt;
                        Optional snapshot description. (Default=None)</computeroutput></screen>
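<para>For example, to snapshot the test-volume created earlier
(the snapshot name is illustrative; substitute the volume ID
shown by <code>cinder list</code>):</para>
<screen><prompt>$</prompt> <userinput>cinder snapshot-create --display-name test-snapshot &lt;volume-id&gt;</userinput></screen>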
<section xml:id="block_storage_creation_failures">
<title>Block Storage Creation Failures</title>
<para>If a user tries to create a volume and it
immediately goes into an error state, the best way to
troubleshoot is to grep the Cinder log files for the
volume's UUID. First try the log files on the cloud
controller, and then try the storage node where the
volume was attempted to be created:</para>
<screen><prompt>#</prompt> <userinput>grep 903b85d0-bacc-4855-a261-10843fc2d65b /var/log/cinder/*.log</userinput></screen>
</section>
</section>
<section xml:id="instances">
<?dbhtml stop-chunking?>
<title>Instances</title>
<para>Instances are the running virtual machines within an
OpenStack cloud. This section deals with how to work with
them and their underlying images, their network properties
and how they are represented in the database.</para>
<section xml:id="start_instances">
<title>Starting Instances</title>
<para>To launch an instance, you need to select an image, a
flavor, and a name. The name needn't be unique, but your
life is simpler if it is, because many tools will
accept the name in place of the UUID as long as the name is
unique. This can be done from the dashboard, either
from the "Launch Instance" button on the "Instances"
page or by selecting the "Launch" action next to an
image or snapshot on the "Images &amp; Snapshots"
page.</para>
<para>On the command line:</para>
<screen><prompt>$</prompt> <userinput>nova boot --flavor &lt;flavor&gt; --image &lt;image&gt; &lt;name&gt;</userinput></screen>
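<para>For example, using the CirrOS image added earlier (the
flavor and instance names here are illustrative):</para>
<screen><prompt>$</prompt> <userinput>nova boot --flavor m1.tiny --image 'cirros image' test-instance</userinput></screen>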
<para>There are a number of optional items that can be
specified. You should read the rest of this section on
instances before trying to start one, but this is the
base command that later details are layered
upon.</para>
<para>To delete instances from the dashboard, select the
"Terminate instance" action next to the instance on
the "Instances" page. From the command line:</para>
<screen><prompt>$</prompt> <userinput>nova delete &lt;instance-uuid&gt;</userinput></screen>
<para>It is important to note that powering off an
instance does not terminate it in the OpenStack
sense.</para>
</section>
<section xml:id="instance_boot_failures">
<title>Instance Boot Failures</title>
<para>If an instance fails to start and immediately moves
to "Error" state, there are a few different ways to
track down what has gone wrong. Some of these can be
done with normal user access, while others require
access to your log server or compute nodes.</para>
<para>The simplest reasons for instances to fail to launch are
quota violations or the scheduler being unable to find
a suitable compute node on which to run the instance.
In these cases, the error is apparent in the output of
<code>nova show</code> on the faulted
instance.</para>
<screen><prompt>$</prompt> <userinput>nova show test-instance</userinput></screen>
<screen><?db-font-size 55%?>
<computeroutput>+------------------------+---------------------------------------------------------+
| Property               | Value                                                   |
+------------------------+---------------------------------------------------------+
| OS-DCF:diskConfig      | MANUAL                                                  |
| OS-EXT-STS:power_state | 0                                                       |
| OS-EXT-STS:task_state  | None                                                    |
| OS-EXT-STS:vm_state    | error                                                   |
| accessIPv4             |                                                         |
| accessIPv6             |                                                         |
| config_drive           |                                                         |
| created                | 2013-03-01T19:28:24Z                                    |
| fault                  | {u'message': u'NoValidHost', u'code': 500, u'created... |
| flavor                 | xxl.super (11)                                          |
| hostId                 |                                                         |
| id                     | 940f3b2f-bd74-45ad-bee7-eb0a7318aa84                    |
| image                  | quantal-test (65b4f432-7375-42b6-a9b8-7f654a1e676e)     |
| key_name               | None                                                    |
| metadata               | {}                                                      |
| name                   | test-instance                                           |
| security_groups        | [{u'name': u'default'}]                                 |
| status                 | ERROR                                                   |
| tenant_id              | 98333a1a28e746fa8c629c83a818ad57                        |
| updated                | 2013-03-01T19:28:26Z                                    |
| user_id                | a1ef823458d24a68955fec6f3d390019                        |
+------------------------+---------------------------------------------------------+</computeroutput>
</screen>
<para>In this case looking at the "fault" message shows
NoValidHost indicating the scheduler was unable to
match the instance requirements.</para>
<para>If <code>nova show</code> does not sufficiently
explain the failure, searching for the instance UUID in
the <code>nova-compute.log</code> on the compute node
it was scheduled on, or the
<code>nova-scheduler.log</code> on your scheduler
hosts, is a good place to start looking for lower-level
problems.</para>
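<para>For example, using the UUID from the <code>nova show</code>
output above (log locations may vary by distribution):</para>
<screen><prompt>#</prompt> <userinput>grep 940f3b2f-bd74-45ad-bee7-eb0a7318aa84 /var/log/nova/nova-compute.log</userinput></screen>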
<para>Using <code>nova show</code> as an admin user will
show the compute node the instance was scheduled on as
<code>hostId</code>. If the instance failed during
scheduling, this field is blank.</para>
</section>
<section xml:id="instance_specific_data">
<title>Using Instance-specific Data</title>
<para>There are two main types of instance-specific data: metadata and user data.</para>
<section xml:id="instance_metadata">
<title>Instance Metadata</title>
<para>For Compute, instance metadata is a collection of
key/value pairs associated with an instance. Compute
reads and writes these key/value pairs at any time
during the instance lifetime, from inside and outside
the instance, when the end user uses the Compute API
to do so. However, you cannot query the
instance-associated key/value pairs with the metadata
service that is compatible with the Amazon EC2 metadata
service.</para>
<para>As an example of instance metadata, users can
generate and register ssh keys using the nova
command:</para>
<screen><prompt>$</prompt> <userinput>nova keypair-add mykey &gt; mykey.pem</userinput></screen>
<para>This creates a key named <userinput>mykey</userinput>, which you can
associate with instances. The file <filename>mykey.pem</filename> is the
private key, which should be saved to a secure location
because it allows root access to instances the mykey key is
associated with.</para>
<para>Use this command to register an existing key
with OpenStack:</para>
<screen><prompt>$</prompt> <userinput>nova keypair-add --pub-key mykey.pub mykey</userinput></screen>
<note><para>You must have the matching private key to access
instances associated with this key.</para></note>
<para>To associate a key with an instance on boot, add
<code>--key_name mykey</code> to your command line.
For example:</para>
<screen><prompt>$</prompt> <userinput>nova boot --image ubuntu-cloudimage --flavor 2 --key_name mykey myimage</userinput></screen>
<para>When booting a server, you can also add
arbitrary metadata, so that you can more easily
identify it amongst other running instances. Use
the <code>--meta</code> option with a key=value
pair, where you can make up the string for both
the key and the value. For example, you could add
a description and also the creator of the
server:</para>
<screen><prompt>$</prompt> <userinput>nova boot --image=test-image --flavor=1 \
--meta description='Small test image' smallimage</userinput></screen>
<para>When viewing the server information, you can see the
metadata included on the metadata line:</para>
<screen><prompt>$</prompt> <userinput>nova show smallimage</userinput></screen>
<screen><computeroutput>+------------------------+-----------------------------------------+
| Property               | Value                                   |
+------------------------+-----------------------------------------+
| OS-DCF:diskConfig      | MANUAL                                  |
| OS-EXT-STS:power_state | 1                                       |
| OS-EXT-STS:task_state  | None                                    |
| OS-EXT-STS:vm_state    | active                                  |
| accessIPv4             |                                         |
| accessIPv6             |                                         |
| config_drive           |                                         |
| created                | 2012-05-16T20:48:23Z                    |
| flavor                 | m1.small                                |
| hostId                 | de0...487                               |
| id                     | 8ec...f915                              |
| image                  | natty-image                             |
| key_name               |                                         |
| metadata               | {u'description': u'Small test image'}   |
| name                   | smallimage                              |
| private network        | 172.16.101.11                           |
| progress               | 0                                       |
| public network         | 10.4.113.11                             |
| status                 | ACTIVE                                  |
| tenant_id              | e83...482                               |
| updated                | 2012-05-16T20:48:35Z                    |
| user_id                | de3...0a9                               |
+------------------------+-----------------------------------------+</computeroutput></screen>
</section>
<section xml:id="instance_user_data">
<title>Instance User Data</title>
<para>The <code>user-data</code> key is a special key in the
metadata service which holds a file that cloud-aware
applications within the guest instance can access.
For example, <link
xlink:href="https://help.ubuntu.com/community/CloudInit"
>cloudinit</link> (https://help.ubuntu.com/community/CloudInit)
is an open source package, originally from Ubuntu but available in most
distributions, that handles early initialization of a cloud
instance and makes use of this user data.</para>
<para>This user data can be put in a file on your local
system and then passed in at instance creation with
the flag <code>--user-data &lt;user-data-file&gt;</code>.
For example:</para>
<screen><prompt>$</prompt> <userinput>nova boot --image ubuntu-cloudimage --flavor 1 --user-data mydata.file</userinput></screen>
<para>To understand the difference between user data and
metadata, realize that user data is created before an
instance is started. User data is accessible
from within the instance when it is running. User data
can be used to store configuration, a script,
or anything the tenant wants.</para>
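<para>As a sketch, assuming the image runs cloudinit,
<filename>mydata.file</filename> could contain a cloud-config
document such as the following (the package name is
illustrative):</para>
<screen><computeroutput>#cloud-config
packages:
  - apache2</computeroutput></screen>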
</section>
<section xml:id="file_injection">
<title>File Injection</title>
<para>Arbitrary local files can also be placed into the
instance file system at creation time using the <code>--file
&lt;dst-path=src-path&gt;</code> option. You may store up to
5 files. For example, if you have a special
authorized_keys file named special_authorized_keysfile
that, for some reason, you want to put on the instance instead
of using the regular ssh key injection, you can
use the following command:</para>
<screen><prompt>$</prompt> <userinput>nova boot --image ubuntu-cloudimage --flavor 1 \
--file /root/.ssh/authorized_keys=special_authorized_keysfile</userinput></screen>
</section>
</section>
</section>
<section xml:id="associate_security_groups">
<title>Associating Security Groups</title>
<para>Security groups as discussed earlier are typically
required to allow network traffic to an instance, unless
the default security group for a project has been modified
to be more permissive.</para>
<para>Adding security groups is typically done on instance
boot. When launching from the dashboard, this is done on the
"Access &amp; Security" tab of the "Launch Instance"
dialog. When launching from the command line, append
<code>--security-groups</code> with a comma-separated list of
security groups.</para>
<para>It is also possible to add and remove security groups
when an instance is running. Currently this is only
available through the command line tools.</para>
<screen><prompt>$</prompt> <userinput>nova add-secgroup &lt;server&gt; &lt;securitygroup&gt;</userinput></screen>
<screen><prompt>$</prompt> <userinput>nova remove-secgroup &lt;server&gt; &lt;securitygroup&gt;</userinput></screen>
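<para>For example, to add the global_http group created earlier
to a running instance (the instance name is illustrative):</para>
<screen><prompt>$</prompt> <userinput>nova add-secgroup test-instance global_http</userinput></screen>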
</section>
<section xml:id="floating_ips">
<title>Floating IPs</title>
<para>Where Floating IPs are configured in a deployment,
each project will have a limited number of Floating IPs
controlled by a quota. However, these need to
be allocated to the project from the central pool prior
to their use - usually by the administrator of the project.
To allocate a Floating IP to a project there is an
<guibutton>Allocate IP to Project</guibutton> button on the
"Access &amp; Security" page of the dashboard. The command
line can also be used:</para>
<screen><prompt>$</prompt> <userinput>nova floating-ip-create</userinput></screen>
<para>Once allocated, a Floating IP can be assigned to running
instances from the dashboard either by selecting the
<guibutton>Associate Floating IP</guibutton> from the actions drop down next to
the IP on the <guilabel>Access &amp; Security</guilabel> page or the same
action next to the instance you wish to associate it with
on the <guilabel>Instances</guilabel> page. The inverse action,
<guibutton>Dissociate Floating IP</guibutton>, is only available from the
<guilabel>Access &amp;
Security</guilabel> page and not from the
<guilabel>Instances</guilabel> page.</para>
<para>To associate or disassociate a Floating IP with a server
from the command line, use the following commands:
</para>
<screen><prompt>$</prompt> <userinput>nova add-floating-ip &lt;server&gt; &lt;address&gt;</userinput></screen>
<screen><prompt>$</prompt> <userinput>nova remove-floating-ip &lt;server&gt; &lt;address&gt;</userinput></screen>
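<para>For example, assuming the allocation step above returned
the address 203.0.113.10 (an illustrative address), and using the
instance name from earlier examples:</para>
<screen><prompt>$</prompt> <userinput>nova add-floating-ip test-instance 203.0.113.10</userinput></screen>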
</section>
<section xml:id="attach_block_storage">
<title>Attaching Block Storage</title>
<para>You can attach block storage to instances from the dashboard on
the <guilabel>Volumes</guilabel> page. Click the <guibutton>Edit
Attachments</guibutton> action next to the volume you wish to
attach.</para>
<para>To perform this action from command line, run the following
command:</para>
<screen><prompt>$</prompt> <userinput>nova volume-attach &lt;server&gt; &lt;volume&gt; &lt;device&gt;</userinput></screen>
<para>You can also specify block device mapping at instance boot time
through the nova command line client, as follows:</para>
<screen><userinput>--block-device-mapping &lt;dev-name=mapping&gt;</userinput></screen>
<para>The block device mapping format is
<code>&lt;dev-name&gt;=&lt;id&gt;:&lt;type&gt;:&lt;size(GB)&gt;:&lt;delete-on-terminate&gt;</code>,
where:</para>
<variablelist>
<varlistentry>
<term>dev-name</term>
<listitem>
<para>A device name where the volume is attached in the
system at <code>/dev/<replaceable>dev_name</replaceable>
</code>.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>id</term>
<listitem>
<para>The ID of the volume to boot from, as shown in the
output of nova volume-list.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>type</term>
<listitem>
<para>Either <literal
>snap</literal>, which means that the volume was
created from a snapshot, or anything other than <literal
>snap</literal> (a blank string is valid). In the
example below, the volume was not created from a
snapshot, so we leave this field
blank.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>size (GB)</term>
<listitem>
<para>The size of the volume, in GB. It is safe to leave
this blank and have the Compute service infer the
size.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>delete-on-terminate</term>
<listitem>
<para>A boolean to indicate whether the volume should be
deleted when the instance is terminated. True can be
specified as <literal>True</literal> or
<literal>1</literal>. False can be specified as
<literal>False</literal> or <literal>0</literal>.</para>
</listitem>
</varlistentry>
</variablelist>
<para>The following command will boot a new instance and
attach a volume at the same time. The volume with ID 13 will
be attached as <code>/dev/vdc</code>, is not a snapshot,
does not specify a size, and will not be deleted when the
instance is terminated:</para>
<screen><prompt>$</prompt> <userinput>nova boot --image 4042220e-4f5e-4398-9054-39fbd75a5dd7 --flavor 2 \
--key-name mykey --block-device-mapping vdc=13:::0 boot-with-vol-test</userinput></screen>
<para>If you have previously prepared the block storage with a
bootable file system image, it is even possible to boot
from persistent block storage. The following command
boots an image from the specified volume. It is
similar to the previous command, but the image is
omitted and the volume is now attached as
<code>/dev/vda</code>:</para>
<screen><prompt>$</prompt> <userinput>nova boot --flavor 2 --key-name mykey \
--block-device-mapping vda=13:::0 boot-from-vol-test</userinput></screen>
<para>Read more detailed instructions for launching an instance from a
bootable volume in the <link
xlink:href="http://docs.openstack.org/user-guide/content/boot_from_volume.html"
>OpenStack End User Guide</link>.</para>
<para>To boot normally from an image and attach block storage, map to a
device other than vda. You can find instructions for launching an
instance, attaching a volume to the instance, and copying the image
to the attached volume in the <link xlink:href="http://docs.openstack.org/user-guide/content/dashboard_launch_instances_from_volume.html">OpenStack End User Guide</link>.</para>
</section>
<section xml:id="snapshots">
<?dbhtml stop-chunking?>
<title>Taking Snapshots</title>
<para>The OpenStack snapshot mechanism allows you to create new
images from running instances. This is very convenient
for upgrading base images or taking a published image and
customizing it for local use. To snapshot a running instance
to an image using the CLI:</para>
<screen><prompt>$</prompt> <userinput>nova image-create &lt;instance name or uuid&gt; &lt;name of new image&gt;</userinput></screen>
<para>The dashboard interface for snapshots can be confusing
because the Images &amp; Snapshots page splits content up
into:</para>
<itemizedlist role="compact">
<listitem>
<para>Images</para>
</listitem>
<listitem>
<para>Instance snapshots</para>
</listitem>
<listitem>
<para>Volume snapshots</para>
</listitem>
</itemizedlist>
<para>However, an instance snapshot <emphasis>is</emphasis> an
image. The only difference between an image that you
upload directly to glance and an image you create by
snapshot is that an image created by snapshot has
additional properties in the glance database. These
properties are found in the image_properties table, and
include:</para>
<informaltable rules="all">
<thead>
<tr>
<th>name</th>
<th>value</th>
</tr>
</thead>
<tbody>
<tr>
<td><para>image_type</para></td>
<td><para>snapshot</para></td>
</tr>
<tr>
<td><para>instance_uuid</para></td>
<td><para>&lt;uuid of instance that was
snapshotted&gt;</para></td>
</tr>
<tr>
<td><para>base_image_ref</para></td>
<td><para>&lt;uuid of original image of
instance that was
snapshotted&gt;</para></td>
</tr>
<tr>
<td><para>image_location</para></td>
<td><para>snapshot</para></td>
</tr>
</tbody>
</informaltable>
<section xml:id="live-snapshots">
<title>Live snapshots</title>
<para>Live snapshotting is a feature that allows users to snapshot running virtual
machines without pausing them. These snapshots are simply disk-only snapshots.
Snapshotting an instance can now be performed with no downtime (assuming QEMU 1.3+
and libvirt 1.0+ are used).</para>
<sidebar>
<title>Ensuring snapshots are consistent</title>
<para>The following section is from Sébastien Han's <link
xlink:href="http://www.sebastien-han.fr/blog/2012/12/10/openstack-perform-consistent-snapshots/"
>OpenStack: Perform Consistent Snapshots blog
entry</link>
(http://www.sebastien-han.fr/blog/2012/12/10/openstack-perform-consistent-snapshots/).</para>
<para>A snapshot captures the state of the file system,
but not the state of the memory. Therefore, to ensure
your snapshot contains the data that you want, before
your snapshot you need to ensure that:</para>
<itemizedlist role="compact">
<listitem>
<para>Running programs have written their contents
to disk</para>
</listitem>
<listitem>
<para>The file system does not have any "dirty"
buffers: where programs have issued the
command to write to disk, but the operating
system has not yet done the write</para>
</listitem>
</itemizedlist>
<para>To ensure that important services have written their
contents to disk (such as databases), we recommend
you read the documentation for those applications to
determine what commands to issue to have them sync
their contents to disk. If you are unsure how to do
this, the safest approach is to simply stop these
running services normally.</para>
<para>To deal with the "dirty" buffer issue, we recommend
using the sync command before snapshotting:</para>
<screen><prompt>#</prompt> <userinput>sync</userinput></screen>
<para>Running <code>sync</code> writes dirty buffers
(buffered blocks that have been modified but not
yet written to disk) out to disk.</para>
<para>Just running <code>sync</code> is not enough to
ensure the file system is consistent. We recommend you
use the <code>fsfreeze</code> tool, which halts new
access to the file system and creates a stable image on
disk that is suitable for snapshotting. fsfreeze
supports several file systems, including ext3, ext4,
and XFS. If your virtual machine instance is running
on Ubuntu, install the util-linux package to get
fsfreeze:</para>
<screen><prompt>#</prompt> <userinput>apt-get install util-linux</userinput></screen>
<para>If your operating system doesn't have a version of
fsfreeze available, you can use xfs_freeze instead,
which is available on Ubuntu in the xfsprogs package.
Despite the "xfs" in the name, xfs_freeze also works
on ext3 and ext4 if you are using a Linux kernel
version 2.6.29 or greater, since it works at the
virtual file system (VFS) level starting at 2.6.29.
xfs_freeze supports the same command-line arguments as
fsfreeze.</para>
<para>Consider the example where you want to take a
snapshot of a persistent block storage volume,
detected by the guest operating system as /dev/vdb and
mounted on /mnt. The fsfreeze command accepts two
arguments:</para>
<itemizedlist role="compact">
<listitem>
<para>-f: freeze the system</para>
</listitem>
<listitem>
<para>-u: thaw (un-freeze) the system</para>
</listitem>
</itemizedlist>
<para>To freeze the volume in preparation for
snapshotting, you would do, as root, inside of the
instance:</para>
<screen><prompt>#</prompt> <userinput>fsfreeze -f /mnt</userinput></screen>
<para>You <emphasis role="bold">must mount the file
system</emphasis> before you run the
<command>fsfreeze</command> command.</para>
<para>When the "fsfreeze -f" command is issued, all
ongoing transactions in the file system are allowed to
complete, new write system calls are halted, and other
calls which modify the file system are halted. Most
importantly, all dirty data, metadata, and log
information are written to disk.</para>
<para>Once the volume has been frozen, do not attempt to
read from or write to the volume, as these operations
hang. The operating system stops every I/O operation,
and any I/O attempts are delayed until the file system
has been unfrozen.</para>
<para>Once you have issued the fsfreeze command, it is
safe to perform the snapshot. For example, if your
instance was named mon-instance and you wanted to
snapshot it to an image named mon-snapshot, you could
now run the following:</para>
<screen><prompt>$</prompt> <userinput>nova image-create mon-instance mon-snapshot</userinput></screen>
<para>When the snapshot is done, you can thaw the file
system with the following command, as root, inside of
the instance:</para>
<screen><prompt>#</prompt> <userinput>fsfreeze -u /mnt</userinput></screen>
<para>If you want to back up the root file system, you
can't simply run the above command because it will
freeze the prompt. Instead, run the following
one-liner, as root, inside of the instance:</para>
<screen><prompt>#</prompt> <userinput>fsfreeze -f / &amp;&amp; sleep 30 &amp;&amp; fsfreeze -u /</userinput></screen>
</sidebar>
</section>
</section>
<section xml:id="database_instances">
<?dbhtml stop-chunking?>
<title>Instances in the Database</title>
<para>While instance information is stored in a number of
database tables, the table that operators are most likely to
need to look at in relation to user instances is the
"instances" table.</para>
<para>The instances table carries most of the information
related to both running and deleted instances. It has a
bewildering array of fields; for an exhaustive list, look
at the database. These are the most useful fields for
operators looking to form queries:</para>
<itemizedlist>
<listitem><para>The "deleted" field is set to "1" if the instance has
been deleted and NULL if it has not been deleted this
important for excluding deleted instances from your
queries.</para></listitem>
<listitem><para>The "uuid" field is the UUID of the instance and is used
through out other tables in the database as a foreign key.
This id is also reported in logs, the dashboard and
command line tools to uniquely identify an
instance.</para></listitem>
<listitem><para>A collection of foreign keys are available to find
relations to the instance. The most useful of these are
"user_id" and "project_id", the UUIDs of the user who
launched the instance and the project it was launched
in.</para></listitem>
<listitem><para>The "host" field tells which compute node is hosting the
instance.</para></listitem>
<listitem><para>The "hostname" field holds the name of the instance when
it is launched. The "display-name" is initially the same
as hostname but can be reset using the nova rename
command.</para></listitem>
</itemizedlist>
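<para>For example, a sketch (assuming the Nova database is named
<code>nova</code> and following the "deleted" convention described
above) that counts non-deleted instances per project:</para>
<screen><prompt>mysql&gt;</prompt> <userinput>select project_id, count(*) from nova.instances
where deleted is NULL group by project_id;</userinput></screen>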
<para>A number of time-related fields are useful for tracking
when state changes happened on an instance:</para>
<itemizedlist role="compact">
<listitem>
<para>created_at</para>
</listitem>
<listitem>
<para>updated_at</para>
</listitem>
<listitem>
<para>deleted_at</para>
</listitem>
<listitem>
<para>scheduled_at</para>
</listitem>
<listitem>
<para>launched_at</para>
</listitem>
<listitem>
<para>terminated_at</para>
</listitem>
</itemizedlist>
</section>
<section xml:id="user-facing-outro">
<title>Good Luck!</title>
<para>This section was intended as a brief introduction to
some of the most useful of many OpenStack commands. For
an exhaustive list please refer to the <link
xlink:href="http://docs.openstack.org/user-guide-admin/content/">Admin User
Guide</link>, and for additional hints and tips please see
the <link xlink:href="http://docs.openstack.org/admin-guide-cloud/content/">Cloud Admin Guide</link>.
We hope your users remain happy and recognise your hard work!
(For more hard work, turn the page to the next chapter where we discuss
the system-facing operations: Maintenance, Failures and Debugging.)</para>
</section>
</chapter>