Change service names to follow conventions

Follow the Documentation conventions:
https://wiki.openstack.org/wiki/Documentation/Conventions

Change-Id: I16078f5bfc3c47002f43f23be83b03c1f6f938fe
KATO Tomoyuki 2015-06-17 10:52:29 +09:00
parent 341e18b95d
commit 0393934828
15 changed files with 73 additions and 76 deletions

View File

@ -47,10 +47,10 @@
sites span multiple data centers, some use off compute node storage with
a shared file system, and some use on compute node storage with a
non-shared file system. Each site deploys the Image service with an
Object Storage back end. A central Identity Service, dashboard, and
Object Storage back end. A central Identity, dashboard, and
Compute API service are used. A login to the dashboard triggers a SAML
login with Shibboleth, which creates an <glossterm>account</glossterm>
in the Identity Service with a SQL back end. An Object Storage Global
in the Identity service with a SQL back end. An Object Storage Global
Cluster is used across several sites.</para>
<para>Compute nodes have 24 to 48 cores, with at least 4 GB of RAM per
@ -136,7 +136,7 @@
core.</para>
<para>With our upgrade to Grizzly in August 2013, we moved to OpenStack
Networking Service, neutron (quantum at the time). Compute nodes have
Networking, neutron (quantum at the time). Compute nodes have
two-gigabit network interfaces and a separate management card for IPMI
management. One network interface is used for node-to-node
communications. The other is used as a trunk port for OpenStack managed
@ -267,7 +267,7 @@
instance provisioning.</para>
<para>Users and groups are managed through Active Directory and imported
into the Identity Service using LDAP.&#160;CLIs are available for nova
into the Identity service using LDAP.&#160;CLIs are available for nova
and Euca2ools to do this.</para>
<para>There are three clouds currently running at CERN, totaling about

View File

@ -121,7 +121,7 @@
<listitem>
<para>Offers each service's REST API access, where the API endpoint
catalog is managed by the Identity Service</para>
catalog is managed by the Identity service</para>
</listitem>
</varlistentry>
</variablelist>
@ -321,7 +321,7 @@
<td><para>This deployment felt that the spare I/O on the Object
Storage proxy server was sufficient and that the Image Delivery
portion of glance benefited from being on physical hardware and
having good connectivity to the Object Storage backend it was
having good connectivity to the Object Storage back end it was
using.</para></td>
</tr>
@ -587,7 +587,7 @@
<para>The OpenStack Image service consists of two parts:
<code>glance-api</code> and <code>glance-registry</code>. The former is
responsible for the delivery of images; the compute node uses it to
download images from the backend. The latter maintains the metadata
download images from the back end. The latter maintains the metadata
information associated with virtual machine images and requires a
database.<indexterm class="singular">
<primary>glance</primary>
@ -612,7 +612,7 @@
</indexterm></para>
<para>The <code>glance-api</code> part is an abstraction layer that allows
a choice of backend. Currently, it supports:</para>
a choice of back end. Currently, it supports:</para>
<variablelist>
<varlistentry>
@ -705,22 +705,22 @@
group. Resources quotas, such as the number of cores that can be used,
disk space, and so on, are associated with a project.</para>
<para>The OpenStack Identity Service (keystone) is the point that provides
<para>OpenStack Identity is the service that provides
the authentication decisions and user attribute information, which is then
used by the other OpenStack services to perform authorization. Policy is
set in the <filename>policy.json</filename> file. For <phrase
role="keep-together">information</phrase> on how to configure these, see
<xref linkend="projects_users" />.<indexterm class="singular">
<primary>Identity Service</primary>
<primary>Identity</primary>
<secondary>authentication decisions</secondary>
</indexterm><indexterm class="singular">
<primary>Identity Service</primary>
<primary>Identity</primary>
<secondary>plug-in support</secondary>
</indexterm></para>
<para>The Identity Service supports different plug-ins for authentication
<para>OpenStack Identity supports different plug-ins for authentication
decisions and identity storage. Examples of these plug-ins include:</para>
<itemizedlist role="compact">
@ -752,7 +752,7 @@
<para>Because the cloud controller handles so many different services, it
must be able to handle the amount of traffic that hits it. For example, if
you choose to host the OpenStack Imaging Service on the cloud controller,
you choose to host the OpenStack Image service on the cloud controller,
the cloud controller should be able to support the transferring of the
images at an acceptable speed.<indexterm class="singular">
<primary>cloud controllers</primary>

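As a rough illustration of the glance-api/glance-registry split described above, listing images exercises the metadata path while downloading one exercises the delivery path. A minimal sketch, assuming the Havana-era glance client and a placeholder image ID:

# Metadata is served by glance-registry (reached through the API service)
glance image-list
# The image bits are streamed by glance-api from whichever back end is configured
glance image-download --file test.qcow2 <IMAGE_ID>
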
View File

@ -20,7 +20,7 @@
each as well as a rationale for why it worked well in a given
environment.</para>
<para>Because OpenStack is highly configurable, with many different backends
<para>Because OpenStack is highly configurable, with many different back ends
and network configuration options, it is difficult to write documentation
that covers all possible OpenStack deployments. Therefore, this guide
defines example architectures to simplify the task of documenting, as well
@ -48,4 +48,4 @@
or the <link xlink:href="http://www.openstack.org/user-stories/">OpenStack User Stories
page</link>.</para>
</section>
</chapter>
</chapter>

View File

@ -193,8 +193,8 @@
<para>These volumes are persistent: they can be detached from one
instance and re-attached to another, and the data remains intact. Block
storage is implemented in OpenStack by the OpenStack Block Storage
(cinder) project, which supports multiple backends in the form of
drivers. Your choice of a storage backend must be supported by a Block
(cinder) project, which supports multiple back ends in the form of
drivers. Your choice of a storage back end must be supported by a Block
Storage driver.</para>
<para>Most block storage drivers allow the instance to have direct
@ -351,14 +351,14 @@
instantly when starting a new instance. For other systems, ephemeral
storage—storage that is released when a VM attached to it is shut down— is
the preferred way. When you select <glossterm>storage
backend</glossterm>s, <indexterm class="singular">
back end</glossterm>s, <indexterm class="singular">
<primary>storage</primary>
<secondary>choosing backends</secondary>
<secondary>choosing back ends</secondary>
</indexterm><indexterm class="singular">
<primary>storage backend</primary>
<primary>storage back end</primary>
</indexterm><indexterm class="singular">
<primary>backend interactions</primary>
<primary>back end interactions</primary>
<secondary>store</secondary>
</indexterm>ask the following questions on behalf of your users:</para>
@ -609,7 +609,7 @@
<title>Commodity Storage Backend Technologies</title>
<para>This section provides a high-level overview of the differences
among the different commodity storage backend technologies. Depending on
among the different commodity storage back end technologies. Depending on
your cloud user's needs, you can implement one or many of these
technologies in different combinations:<indexterm class="singular">
<primary>storage</primary>
@ -661,7 +661,7 @@
storage, and file-system interfaces, although the file-system
interface is not yet considered production-ready. Ceph supports
the same API as swift for object storage and can be used as a
backend for cinder block storage as well as backend storage for
back end for cinder block storage as well as back-end storage for
glance images. Ceph supports "thin provisioning," implemented
using copy-on-write.</para>
@ -697,7 +697,7 @@
3.3, you can use Gluster to consolidate your object storage and
file storage into one unified file and object storage solution,
which is called Gluster For OpenStack (GFO). GFO uses a customized
version of swift that enables Gluster to be used as the backend
version of swift that enables Gluster to be used as the back-end
storage.</para>
<para>The main reason to use GFO rather than regular swift is if
@ -717,7 +717,7 @@
<listitem>
<para>The Logical Volume Manager is a Linux-based system that
provides an abstraction layer on top of physical disks to expose
logical volumes to the operating system. The LVM backend
logical volumes to the operating system. The LVM back-end
implements block storage as LVM logical partitions.</para>
<para>On each host that will house block storage, an administrator
@ -748,7 +748,7 @@
number of advantages over ext4, including improved data-integrity
checking.</para>
<para>The ZFS backend for OpenStack Block Storage supports only
<para>The ZFS back end for OpenStack Block Storage supports only
Solaris-based systems, such as Illumos. While there is a Linux
port of ZFS, it is not included in any of the standard Linux
distributions, and it has not been tested with OpenStack Block
@ -758,7 +758,7 @@
failures.</para>
<para>We don't recommend ZFS unless you have previous experience
with deploying it, since the ZFS backend for Block Storage
with deploying it, since the ZFS back end for Block Storage
requires a Solaris-based operating system, and we assume that your
experience is primarily with Linux-based systems.</para>
</listitem>

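Selecting the LVM back end discussed above comes down to a couple of Block Storage options. A sketch, assuming Havana-era option names and the conventional volume group name:

# /etc/cinder/cinder.conf (fragment)
[DEFAULT]
volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
volume_group = cinder-volumes
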
View File

@ -166,7 +166,7 @@ find $backup_dir -ctime +7 -type f -delete</programlisting>
<para><code>/var/lib/glance</code> should also be backed up. Take
special notice of <code>/var/lib/glance/images</code>. If you are using
a file-based backend of glance, <code>/var/lib/glance/images</code> is
a file-based back end of glance, <code>/var/lib/glance/images</code> is
where the images are stored and care should be taken.</para>
<para>There are two ways to ensure stability with this directory. The
@ -183,7 +183,7 @@ backup-server:/var/lib/glance/images/</programlisting>
<para><code>/etc/keystone</code> and <code>/var/log/keystone</code>
follow the same rules as other components.<indexterm class="singular">
<primary>Identity Service</primary>
<primary>Identity</primary>
<secondary>backup/recovery</secondary>
</indexterm></para>

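A sketch of the copy jobs implied above, assuming a file-based glance store and a reachable backup host (destination host and paths are assumptions):

# Sync the image store off-box
rsync -az /var/lib/glance/images/ backup-server:/var/lib/glance/images/
# Archive Identity configuration and logs alongside the other components
tar czf /backup/keystone-$(date +%F).tar.gz /etc/keystone /var/log/keystone
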
View File

@ -243,7 +243,7 @@ RABBIT_PASSWORD=devstack
SERVICE_PASSWORD=devstack
SERVICE_TOKEN=devstack
# OpenStack Identity Service branch
# OpenStack Identity branch
KEYSTONE_BRANCH=stable/havana
# OpenStack Compute branch
@ -255,7 +255,7 @@ CINDER_BRANCH=stable/havana
# OpenStack Image service branch
GLANCE_BRANCH=stable/havana
# OpenStack Dashboard branch
# OpenStack dashboard branch
HORIZON_BRANCH=stable/havana
# OpenStack Object Storage branch

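With a localrc along those lines saved in the DevStack checkout, the environment is built by running the setup script from that directory:

./stack.sh
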
View File

@ -502,10 +502,10 @@ cloud.example.com nova</programlisting>
<para>With these two tables, you now have a good overview of what
servers and services make up your cloud.</para>
<para>You can also use the Identity Service (keystone) to see what
<para>You can also use the Identity service (keystone) to see what
services are available in your cloud as well as what endpoints have been
configured for the services.<indexterm class="singular">
<primary>Identity Service</primary>
<primary>Identity</primary>
<secondary>displaying services and endpoints with</secondary>
</indexterm></para>

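For example, with admin credentials loaded, the Havana-era keystone client exposes both views directly:

# Services registered in the catalog
keystone service-list
# Endpoints configured for those services
keystone endpoint-list
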
View File

@ -209,7 +209,7 @@
2013-02-25 21:05:51 17409 TRACE cinder</computeroutput></screen>
<para>In this example, <literal>cinder-volumes</literal> failed to start
and has provided a stack trace, since its volume backend has been unable
and has provided a stack trace, since its volume back end has been unable
to set up the storage volume—probably because the LVM volume that is
expected from the configuration does not exist.</para>
@ -825,7 +825,7 @@ notification_driver=messagingv2</programlisting>
<para>But how can you tell whether images are being successfully
uploaded to the Image service? Maybe the disk that Image service is
storing the images on is full or the S3 backend is down. You could
storing the images on is full or the S3 back end is down. You could
naturally check this by doing a quick image upload:</para>
<?hard-pagebreak ?>

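Two quick checks in the same spirit, assuming the conventional volume group name and a locally available test image:

# Check that the LVM volume group the configuration expects is present (name assumed)
vgs cinder-volumes
# Smoke-test an image upload to the Image service
glance image-create --name test-upload --disk-format qcow2 \
    --container-format bare --file cirros.qcow2
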
View File

@ -928,7 +928,7 @@ inner join nova.instances on cinder.volumes.instance_uuid=nova.instances.uuid
deployed to your cloud: use an automated deployment system to bootstrap
the bare-metal server with the operating system and then have a
configuration-management system install and configure OpenStack Compute.
Once the Compute Service has been installed and configured in the same
Once the Compute service has been installed and configured in the same
way as the other compute nodes, it automatically attaches itself to the
cloud. The cloud controller notices the new node(s) and begins
scheduling instances to launch there.<indexterm class="singular">

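Once the configuration-management run finishes, a quick way to confirm the node attached itself is to list the compute services and hypervisors (admin credentials assumed):

nova service-list
nova hypervisor-list
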
View File

@ -145,9 +145,9 @@
<title>Visualizing OpenStack Networking Service Traffic in the
Cloud</title>
<para>The OpenStack Networking Service, neutron, has many more degrees of
<para>OpenStack Networking has many more degrees of
freedom than <literal>nova-network</literal> does because of its pluggable
backend. It can be configured with open source or vendor proprietary
back end. It can be configured with open source or vendor proprietary
plug-ins that control software defined networking (SDN) hardware or
plug-ins that use Linux native facilities on your hosts, such as Open
vSwitch or Linux Bridge.<indexterm class="startofrange" xml:id="Topen">
@ -164,8 +164,8 @@
various components involved however they are plumbed together in your
environment.</para>
<para>For this example, we will use the Open vSwitch (OVS) backend. Other
backend plug-ins will have very different flow paths. OVS is the most
<para>For this example, we will use the Open vSwitch (OVS) back end. Other
back-end plug-ins will have very different flow paths. OVS is the most
popularly deployed network driver, according to the October 2013 OpenStack
User Survey, with 50 percent more sites using it than the second place
Linux Bridge driver. We'll describe each step in turn, with <xref
@ -1134,7 +1134,7 @@ proto UDP (17), length 75)
<section xml:id="trouble_shooting_ovs">
<title>Troubleshooting Open vSwitch</title>
<para>Open vSwitch, as used in the previous OpenStack Networking Service
<para>Open vSwitch, as used in the previous OpenStack Networking
examples is a full-featured multilayer virtual switch licensed under the
open source Apache 2.0 license. Full documentation can be found at <link
xlink:href="http://openvswitch.org/">the project's website</link>. In

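When tracing traffic through the OVS back end, the usual starting points are the bridge layout and the flow tables. The integration bridge name below is the common default, not a given:

# Bridge and port layout on the node
ovs-vsctl show
# Flows programmed on the integration bridge (bridge name assumed)
ovs-ofctl dump-flows br-int
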
View File

@ -20,9 +20,9 @@
<para>While version 3 of the Identity API is available, the client tools
do not yet implement those calls, and most OpenStack clouds are still
implementing Identity API v2.0.<indexterm class="singular">
<primary>Identity Service</primary>
<primary>Identity</primary>
<secondary>Identity Service API</secondary>
<secondary>Identity service API</secondary>
</indexterm></para>
</warning>
@ -46,10 +46,10 @@
<secondary>definition of</secondary>
</indexterm></para>
<para>The initial implementation of the OpenStack Compute Service (nova)
<para>The initial implementation of OpenStack Compute
had its own authentication system and used the term
<literal>project</literal>. When authentication moved into the OpenStack
Identity Service (keystone) project, it used the term
Identity (keystone) project, it used the term
<literal>tenant</literal> to refer to a group of users. Because of this
legacy, some of the OpenStack tools refer to projects and some refer to
tenants.</para>
@ -156,7 +156,7 @@
</warning>
<para>Using the command-line interface, you can manage quotas for the
OpenStack Compute Service and the Block Storage Service.</para>
OpenStack Compute service and the Block Storage service.</para>
<para>Typically, default values are changed because a tenant requires more
than the OpenStack default of 10 volumes per tenant, or more than the
@ -217,12 +217,12 @@
<section xml:id="cli_set_compute_quotas">
<title>Set Compute Service Quotas</title>
<para>As an administrative user, you can update the Compute Service
<para>As an administrative user, you can update the Compute service
quotas for an existing tenant, as well as update the quota defaults for
a new tenant.<indexterm class="singular">
<primary>Compute</primary>
<secondary>Compute Service</secondary>
<secondary>Compute service</secondary>
</indexterm> See <xref linkend="compute-quota-table" />.</para>
<table rules="all" xml:id="compute-quota-table">
@ -593,7 +593,7 @@ Accept-Ranges: bytes</computeroutput></screen>
<title>Set Block Storage Quotas</title>
<para>As an administrative user, you can update the Block Storage
Service quotas for a tenant, as well as update the quota defaults for a
service quotas for a tenant, as well as update the quota defaults for a
new tenant. See <xref linkend="block-storage-quota-table" />.<indexterm
class="singular">
<primary>Block Storage</primary>

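A sketch of the quota workflow described above (the tenant ID and limits are placeholders):

# Raise Compute quotas for one tenant
nova quota-update --instances 20 --cores 40 TENANT_ID
nova quota-show --tenant TENANT_ID
# Raise Block Storage quotas for the same tenant
cinder quota-update --volumes 20 --gigabytes 2000 TENANT_ID
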
View File

@ -216,21 +216,20 @@
<orderedlist>
<listitem>
<para>Upgrade the OpenStack Identity Service
(keystone).</para>
<para>Upgrade OpenStack Identity.</para>
</listitem>
<listitem>
<para>Upgrade the OpenStack Image service (glance).</para>
<para>Upgrade the OpenStack Image service.</para>
</listitem>
<listitem>
<para>Upgrade OpenStack Compute (nova), including networking
<para>Upgrade OpenStack Compute, including networking
components.</para>
</listitem>
<listitem>
<para>Upgrade OpenStack Block Storage (cinder).</para>
<para>Upgrade OpenStack Block Storage.</para>
</listitem>
<listitem>
@ -332,9 +331,8 @@ scheduler=havana</programlisting>
Installation Guide</citetitle></link> and upgrading to the
same architecture for Havana. All nodes should run Ubuntu 12.04
LTS. This section primarily addresses upgrading core OpenStack
services, such as the Identity Service (keystone), Image service
(glance), Compute (nova) including networking, Block Storage
(cinder), and the dashboard.<indexterm class="startofrange"
services, such as Identity, Image service, Compute including networking,
Block Storage, and the dashboard.<indexterm class="startofrange"
xml:id="UPubuntu">
<primary>upgrading</primary>
<secondary>Grizzly to Havana (Ubuntu)</secondary>
@ -703,10 +701,9 @@ auth_uri = http://controller:5000</programlisting>
same architecture for Havana. All nodes should run Red Hat
Enterprise Linux 6.4 or compatible derivatives. Newer minor
releases should also work. This section primarily addresses
upgrading core OpenStack services, such as the Identity Service
(keystone), Image service (glance), Compute (nova) including
networking, Block Storage (cinder), and the dashboard.<indexterm
class="startofrange" xml:id="UPredhat">
upgrading core OpenStack services, such as the Identity,
Image service, Compute including networking, Block Storage,
and the dashboard.<indexterm class="startofrange" xml:id="UPredhat">
<primary>upgrading</primary>
<secondary>Grizzly to Havana (Red Hat)</secondary>
</indexterm></para>

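A per-service sketch of that ordering on the Ubuntu nodes, starting with Identity; package and service names are assumptions, and each subsequent service follows the same stop, upgrade, migrate, start pattern:

service keystone stop
apt-get install keystone python-keystone
keystone-manage db_sync
service keystone start
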
View File

@ -68,7 +68,7 @@
</variablelist>
<para>As shown, end users can interact through the dashboard, CLIs, and
APIs. All services authenticate through a common Identity Service, and
APIs. All services authenticate through a common Identity service, and
individual services interact with each other through public APIs, except
where privileged administrator commands are necessary. <xref
linkend="openstack-diagram" /> shows the most common, but not the

View File

@ -101,19 +101,19 @@
</tr>
<tr>
<td><para>Image service (glance) backend</para></td>
<td><para>Image service back end</para></td>
<td><para>GlusterFS</para></td>
</tr>
<tr>
<td><para>Identity Service (keystone) driver</para></td>
<td><para>Identity driver</para></td>
<td><para>SQL</para></td>
</tr>
<tr>
<td><para>Block Storage Service (cinder) backend</para></td>
<td><para>Block Storage back end</para></td>
<td><para>GlusterFS</para></td>
</tr>
@ -176,7 +176,7 @@
<term>MySQL</term>
<listitem>
<para>MySQL is used as the database backend for all databases in
<para>MySQL is used as the database back end for all databases in
the OpenStack environment. MySQL is the supported database of
choice for Red Hat Enterprise Linux (and included in
distribution); the database is open source, scalable, and handles
@ -863,7 +863,7 @@
role="keep-together"><literal>qpid_heartbeat = </literal><phrase
role="keep-together"><literal>10</literal>,</phrase></phrase><phrase
role="keep-together"> configured to use a Gluster</phrase> volume
from the storage layer as the backend for Block Storage, using the
from the storage layer as the back end for Block Storage, using the
Gluster native client.</td>
<td>Block Storage API, scheduler, and volume services are run on all

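The Block Storage row of that table reduces to a short configuration fragment; a sketch assuming Havana-era option names and a shares file location:

# /etc/cinder/cinder.conf (fragment)
[DEFAULT]
volume_driver = cinder.volume.drivers.glusterfs.GlusterfsDriver
glusterfs_shares_config = /etc/cinder/glusterfs_shares
qpid_heartbeat = 10
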
View File

@ -142,13 +142,13 @@
</tr>
<tr>
<td><para>Identity Service (keystone) driver</para></td>
<td><para>Identity (keystone) driver</para></td>
<td><para>SQL</para></td>
</tr>
<tr>
<td><para>Block Storage Service (cinder) back end</para></td>
<td><para>Block Storage (cinder) back end</para></td>
<td><para>LVM/iSCSI</para></td>
</tr>
@ -321,8 +321,8 @@
your cloud will include Object Storage, you can easily add it as a
back end.</para>
<para>We chose the <emphasis>SQL back end for the Identity Service
(keystone)</emphasis> over others, such as LDAP. This back end is simple
<para>We chose the <emphasis>SQL back end for Identity</emphasis>
over others, such as LDAP. This back end is simple
to install and is robust. The authors acknowledge that many
installations want to bind with existing directory services and caution
careful understanding of the <link xlink:href="http://docs.openstack.org/havana/config-reference/content/ch_configuring-openstack-identity.html#configuring-keystone-for-ldap-backend"
@ -331,7 +331,7 @@
<para>Block Storage (cinder) is installed natively on external storage
nodes and uses the <emphasis>LVM/iSCSI plug-in</emphasis>. Most Block
Storage Service plug-ins are tied to particular vendor products and
Storage plug-ins are tied to particular vendor products and
implementations limiting their use to consumers of those hardware
platforms, but LVM/iSCSI is robust and stable on commodity
hardware.</para>
@ -346,15 +346,15 @@
</section>
<section xml:id="neutron">
<title>Why not use the OpenStack Network Service (neutron)?</title>
<title>Why not use OpenStack Networking?</title>
<para>This example architecture does not use the OpenStack Network
Service (neutron), because it does not yet support multi-host networking
<para>This example architecture does not use OpenStack Networking,
because it does not yet support multi-host networking
and our organizations (university, government) have access to a large
range of publicly-accessible IPv4 addresses.<indexterm class="singular">
<primary>legacy networking (nova)</primary>
<secondary>vs. OpenStack Network Service (neutron)</secondary>
<secondary>vs. OpenStack Networking (neutron)</secondary>
</indexterm></para>
</section>
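The SQL back end chosen for Identity in this architecture is a one-line driver selection; a sketch assuming the Havana-era driver path:

# /etc/keystone/keystone.conf (fragment)
[identity]
driver = keystone.identity.backends.sql.Identity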