Merge "Address editors comments on Chapter Scaling"

Jenkins 2014-01-31 22:30:31 +00:00 committed by Gerrit Code Review
commit ea3a6c5f31
2 changed files with 38 additions and 20 deletions

View File

@@ -12,11 +12,16 @@
 xml:id="scaling">
 <?dbhtml stop-chunking?>
 <title>Scaling</title>
-<para>If your cloud is successful, eventually you must add
-resources to meet the increasing demand. OpenStack is designed
-to be horizontally scalable. Rather than switching to larger
-servers, you procure more servers. Ideally, you scale out and
-load balance among functionally-identical services.</para>
+<para>Where traditional applications required larger hardware to scale
+("vertical scaling"), cloud-based applications typically request more
+discrete hardware ("horizontal scaling"). If your cloud is successful,
+eventually you must add resources to meet the increasing demand.
+To suit the cloud paradigm, OpenStack itself is designed
+to be horizontally scalable. Rather than switching to larger
+servers, you procure more servers and simply install identically
+configured services. Ideally, you scale out and load balance among
+groups of functionally-identical services (for example, "compute
+nodes", "nova-api nodes"), which communicate on a message bus.</para>
 <section xml:id="starting">
 <title>The Starting Point</title>
 <para>Determining the scalability of your cloud and how to
@@ -26,10 +31,16 @@
 metrics.</para>
 <para>The starting point for most is the core count of your
 cloud. By applying some ratios, you can gather information
-about the number of virtual machines (VMs) you expect to
-run <code>((overcommit fraction × cores) / virtual cores
-per instance)</code>, how much storage is required
-<code>(flavor disk size × number of instances)</code>.
+about:
+<itemizedlist>
+<listitem><para>the number of virtual machines (VMs) you
+expect to run
+<code>((overcommit fraction × cores) / virtual cores per instance)</code>,
+</para></listitem>
+<listitem><para>how much storage is required
+<code>(flavor disk size × number of instances)</code>.
+</para></listitem>
+</itemizedlist>
 You can use these ratios to determine how much additional
 infrastructure you need to support your cloud.</para>
 <para>The default OpenStack flavors are:</para>
@@ -82,7 +93,7 @@
 </tbody>
 </informaltable>
 <?hard-pagebreak?>
-<para>Assume that the following set-up supports (200 / 2) × 16
+<para>The following set-up supports (200 / 2) × 16
 = 1600 VM instances and requires 80 TB of storage for
 <code>/var/lib/nova/instances</code>:</para>
 <itemizedlist>
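As a sanity check, the two ratios above can be evaluated directly. The sketch below is a minimal Python example that reproduces the chapter's worked figures; it assumes 200 physical cores, a 16:1 CPU overcommit ratio, and a flavor with 2 vCPUs and 50 GB of disk (the flavor values are inferred from the 1600-instance / 80 TB result rather than stated in the excerpt above).

# Capacity ratios from the text; flavor constants (2 vCPUs, 50 GB disk)
# are inferred from the 1600-instance / 80 TB worked example.

def max_instances(physical_cores, cpu_overcommit, vcpus_per_instance):
    """(overcommit fraction x cores) / virtual cores per instance."""
    return (physical_cores * cpu_overcommit) // vcpus_per_instance

def storage_required_gb(flavor_disk_gb, instance_count):
    """flavor disk size x number of instances."""
    return flavor_disk_gb * instance_count

instances = max_instances(physical_cores=200, cpu_overcommit=16,
                          vcpus_per_instance=2)
storage_gb = storage_required_gb(flavor_disk_gb=50, instance_count=instances)

print(instances)                # 1600 VM instances
print(storage_gb / 1000, "TB")  # 80.0 TB for /var/lib/nova/instances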
@@ -131,9 +142,10 @@
 performance (spindles/core), memory availability
 (RAM/core), network bandwidth (Gbps/core), and overall CPU
 performance (CPU/core).</para>
-<para>For which metrics to track to determine how to scale
-your cloud, see <xref linkend="logging_monitoring"/>.
-</para>
+<tip><para>For further discussion of metric tracking, including
+how to extract metrics from your cloud, see
+<xref linkend="logging_monitoring"/>.
+</para></tip>
 </section>
 <?hard-pagebreak?>
 <section xml:id="add_controller_nodes">
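The per-core ratios mentioned in this hunk (spindles/core, RAM/core, Gbps/core, CPU/core) are plain divisions over a node's hardware inventory. A small illustrative sketch follows; the node fields and figures are hypothetical, not an OpenStack API.

# Hypothetical compute-node inventory; each ratio is resource / core count.
node = {"cores": 24, "spindles": 6, "ram_gb": 192, "nic_gbps": 10, "ghz_per_core": 2.6}

ratios = {
    "spindles/core": node["spindles"] / node["cores"],
    "RAM/core (GB)": node["ram_gb"] / node["cores"],
    "bandwidth/core (Gbps)": node["nic_gbps"] / node["cores"],
    "CPU/core (GHz)": node["ghz_per_core"],  # per-core clock as a rough proxy
}

for name, value in ratios.items():
    print(f"{name}: {value:.2f}")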
@@ -181,8 +193,19 @@
 your cloud: <emphasis>cells</emphasis>,
 <emphasis>regions</emphasis>,
 <emphasis>zones</emphasis> and <emphasis>host
-aggregates</emphasis>. Each method provides different
-functionality, as described in the following table:</para>
+aggregates</emphasis>.</para>
+<para>Each method provides different functionality, and can be best
+divided into two groups:</para>
+<itemizedlist>
+<listitem>
+<para>Cells and regions, which segregate an entire cloud and
+result in running separate Compute deployments.</para>
+</listitem>
+<listitem>
+<para><glossterm>Availability zone</glossterm>s and host
+aggregates, which merely divide a single Compute deployment.</para>
+</listitem>
+</itemizedlist>
 <informaltable rules="all">
 <thead>
 <tr>
@@ -285,11 +308,6 @@
 </tr>
 </tbody>
 </informaltable>
-<para>This array of options can be best divided into two
-&mdash; those which result in running separate nova deployments
-(cells and regions), and those which merely divide a
-single deployment (<glossterm>availability
-zone</glossterm>s and host aggregates).</para>
 <?hard-pagebreak?>
 <section xml:id="cells_regions">
 <title>Cells and Regions</title>
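To illustrate the second group, a host aggregate exposed as an availability zone can be carved out of a single running Compute deployment. The sketch below assumes an already-authenticated python-novaclient handle; the aggregate and host names are hypothetical, and the method signatures should be checked against the installed client version.

# Sketch: dividing one Compute deployment with a host aggregate that is
# exposed to users as an availability zone. Assumes `nova` is an
# authenticated python-novaclient Client; names below are hypothetical.

def divide_deployment(nova, zone_name, hosts):
    # AggregateManager.create(name, availability_zone) creates the aggregate
    # and tags it with the availability zone users can schedule against.
    aggregate = nova.aggregates.create(zone_name, zone_name)
    for host in hosts:
        # Hosts remain inside the same deployment; cells and regions, by
        # contrast, mean running separate Compute deployments.
        nova.aggregates.add_host(aggregate, host)
    return aggregate

# Example usage:
# divide_deployment(nova, 'rack-1', ['compute01', 'compute02'])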

View File

@@ -658,7 +658,7 @@ inner join nova.instances on cinder.volumes.instance_uuid=nova.instances.uuid
 <para>If you find that you have reached or are reaching
 the capacity limit of your computing resources, you
 should plan to add additional compute nodes. Adding
-more nodes is quite easy. The process for adding nodes
+more nodes is quite easy. The process for adding compute nodes
 is the same as when the initial compute nodes were
 deployed to your cloud: use an automated deployment
 system to bootstrap the bare-metal server with the