diff --git a/doc/openstack-ops/ch_arch_scaling.xml b/doc/openstack-ops/ch_arch_scaling.xml
index 69d06ec6..5e24fb28 100644
--- a/doc/openstack-ops/ch_arch_scaling.xml
+++ b/doc/openstack-ops/ch_arch_scaling.xml
@@ -12,11 +12,16 @@
xml:id="scaling">
Scaling
- If your cloud is successful, eventually you must add
- resources to meet the increasing demand. OpenStack is designed
+ Whereas traditional applications required larger hardware to scale
+ ("vertical scaling"), cloud-based applications typically request more
+ discrete hardware ("horizontal scaling"). If your cloud is successful,
+ eventually you must add resources to meet the increasing demand.
+ To suit the cloud paradigm, OpenStack itself is designed
to be horizontally scalable. Rather than switching to larger
- servers, you procure more servers. Ideally, you scale out and
- load balance among functionally-identical services.
+ servers, you procure more servers and simply install identically
+ configured services. Ideally, you scale out and load balance among
+ groups of functionally identical services (for example, "compute
+ nodes", "nova-api nodes"), which communicate on a message bus.
The Starting Point
Determining the scalability of your cloud and how to
@@ -26,10 +31,16 @@
metrics.
The starting point for most is the core count of your
cloud. By applying some ratios, you can gather information
- about the number of virtual machines (VMs) you expect to
- run ((overcommit fraction × cores) / virtual cores
- per instance)
, how much storage is required
+ about:
+
+ the number of virtual machines (VMs) you
+ expect to run
+ ((overcommit fraction × cores) / virtual cores per instance)
,
+
+ how much storage is required
(flavor disk size × number of instances)
.
+
+
You can use these ratios to determine how much additional
infrastructure you need to support your cloud.
The default OpenStack flavors are:
@@ -82,7 +93,7 @@
- Assume that the following set-up supports (200 / 2) × 16
+ The following set-up supports (200 / 2) × 16
= 1600 VM instances and requires 80 TB of storage for
/var/lib/nova/instances
:
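
As a quick check on these ratios, here is a minimal Python sketch that
reproduces the figures above. The 16:1 CPU overcommit ratio, two virtual
cores per instance, and 50 GB of disk per instance are assumptions
inferred from the (200 / 2) × 16 = 1600 and 80 TB results, not values
stated elsewhere in this guide.

    # Capacity estimate following the ratios above. Assumed inputs:
    # 200 physical cores, 16:1 CPU overcommit, and a flavor with
    # 2 vCPUs and 50 GB of disk (inferred from the worked example).
    physical_cores = 200
    cpu_overcommit = 16          # virtual cores offered per physical core
    vcpus_per_instance = 2
    disk_gb_per_instance = 50

    # (overcommit fraction × cores) / virtual cores per instance
    max_instances = (cpu_overcommit * physical_cores) // vcpus_per_instance

    # flavor disk size × number of instances, in decimal terabytes
    storage_tb = disk_gb_per_instance * max_instances / 1000

    print(max_instances, "instances,", storage_tb, "TB for /var/lib/nova/instances")
    # prints: 1600 instances, 80.0 TB for /var/lib/nova/instances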
@@ -131,9 +142,10 @@
performance (spindles/core), memory availability
(RAM/core), network bandwidth (Gbps/core), and overall CPU
performance (CPU/core).
- For which metrics to track to determine how to scale
- your cloud, see .
-
+ For further discussion of metric tracking, including
+ how to extract metrics from your cloud, see
+ .
+
@@ -181,8 +193,19 @@
your cloud: cells,
regions,
zones and host
- aggregates. Each method provides different
- functionality, as described in the following table:
+ aggregates.
+ Each method provides different functionality and can best be
+ divided into two groups:
+
+
+ Cells and regions, which segregate an entire cloud and
+ result in running separate Compute deployments.
+
+
+ Availability zones and host
+ aggregates, which merely divide a single Compute deployment
+ (see the sketch after this list).
+
+
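
To make the second group concrete, here is a short sketch using the
openstacksdk library that carves a host aggregate (exposed to users as
an availability zone) out of a single Compute deployment. The cloud
entry "mycloud" and host "compute01" are hypothetical placeholders, and
the SDK calls shown are assumptions based on recent openstacksdk
releases rather than part of this guide.

    # Divide one Compute deployment with a host aggregate that is
    # exposed to users as availability zone "rack1".
    import openstack

    conn = openstack.connect(cloud="mycloud")   # credentials from clouds.yaml

    agg = conn.compute.create_aggregate(name="rack1-agg",
                                        availability_zone="rack1")

    # Aggregate membership stays operator-facing; users only see the zone.
    conn.compute.add_host_to_aggregate(agg, "compute01")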
@@ -285,12 +308,7 @@
- This array of options can be best divided into two
- — those which result in running separate nova deployments
- (cells and regions), and those which merely divide a
- single deployment (availability
- zones and host aggregates).
-
+
Cells and Regions
OpenStack Compute cells are designed to allow
diff --git a/doc/openstack-ops/ch_ops_maintenance.xml b/doc/openstack-ops/ch_ops_maintenance.xml
index f1b9ff45..5af481b5 100644
--- a/doc/openstack-ops/ch_ops_maintenance.xml
+++ b/doc/openstack-ops/ch_ops_maintenance.xml
@@ -658,7 +658,7 @@ inner join nova.instances on cinder.volumes.instance_uuid=nova.instances.uuid
If you find that you have reached or are reaching
the capacity limit of your computing resources, you
should plan to add additional compute nodes. Adding
- more nodes is quite easy. The process for adding nodes
+ more nodes is quite easy. The process for adding compute nodes
is the same as when the initial compute nodes were
deployed to your cloud: use an automated deployment
system to bootstrap the bare-metal server with the