Liberty release cruft updates

This patch updates the ops guide to remove some ancient cruft
and update "current" links to the Liberty release URLs. Includes:
* remove note on periodic tasks fixed in Grizzly
* IPv6 support now works in neutron
* global clustering is no longer a new feature
* glance quotas are no longer a new feature
* object quotas are no longer a new feature

Change-Id: Ic572e7872b65fd308227add2a1aafac262c435da
Tom Fifield 2015-10-15 15:40:31 +08:00 committed by Andreas Jaeger
parent 8082838bfd
commit 3a5809a2b2
3 changed files with 15 additions and 40 deletions


@@ -169,7 +169,7 @@
 <para>The best information available to support your choice is found on
 the <link xlink:href="http://docs.openstack.org/developer/nova/support-matrix.html"
 xlink:title="reference manual">Hypervisor Support Matrix</link> and in the
-<link xlink:href="http://docs.openstack.org/juno/config-reference/content/section_compute-hypervisors.html"
+<link xlink:href="http://docs.openstack.org/liberty/config-reference/content/section_compute-hypervisors.html"
 xlink:title="configuration reference">configuration
 reference</link>.</para>


@@ -211,10 +211,7 @@
 <section xml:id="adv-config-ipv6">
 <title>Enabling IPv6 Support</title>
-<para>The Havana release with OpenStack Networking (neutron) does not
-offer complete support of IPv6. Better support has been delivered in the
-Kilo release, and will continue to improve in Liberty.
-You can follow along the progress being made by
+<para>You can follow the progress being made on IPV6 support by
 watching the <link xlink:href="https://wiki.openstack.org/wiki/Meetings/Neutron-IPv6-Subteam">neutron IPv6
 Subteam at work</link>.<indexterm class="singular">
 <primary>Liberty</primary>
@@ -237,36 +234,18 @@
 enabled cloud”</link>.</para>
 </section>
-<section xml:id="specific-advanced-config-period-tasks">
-<title>Periodic Task Frequency for Compute</title>
-<para>Before the Grizzly release, the frequency of periodic tasks was
-specified in seconds between runs. This meant that if the periodic task
-took 30 minutes to run and the frequency was set to hourly, then the
-periodic task actually ran every 90 minutes, because the task would wait
-an hour after running before running again. This changed in Grizzly, and
-we now time the frequency of periodic tasks from the start of the work
-the task does. So, our 30 minute periodic task will run every hour, with
-a 30 minute wait between the end of the first run and the start of the
-next.<indexterm class="singular">
-<primary>configuration options</primary>
-<secondary>periodic task frequency</secondary>
-</indexterm></para>
-</section>
 <section xml:id="adv-config-geography">
 <title>Geographical Considerations for Object Storage</title>
-<para>Enhanced support for global clustering of object storage servers
-continues to be added since the Grizzly (1.8.0) release, when regions
-were introduced. You would implement these global clusters to ensure
-replication across geographic areas in case of a natural disaster and
-also to ensure that users can write or access their objects more quickly
-based on the closest data center. You configure a default region with
-one zone for each cluster, but be sure your network (WAN) can handle the
-additional request and response load between zones as you add more zones
-and build a ring that handles more zones. Refer to <link
+<para>Support for global clustering of object storage servers
+is available for all supported releases. You would implement these global
+clusters to ensure replication across geographic areas in case of a
+natural disaster and also to ensure that users can write or access their
+objects more quickly based on the closest data center. You configure a
+default region with one zone for each cluster, but be sure your network
+(WAN) can handle the additional request and response load between
+zones as you add more zones and build a ring that handles more zones.
+Refer to <link
 xlink:href="http://docs.openstack.org/developer/swift/admin_guide.html#geographically-distributed-clusters">Geographically Distributed
 Clusters</link> in the documentation for additional
 information.<indexterm class="singular">
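
Note on the global-clustering hunk above: when a ring spans more than one region, the proxy servers are usually also tuned to prefer the local region. A minimal sketch of the relevant proxy-server.conf settings, assuming a local region named r1; these values are illustrative and are not part of this patch:

[app:proxy-server]
use = egg:swift#proxy
# serve reads from the local region first
sorting_method = affinity
read_affinity = r1=100
# send writes to local nodes first; replication later moves data to remote regions
write_affinity = r1
write_affinity_node_count = 2 * replicas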


@@ -178,8 +178,7 @@
 <section xml:id="set_image_quotas">
 <title>Set Image Quotas</title>
-<para>OpenStack Havana introduced a basic quota feature for the Image
-service, so you can now restrict a project's image storage by total
+<para>You can restrict a project's image storage by total
 number of bytes. Currently, this quota is applied cloud-wide, so if you
 were to set an Image quota limit of 5 GB, then all projects in your
 cloud will be able to store only 5 GB of images and snapshots.<indexterm
@@ -201,14 +200,12 @@
 <programlisting language="ini">user_storage_quota = 5368709120</programlisting>
 <note>
-<para>In the Icehouse release, there is a configuration option in
+<para>There is a configuration option in
 <filename>glance-api.conf</filename> that limits the number of members
 allowed per image, called <code>image_member_quota</code>, set to 128
 by default. That setting is a different quota from the storage
 quota.<indexterm class="singular">
-<primary>Icehouse</primary>
-<secondary>image quotas</secondary>
+<primary>image quotas</primary>
 </indexterm></para>
 </note>
 </section>
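
For reference on the two glance options this hunk touches: 5368709120 bytes is 5 GiB (5 × 1024³), matching the 5 GB example in the text. A minimal glance-api.conf sketch with illustrative values, not part of this patch:

[DEFAULT]
# cap on stored image and snapshot data, in bytes (5 GiB here)
user_storage_quota = 5368709120
# separate limit on how many members can be added to a single image
image_member_quota = 128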
@@ -488,8 +485,7 @@
 <section xml:id="cli_set_object_storage_quotas">
 <title>Set Object Storage Quotas</title>
-<para>Object Storage quotas were introduced in Swift 1.8 (OpenStack
-Grizzly). There are currently two categories of quotas for Object
+<para>There are currently two categories of quotas for Object
 Storage:<indexterm class="singular">
 <primary>account quotas</primary>
 </indexterm><indexterm class="singular">
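
The two quota categories named in this last hunk (account quotas and container quotas) correspond to two pieces of Swift middleware. A hedged sketch of how they are typically enabled in proxy-server.conf; the pipeline shown here is simplified and a real deployment carries more middleware:

[pipeline:main]
# quota middleware sits after auth and before the proxy app
pipeline = catch_errors cache authtoken keystoneauth account_quotas container_quotas proxy-server

[filter:account_quotas]
use = egg:swift#account_quotas

[filter:container_quotas]
use = egg:swift#container_quotas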