OpenStack Proposal Bot fcc7393bd0 Imported Translations from Transifex
For more information about this automatic import see:
https://wiki.openstack.org/wiki/Translations/Infrastructure

Change-Id: I6530a51589e88a911cd9c37f38564776b31cb366
2015-06-15 06:00:39 +00:00


msgid ""
msgstr ""
"Project-Id-Version: PACKAGE VERSION\n"
"POT-Creation-Date: 2015-06-15 06:00+0000\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
#. When image changes, this message will be marked fuzzy or untranslated for you.
#. It doesn't matter what you translate it to: it's not used at all.
#: ./doc/openstack-ops/app_roadmaps.xml:45(None)
msgid "@@image: 'http://git.openstack.org/cgit/openstack/operations-guide/plain/doc/openstack-ops/figures/osog_ac01.png'; md5=THIS FILE DOESN'T EXIST"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:10(title)
msgid "Working with Roadmaps"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:12(para)
msgid "The good news: OpenStack has unprecedented transparency when it comes to providing information about what's coming up. The bad news: each release moves very quickly. The purpose of this appendix is to highlight some of the useful pages to track, and take an educated guess at what is coming up in the Kilo release and perhaps further afield.<indexterm class=\"singular\"><primary>Kilo</primary><secondary>upcoming release of</secondary></indexterm><indexterm class=\"singular\"><primary>OpenStack community</primary><secondary>working with roadmaps</secondary><tertiary>release cycle</tertiary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:28(para)
msgid "OpenStack follows a six month release cycle, typically releasing in April/May and October/November each year. At the start of each cycle, the community gathers in a single location for a design summit. At the summit, the features for the coming releases are discussed, prioritized, and planned. <xref linkend=\"release-cycle-diagram\"/> shows an example release cycle, with dates showing milestone releases, code freeze, and string freeze dates, along with an example of when the summit occurs. Milestones are interim releases within the cycle that are available as packages for download and testing. Code freeze is putting a stop to adding new features to the release. String freeze is putting a stop to changing any strings within the source code."
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:41(title)
msgid "Release cycle diagram"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:51(title)
msgid "Information Available to You"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:53(para)
msgid "There are several good sources of information available that you can use to track your OpenStack development desires.<indexterm class=\"singular\"><primary>OpenStack community</primary><secondary>working with roadmaps</secondary><tertiary>information available</tertiary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:63(para)
msgid "Release notes are maintained on the OpenStack wiki, and also shown here:"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:69(th)
msgid "Series"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:71(th)
msgid "Status"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:73(th)
msgid "Releases"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:75(th)
msgid "Date"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:81(para)
msgid "Liberty"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:84(link)
msgid "Under Development"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:87(para)
msgid "2015.2"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:89(para)
msgid "Oct, 2015"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:93(para)
msgid "Kilo"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:96(link)
msgid "Current stable release, security-supported"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:100(link)
msgid "2015.1"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:102(para)
msgid "Apr 30, 2015"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:105(para)
msgid "Juno"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:107(link)
msgid "Security-supported"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:111(link)
msgid "2014.2"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:113(para)
msgid "Oct 16, 2014"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:117(para)
msgid "Icehouse"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:119(link) ./doc/openstack-ops/app_roadmaps.xml:151(para) ./doc/openstack-ops/app_roadmaps.xml:197(para) ./doc/openstack-ops/app_roadmaps.xml:246(para) ./doc/openstack-ops/app_roadmaps.xml:289(para)
msgid "End-of-life"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:122(link)
msgid "2014.1"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:124(para)
msgid "Apr 17, 2014"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:129(link)
msgid "2014.1.1"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:131(para)
msgid "Jun 9, 2014"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:136(link)
msgid "2014.1.2"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:138(para)
msgid "Aug 8, 2014"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:143(link)
msgid "2014.1.3"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:145(para)
msgid "Oct 2, 2014"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:149(para) ./doc/openstack-ops/section_arch_example-nova.xml:81(para) ./doc/openstack-ops/section_arch_example-neutron.xml:57(para)
msgid "Havana"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:153(link)
msgid "2013.2"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:156(para) ./doc/openstack-ops/app_roadmaps.xml:202(para)
msgid "Apr 4, 2013"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:162(link) ./doc/openstack-ops/app_roadmaps.xml:190(link)
msgid "2013.2.1"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:164(para) ./doc/openstack-ops/app_roadmaps.xml:192(para)
msgid "Dec 16, 2013"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:169(link)
msgid "2013.2.2"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:171(para)
msgid "Feb 13, 2014"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:176(link)
msgid "2013.2.3"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:178(para)
msgid "Apr 3, 2014"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:183(link)
msgid "2013.2.4"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:185(para)
msgid "Sep 22, 2014"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:195(para)
msgid "Grizzly"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:199(link)
msgid "2013.1"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:208(link)
msgid "2013.1.1"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:210(para)
msgid "May 9, 2013"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:216(link)
msgid "2013.1.2"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:218(para)
msgid "Jun 6, 2013"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:224(link)
msgid "2013.1.3"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:226(para)
msgid "Aug 8, 2013"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:232(link)
msgid "2013.1.4"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:234(para)
msgid "Oct 17, 2013"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:239(link)
msgid "2013.1.5"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:241(para)
msgid "Mar 20, 2015"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:244(para)
msgid "Folsom"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:248(link)
msgid "2012.2"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:251(para)
msgid "Sep 27, 2012"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:257(link)
msgid "2012.2.1"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:259(para)
msgid "Nov 29, 2012"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:265(link)
msgid "2012.2.2"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:267(para)
msgid "Dec 13, 2012"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:273(link)
msgid "2012.2.3"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:275(para)
msgid "Jan 31, 2013"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:281(link)
msgid "2012.2.4"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:283(para)
msgid "Apr 11, 2013"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:287(para)
msgid "Essex"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:291(link)
msgid "2012.1"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:294(para)
msgid "Apr 5, 2012"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:300(link)
msgid "2012.1.1"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:302(para)
msgid "Jun 22, 2012"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:308(link)
msgid "2012.1.2"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:310(para)
msgid "Aug 10, 2012"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:316(link)
msgid "2012.1.3"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:318(para)
msgid "Oct 12, 2012"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:322(para)
msgid "Diablo"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:324(para) ./doc/openstack-ops/app_roadmaps.xml:343(para) ./doc/openstack-ops/app_roadmaps.xml:354(para) ./doc/openstack-ops/app_roadmaps.xml:365(para)
msgid "Deprecated"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:326(link)
msgid "2011.3"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:329(para)
msgid "Sep 22, 2011"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:335(link)
msgid "2011.3.1"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:337(para)
msgid "Jan 19, 2012"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:341(para)
msgid "Cactus"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:345(link)
msgid "2011.2"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:348(para)
msgid "Apr 15, 2011"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:352(para)
msgid "Bexar"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:356(link)
msgid "2011.1"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:359(para)
msgid "Feb 3, 2011"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:363(para)
msgid "Austin"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:367(link)
msgid "2010.1"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:370(para)
msgid "Oct 21, 2010"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:375(para)
msgid "Here are some other resources:"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:379(link)
msgid "A breakdown of current features under development, with their target milestone"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:385(link)
msgid "A list of all features, including those not yet under development"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:390(link)
msgid "Rough-draft design discussions (\"etherpads\") from the last design summit"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:395(link)
msgid "List of individual code changes under review"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:402(title)
msgid "Influencing the Roadmap"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:404(para)
msgid "OpenStack truly welcomes your ideas (and contributions) and highly values feedback from real-world users of the software. By learning a little about the process that drives feature development, you can participate and perhaps get the additions you desire.<indexterm class=\"singular\"><primary>OpenStack community</primary><secondary>working with roadmaps</secondary><tertiary>influencing</tertiary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:416(para)
msgid "Feature requests typically start their life in Etherpad, a collaborative editing tool, which is used to take coordinating notes at a design summit session specific to the feature. This then leads to the creation of a blueprint on the Launchpad site for the particular project, which is used to describe the feature more formally. Blueprints are then approved by project team members, and development can begin."
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:423(para)
msgid "Therefore, the fastest way to get your feature request up for consideration is to create an Etherpad with your ideas and propose a session to the design summit. If the design summit has already passed, you may also create a blueprint directly. Read this <link href=\"http://vmartinezdelacruz.com/how-to-work-with-blueprints-without-losing-your-mind/\">blog post about how to work with blueprints</link> from the perspective of Victoria Martínez, a developer intern."
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:431(para)
msgid "The roadmap for the next release as it is developed can be seen at <link href=\"http://status.openstack.org/release/\">Releases</link>."
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:434(para)
msgid "To determine the potential features going into future releases, or to look at features implemented previously, take a look at the existing blueprints such as <link href=\"https://blueprints.launchpad.net/nova\">OpenStack Compute (nova) Blueprints</link>, <link href=\"https://blueprints.launchpad.net/keystone\">OpenStack Identity (keystone) Blueprints</link>, and release notes."
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:441(para)
msgid "Aside from the direct-to-blueprint pathway, there is another very well-regarded mechanism to influence the development roadmap: the user survey. Found at <link href=\"http://openstack.org/user-survey\"/>, it allows you to provide details of your deployments and needs, anonymously by default. Each cycle, the user committee analyzes the results and produces a report, including providing specific information to the technical committee and technical leads of the projects."
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:452(title)
msgid "Aspects to Watch"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:454(para)
msgid "You want to keep an eye on the areas improving within OpenStack. The best way to \"watch\" roadmaps for each project is to look at the blueprints that are being approved for work on milestone releases. You can also learn from PTL webinars that follow the OpenStack summits twice a year.<indexterm class=\"startofrange\" xml:id=\"OSaspect\"><primary>OpenStack community</primary><secondary>working with roadmaps</secondary><tertiary>aspects to watch</tertiary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:467(title)
msgid "Driver Quality Improvements"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:469(para)
msgid "A major quality push has occurred across drivers and plug-ins in Block Storage, Compute, and Networking. In particular, developers of Compute and Networking drivers that depend on proprietary software or specific hardware products are now required to provide an automated external testing system for use during the development process."
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:477(title)
msgid "Easier Upgrades"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:479(para)
msgid "One of the most requested features since OpenStack began (for components other than Object Storage, which tends to \"just work\"): easier upgrades. In all recent releases, internal messaging communication is versioned, meaning services can theoretically drop back to backward-compatible behavior. This allows you to run later versions of some components while keeping older versions of others."
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:486(para)
msgid "In addition, database migrations are now tested with the Turbo Hipster tool. This tool tests database migration performance on copies of real-world user databases."
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:490(para)
msgid "These changes have facilitated the first proper OpenStack upgrade guide, found in <xref linkend=\"ch_ops_upgrades\"/>, and will continue to improve in Kilo.<indexterm class=\"singular\"><primary>Kilo</primary><secondary>upgrades in</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:500(title)
msgid "Deprecation of Nova Network"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:502(para)
msgid "With the introduction of the full software-defined networking stack provided by OpenStack Networking (neutron) in the Folsom release, development effort on the initial networking code that remains part of the Compute component has gradually lessened. While many still use <literal>nova-network</literal> in production, there has been a long-term plan to remove the code in favor of the more flexible and full-featured OpenStack Networking.<indexterm class=\"singular\"><primary>nova</primary><secondary>deprecation of</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:514(para)
msgid "An attempt was made to deprecate <literal>nova-network</literal> during the Havana release, which was aborted due to the lack of equivalent functionality (such as the FlatDHCP multi-host high-availability mode mentioned in this guide), lack of a migration path between versions, insufficient testing, and the simplicity of <literal>nova-network</literal> for the more straightforward use cases it traditionally supported. Though significant effort has been made to address these concerns, <literal>nova-network</literal> was not deprecated in the Juno release. In addition, to a limited degree, patches to <literal>nova-network</literal> have again begun to be accepted, such as adding a per-network settings feature and SR-IOV support in Juno.<indexterm class=\"singular\"><primary>Juno</primary><secondary>nova network deprecation</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:531(para)
msgid "This leaves you with an important point of decision when designing your cloud. OpenStack Networking is robust enough to use with a small number of limitations (performance issues in some scenarios, only basic high availability of layer 3 systems) and provides many more features than <literal>nova-network</literal>. However, if you do not have the more complex use cases that can benefit from fuller software-defined networking capabilities, or are uncomfortable with the new concepts introduced, <literal>nova-network</literal> may continue to be a viable option for the next 12 months."
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:541(para)
msgid "Similarly, if you have an existing cloud and are looking to upgrade from <literal>nova-network</literal> to OpenStack Networking, you should have the option to delay the upgrade for this period of time. However, each release of OpenStack brings significant new innovation, and regardless of your networking methodology, it is likely best to begin planning for an upgrade within a reasonable timeframe of each release."
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:549(para)
msgid "As mentioned, there's currently no way to cleanly migrate from <literal>nova-network</literal> to neutron. We recommend that you keep a future migration in mind, and consider what that process might involve, for when a proper migration path is released. Current thinking is that upgrades without instance downtime will be available in the Kilo release."
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:558(title)
msgid "Distributed Virtual Router"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:560(para)
msgid "One of the long-time complaints surrounding OpenStack Networking was the lack of high availability for the layer 3 components. The Juno release introduced Distributed Virtual Router (DVR), which aims to solve this problem."
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:564(para)
msgid "Early indications are that it does do this well for a base set of scenarios, such as using the ML2 plug-in with Open vSwitch, one flat external network and VXLAN tenant networks. However, it does appear that there are problems with the use of VLANs, IPv6, floating IPs, high north-south traffic scenarios, and large numbers of compute nodes. It is expected that these will improve significantly with the Kilo release, but bug reports on specific issues are highly desirable."
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:575(title)
msgid "Replacement of Open vSwitch Plug-in with <phrase role=\"keep-together\">Modular Layer 2</phrase>"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:578(para)
msgid "The Modular Layer 2 plug-in is a framework allowing OpenStack Networking to simultaneously utilize the variety of layer-2 networking technologies found in complex real-world data centers. It currently works with the existing Open vSwitch, Linux Bridge, and Hyper-V L2 agents and is intended to replace and deprecate the monolithic plug-ins associated with those L2 agents."
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:587(title)
msgid "New API Versions"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:589(para)
msgid "The third version of the Compute API was broadly discussed and worked on during the Havana and Icehouse release cycles. Current discussions indicate that the V2 API will remain for many releases, and the next iteration of the API will be denoted v2.1 and have similar properties to the existing v2.0, rather than an entirely new v3 API. This is a great time to evaluate all APIs and provide comments while the next generation APIs are being defined. A new working group was formed specifically to <link href=\"https://wiki.openstack.org/wiki/API_Working_Group\">improve OpenStack APIs</link> and create design guidelines, which you are welcome to join. <indexterm class=\"singular\"><primary>Kilo</primary><secondary>Compute V3 API</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:606(title)
msgid "OpenStack on OpenStack (TripleO)"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:608(para)
msgid "This project continues to improve and you may consider using it for greenfield <phrase role=\"keep-together\">deployments</phrase>, though according to the latest user survey results it has yet to see widespread uptake."
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:615(title)
msgid "Data Processing (Sahara)"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:617(para)
msgid "A much-requested answer to big data problems, this Hadoop-as-a-Service project has seen solid progress from a dedicated team."
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:622(title)
msgid "Bare-Metal Deployment (Ironic)"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:624(para)
msgid "Bare-metal deployment has been widely lauded, and development continues. The Juno release brought the Ironic driver into the Compute project, and it aims to deprecate the existing bare-metal driver in Kilo. If you are a current user of the bare-metal driver, a particular blueprint to follow is <link href=\"https://blueprints.launchpad.net/nova/+spec/deprecate-baremetal-driver\"> Deprecate the Baremetal driver</link>.<indexterm class=\"singular\"><primary>Kilo</primary><secondary>Compute bare-metal deployment</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:638(title)
msgid "Database as a Service (Trove)"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:640(para)
msgid "The OpenStack community has had a database-as-a-service tool in development for some time, and we saw the first integrated release of it in Icehouse. From its first release, it was able to deploy database servers out of the box in a highly available way, initially supporting only MySQL. Juno introduced support for Mongo (including clustering), PostgreSQL, and Couchbase, in addition to replication functionality for MySQL. In Kilo, more advanced clustering capability was delivered, in addition to better integration with other OpenStack components such as networking. <indexterm class=\"singular\"><primary>Juno</primary><secondary>database-as-a-service tool</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:656(title)
msgid "Messaging as a Service (Zaqar)"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:658(para)
msgid "A service to provide queues of messages and notifications has entered “incubation,” meaning if the upcoming development cycles are successful, it will be released in <phrase role=\"keep-together\">Kilo</phrase>."
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:665(title)
msgid "DNS as a Service (Designate)"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:667(para)
msgid "A long-requested service, providing the ability to manipulate DNS entries associated with OpenStack resources, has gathered a following. The Designate project has entered “incubation,” meaning if the upcoming development cycles are successful, it could be released in <phrase role=\"keep-together\">Kilo</phrase>."
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:674(title)
msgid "Scheduler Improvements"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:676(para)
msgid "Both Compute and Block Storage rely on schedulers to determine where to place virtual machines or volumes. In Havana, the Compute scheduler underwent significant improvement, while in Icehouse it was the scheduler in Block Storage that received a boost. Further down the track, an effort that started this cycle aims to create a holistic scheduler covering both. Some of the work done in Kilo can be found under the <link href=\"https://wiki.openstack.org/wiki/Gantt/kilo\">Gantt project</link>.<indexterm class=\"singular\"><primary>Kilo</primary><secondary>scheduler improvements</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:690(title)
msgid "Block Storage Improvements"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:692(para)
msgid "Block Storage is now considered a stable project, with wide uptake and a long track record of quality drivers. The team still discussed many areas of work at the Kilo summit, including better error reporting, automated discovery, and thin provisioning features."
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:700(title)
msgid "Toward a Python SDK"
msgstr ""
#: ./doc/openstack-ops/app_roadmaps.xml:702(para)
msgid "Though many successfully use the various python-*client code as an effective SDK for interacting with OpenStack, consistency between the projects and documentation availability waxes and wanes. To combat this, an <link href=\"https://wiki.openstack.org/wiki/PythonOpenStackSDK\">effort to improve the experience</link> has started. Cross-project development efforts in OpenStack have a checkered history, such as the <link href=\"https://wiki.openstack.org/wiki/OpenStackClient\"> unified client project</link> having several false starts. However, the early signs for the SDK project are promising, and we expect to see results during the Juno cycle.<indexterm class=\"endofrange\" startref=\"OSaspect\"/>"
msgstr ""
#: ./doc/openstack-ops/bk_ops_guide.xml:16(title)
msgid "OpenStack Operations Guide"
msgstr ""
#: ./doc/openstack-ops/bk_ops_guide.xml:18(titleabbrev)
msgid "OpenStack Ops Guide"
msgstr ""
#: ./doc/openstack-ops/bk_ops_guide.xml:26(orgname) ./doc/openstack-ops/bk_ops_guide.xml:32(holder)
msgid "OpenStack Foundation"
msgstr ""
#: ./doc/openstack-ops/bk_ops_guide.xml:31(year)
msgid "2014"
msgstr ""
#: ./doc/openstack-ops/bk_ops_guide.xml:34(productname) ./doc/openstack-ops/ch_ops_resources.xml:13(title)
msgid "OpenStack"
msgstr ""
#: ./doc/openstack-ops/bk_ops_guide.xml:38(remark)
msgid "Copyright details are filled in by the template."
msgstr ""
#: ./doc/openstack-ops/bk_ops_guide.xml:43(para)
msgid "This book provides information about designing and operating OpenStack clouds."
msgstr ""
#: ./doc/openstack-ops/part_operations.xml:9(title)
msgid "Operations"
msgstr ""
#: ./doc/openstack-ops/part_operations.xml:12(para)
msgid "Congratulations! By now, you should have a solid design for your cloud. We now recommend that you turn to the OpenStack Installation Guide (<link href=\"http://docs.openstack.org/havana/install-guide/install/apt/\"/> for Ubuntu, for example), which contains a step-by-step guide on how to manually install the OpenStack packages and dependencies on your cloud."
msgstr ""
#: ./doc/openstack-ops/part_operations.xml:18(para)
msgid "While it is important for an operator to be familiar with the steps involved in deploying OpenStack, we also strongly encourage you to evaluate configuration-management tools, such as <glossterm>Puppet</glossterm> or <glossterm>Chef</glossterm>, which can help automate this deployment process.<indexterm class=\"singular\"><primary>Chef</primary></indexterm><indexterm class=\"singular\"><primary>Puppet</primary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/part_operations.xml:28(para)
msgid "In the remainder of this guide, we assume that you have successfully deployed an OpenStack cloud and are able to perform basic operations such as adding images, booting instances, and attaching volumes."
msgstr ""
#: ./doc/openstack-ops/part_operations.xml:32(para)
msgid "As your focus turns to stable operations, we recommend that you do skim the remainder of this book to get a sense of the content. Some of this content is useful to read in advance so that you can put best practices into effect to simplify your life in the long run. Other content is more useful as a reference that you might turn to when an unexpected event occurs (such as a power failure), or to troubleshoot a particular problem."
msgstr ""
#. When image changes, this message will be marked fuzzy or untranslated for you.
#. It doesn't matter what you translate it to: it's not used at all.
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:88(None)
msgid "@@image: 'http://git.openstack.org/cgit/openstack/operations-guide/plain/doc/openstack-ops/figures/osog_1201.png'; md5=THIS FILE DOESN'T EXIST"
msgstr ""
#. When image changes, this message will be marked fuzzy or untranslated for you.
#. It doesn't matter what you translate it to: it's not used at all.
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:207(None)
msgid "@@image: 'http://git.openstack.org/cgit/openstack/operations-guide/plain/doc/openstack-ops/figures/osog_1202.png'; md5=THIS FILE DOESN'T EXIST"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:12(title)
msgid "Network Troubleshooting"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:14(para)
msgid "Network troubleshooting can unfortunately be a very difficult and confusing procedure. A network issue can cause a problem at several points in the cloud. Using a logical troubleshooting procedure can help mitigate the confusion and more quickly isolate where exactly the network issue is. This chapter aims to give you the information you need to identify any issues for either <literal>nova-network</literal> or OpenStack Networking (neutron) with Linux Bridge or Open vSwitch.<indexterm class=\"singular\"><primary>OpenStack Networking (neutron)</primary><secondary>troubleshooting</secondary></indexterm><indexterm class=\"singular\"><primary>Linux Bridge</primary><secondary>troubleshooting</secondary></indexterm><indexterm class=\"singular\"><primary>network troubleshooting</primary><see>troubleshooting</see></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:35(title)
msgid "Using \"ip a\" to Check Interface States"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:37(para)
msgid "On compute nodes and nodes running <literal>nova-network</literal>, use the following command to see information about interfaces, including information about IPs, VLANs, and whether your interfaces are up:<indexterm class=\"singular\"><primary>ip a command</primary></indexterm><indexterm class=\"singular\"><primary>interface states, checking</primary></indexterm><indexterm class=\"singular\"><primary>troubleshooting</primary><secondary>checking interface states</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:52(para)
msgid "If you're encountering any sort of networking difficulty, one good initial sanity check is to make sure that your interfaces are up. For example:"
msgstr ""
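A representative check, grepping for the state field (interface names, numbering, and states here are illustrative; your hosts will differ):

    # ip a | grep state
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP
    3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP
    4: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
    6: br100: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP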
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:66(para)
msgid "You can safely ignore the state of <literal>virbr0</literal>, which is a default bridge created by libvirt and not used by OpenStack."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:71(title)
msgid "Visualizing nova-network Traffic in the Cloud"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:73(para)
msgid "If you are logged in to an instance and ping an external host—for example, Google—the ping packet takes the route shown in <xref linkend=\"traffic-12-1\"/>.<indexterm class=\"singular\"><primary>ping packets</primary></indexterm><indexterm class=\"singular\"><primary>troubleshooting</primary><secondary>nova-network traffic</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:84(title)
msgid "Traffic route for ping packet"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:95(para)
msgid "The instance generates a packet and places it on the virtual Network Interface Card (NIC) inside the instance, such as <literal>eth0</literal>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:101(para)
msgid "The packet transfers to the virtual NIC of the compute host, such as <literal>vnet1</literal>. You can find out which vnet NIC is being used by looking at the <filename>/etc/libvirt/qemu/instance-xxxxxxxx.xml</filename> file."
msgstr ""
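One way to pull that out of the domain XML; a sketch in which the instance file name and MAC address are hypothetical:

    # grep -A 3 "<interface" /etc/libvirt/qemu/instance-00000001.xml
        <interface type='bridge'>
          <mac address='fa:16:3e:ab:cd:ef'/>
          <source bridge='br100'/>
          <target dev='vnet1'/>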
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:109(para)
msgid "From the vnet NIC, the packet transfers to a bridge on the compute node, such as <code>br100</code>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:112(para)
msgid "If you run FlatDHCPManager, one bridge is on the compute node. If you run VlanManager, one bridge exists for each VLAN."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:115(para)
msgid "To see which bridge the packet will use, run the command: <placeholder-1/>"
msgstr ""
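The screen block is elided here; a minimal sketch of such a bridge listing with brctl (the bridge ID and member interfaces are illustrative):

    # brctl show
    bridge name     bridge id               STP enabled     interfaces
    br100           8000.ba4104f4047a       no              eth0
                                                            vnet1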
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:118(para)
msgid "Look for the vnet NIC. You can also reference <filename>nova.conf</filename> and look for the <code>flat_interface_bridge</code> option."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:124(para)
msgid "The packet transfers to the main NIC of the compute node. You can also see this NIC in the <literal>brctl</literal> output, or you can find it by referencing the <literal>flat_interface</literal> option in <filename>nova.conf</filename>."
msgstr ""
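Both options can be checked in one pass on the compute node; a sketch with illustrative values (note that, depending on your nova release, the bridge option may be spelled flat_network_bridge rather than flat_interface_bridge):

    # grep flat_ /etc/nova/nova.conf
    flat_interface=eth0
    flat_network_bridge=br100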
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:131(para)
msgid "After the packet is on this NIC, it transfers to the compute node's default gateway. The packet is most likely out of your control at this point. The diagram depicts an external gateway. However, in the default configuration with multi-host, the compute host is the gateway."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:139(para)
msgid "Reverse the direction to see the path of a ping reply. From this path, you can see that a single packet travels across four different NICs. If a problem occurs with any of these NICs, a network issue occurs."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:145(title)
msgid "Visualizing OpenStack Networking Service Traffic in the Cloud"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:148(para)
msgid "The OpenStack Networking Service, neutron, has many more degrees of freedom than <literal>nova-network</literal> does because of its pluggable backend. It can be configured with open source or vendor proprietary plug-ins that control software-defined networking (SDN) hardware or plug-ins that use Linux native facilities on your hosts, such as Open vSwitch or Linux Bridge.<indexterm class=\"startofrange\" xml:id=\"Topen\"><primary>troubleshooting</primary><secondary>OpenStack traffic</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:159(para)
msgid "The networking chapter of the OpenStack <link href=\"http://docs.openstack.org/admin-guide-cloud/content/ch_networking.html\" title=\"Cloud Administrator Guide\">Cloud Administrator Guide</link> shows a variety of networking scenarios and their connection paths. The purpose of this section is to give you the tools to troubleshoot the various components involved however they are plumbed together in your environment."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:167(para)
msgid "For this example, we will use the Open vSwitch (OVS) backend. Other backend plug-ins will have very different flow paths. OVS is the most widely deployed network driver, according to the October 2013 OpenStack User Survey, with 50 percent more sites using it than the second-place Linux Bridge driver. We'll describe each step in turn, with <xref linkend=\"neutron-packet-ping\"/> for reference."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:176(para)
msgid "The instance generates a packet and places it on the virtual NIC inside the instance, such as eth0."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:181(para)
msgid "The packet transfers to a Test Access Point (TAP) device on the compute host, such as tap690466bc-92. You can find out which TAP device is being used by looking at the <filename>/etc/libvirt/qemu/instance-xxxxxxxx.xml</filename> file."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:187(para)
msgid "The TAP device name is constructed using the first 11 characters of the port ID (10 hex digits plus an included '-'), so another means of finding the device name is to use the <literal>neutron</literal> command. This returns a pipe-delimited list, the first item of which is the port ID. For example, to get the port ID associated with IP address 10.0.0.10, do this:"
msgstr ""
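A sketch of that lookup (only the ff387e54-9e prefix of the port ID comes from the text below; the remainder of the UUID, the MAC, and the subnet ID are illustrative):

    # neutron port-list | grep 10.0.0.10
    | ff387e54-9e54-4071-81be-eeb21552dd35 | | fa:16:3e:d8:29:3a | {"subnet_id": "...", "ip_address": "10.0.0.10"} |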
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:197(para)
msgid "Taking the first 11 characters, we can construct a device name of tapff387e54-9e from this output."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:203(title)
msgid "Neutron network paths"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:214(para)
msgid "The TAP device is connected to the integration bridge, <code>br-int</code>. This bridge connects all the instance TAP devices and any other bridges on the system. In this example, we have <code>int-br-eth1</code> and <code>patch-tun</code>. <code>int-br-eth1</code> is one half of a veth pair connecting to the bridge <code>br-eth1</code>, which handles VLAN networks trunked over the physical Ethernet device <code>eth1</code>. <code>patch-tun</code> is an Open vSwitch internal port that connects to the <code>br-tun</code> bridge for GRE networks."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:224(para)
msgid "The TAP devices and veth devices are normal Linux network devices and may be inspected with the usual tools, such as <literal>ip</literal> and <literal>tcpdump</literal>. Open vSwitch internal devices, such as <code>patch-tun</code>, are only visible within the Open vSwitch environment. If you try to run <literal>tcpdump -i patch-tun</literal>, it will raise an error, saying that the device does not exist."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:232(para)
msgid "It is possible to watch packets on internal interfaces, but it does take a little bit of networking gymnastics. First you need to create a dummy network device that normal Linux tools can see. Then you need to add it to the bridge containing the internal interface you want to snoop on. Finally, you need to tell Open vSwitch to mirror all traffic to or from the internal port onto this dummy port. After all this, you can then run <literal>tcpdump</literal> on the dummy interface and see the traffic on the internal port."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:242(title)
msgid "To capture packets from the <code>patch-tun</code> internal interface on integration bridge, <code>br-int</code>:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:246(para)
msgid "Create and bring up a dummy interface, <code>snooper0</code>:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:256(para)
msgid "Add device <code>snooper0</code> to bridge <code>br-int</code>:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:264(para)
msgid "Create mirror of <code>patch-tun</code> to <code>snooper0</code> (returns UUID of mirror port):"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:274(para)
msgid "Profit. You can now see traffic on <code>patch-tun</code> by running <literal>tcpdump -i snooper0</literal>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:279(para)
msgid "Clean up by clearing all mirrors on <code>br-int</code> and deleting the dummy interface:"
msgstr ""
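Pulled together, the whole procedure looks roughly like this; a sketch using the standard Open vSwitch mirroring recipe, where the mirror name mymirror is arbitrary:

    # ip link add name snooper0 type dummy
    # ip link set dev snooper0 up
    # ovs-vsctl add-port br-int snooper0
    # ovs-vsctl -- set Bridge br-int mirrors=@m \
        -- --id=@snooper0 get Port snooper0 \
        -- --id=@patch-tun get Port patch-tun \
        -- --id=@m create Mirror name=mymirror \
           select-dst-port=@patch-tun select-src-port=@patch-tun \
           output-port=@snooper0
    # tcpdump -i snooper0
    # ovs-vsctl clear Bridge br-int mirrors
    # ovs-vsctl del-port br-int snooper0
    # ip link delete dev snooper0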
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:291(para)
msgid "On the integration bridge, networks are distinguished using internal VLANs regardless of how the networking service defines them. This allows instances on the same host to communicate directly without transiting the rest of the virtual, or physical, network. These internal VLAN IDs are based on the order they are created on the node and may vary between nodes. These IDs are in no way related to the segmentation IDs used in the network definition and on the physical wire."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:300(para)
msgid "VLAN tags are translated between the external tag defined in the network settings, and internal tags in several places. On the <code>br-int</code>, incoming packets from the <code>int-br-eth1</code> are translated from external tags to internal tags. Other translations also happen on the other bridges and will be discussed in those sections."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:310(title)
msgid "To discover which internal VLAN tag is in use for a given external VLAN by using the <literal>ovs-ofctl</literal> command:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:315(para)
msgid "Find the external VLAN tag of the network you're interested in. This is the <code>provider:segmentation_id</code> as returned by the networking service:"
msgstr ""
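A sketch of that lookup, with other fields trimmed from the output (the network UUID is a placeholder; the 2113 tag matches the example that follows):

    # neutron net-show <network-uuid>
    +---------------------------+-------+
    | Field                     | Value |
    +---------------------------+-------+
    | provider:network_type     | vlan  |
    | provider:segmentation_id  | 2113  |
    +---------------------------+-------+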
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:330(para)
msgid "Grep for the <code>provider:segmentation_id</code>, 2113 in this case, in the output of <literal>ovs-ofctl dump-flows br-int</literal>:"
msgstr ""
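What you are grepping for looks something like this (cookie and counters trimmed; the port, tags, and action mirror the explanation that follows):

    # ovs-ofctl dump-flows br-int | grep vlan=2113
     cookie=0x0, ... priority=3,in_port=1,dl_vlan=2113 actions=mod_vlan_vid:7,NORMAL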
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:340(para)
msgid "Here you can see that packets received on port ID 1 with the VLAN tag 2113 are modified to have the internal VLAN tag 7. Digging a little deeper, you can confirm that port 1 is in fact <code>int-br-eth1</code>:"
msgstr ""
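A trimmed sketch of that port listing (the MAC addresses are illustrative):

    # ovs-ofctl show br-int
    ...
     1(int-br-eth1): addr:c2:72:74:7f:86:08
     2(patch-tun): addr:fa:24:73:75:ad:cd
    ...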
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:378(para)
msgid "The next step depends on whether the virtual network is configured to use 802.1q VLAN tags or GRE:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:383(para)
msgid "VLAN-based networks exit the integration bridge via veth interface <code>int-br-eth1</code> and arrive on the bridge <code>br-eth1</code> on the other member of the veth pair <code>phy-br-eth1</code>. Packets on this interface arrive with internal VLAN tags and are translated to external tags in the reverse of the process described above:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:395(para)
msgid "Packets, now tagged with the external VLAN tag, then exit onto the physical network via <code>eth1</code>. The layer-2 switch this interface is connected to must be configured to accept traffic with the VLAN ID used. The next hop for this packet must also be on the same layer-2 network."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:403(para)
msgid "GRE-based networks pass via <code>patch-tun</code> to the tunnel bridge <code>br-tun</code> on interface <code>patch-int</code>. This bridge also contains one port for each GRE tunnel peer, so one for each compute node and network node in your network. The ports are named sequentially from <code>gre-1</code> onward."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:410(para)
msgid "Matching <code>gre-&lt;n&gt;</code> interfaces to tunnel endpoints is possible by looking at the Open vSwitch state:"
msgstr ""
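A trimmed sketch of that state (the endpoint IPs match the example discussed next):

    # ovs-vsctl show
    ...
        Port "gre-1"
            Interface "gre-1"
                type: gre
                options: {in_key=flow, local_ip="10.10.128.21",
                          out_key=flow, remote_ip="10.10.128.16"}
    ...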
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:421(para)
msgid "In this case, <code>gre-1</code> is a tunnel from IP 10.10.128.21, which should match a local interface on this node, to IP 10.10.128.16 on the remote side."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:425(para)
msgid "These tunnels use the regular routing tables on the host to route the resulting GRE packet, so there is no requirement that GRE endpoints are all on the same layer-2 network, unlike VLAN encapsulation."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:430(para)
msgid "All interfaces on the <code>br-tun</code> are internal to Open vSwitch. To monitor traffic on them, you need to set up a mirror port as described above for <code>patch-tun</code> in the <code>br-int</code> bridge."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:435(para)
msgid "All translation of GRE tunnels to and from internal VLANs happens on this bridge."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:441(title)
msgid "To discover which internal VLAN tag is in use for a GRE tunnel by using the <literal>ovs-ofctl</literal> command:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:445(para)
msgid "Find the <code>provider:segmentation_id</code> of the network you're interested in. This is the same field used for the VLAN ID in VLAN-based networks:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:460(para)
msgid "Grep for 0x&lt;<code>provider:segmentation_id</code>&gt;, 0x3 in this case, in the output of <literal>ovs-ofctl dump-flows br-tun</literal>:"
msgstr ""
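A trimmed sketch of the three flows described below (cookies and counters omitted; the internal VLAN 1, the MAC, and output port 53 come from the discussion that follows, while the additional flood ports are illustrative):

    # ovs-ofctl dump-flows br-tun | grep 0x3
     ... priority=1,tun_id=0x3 actions=mod_vlan_vid:1,resubmit(,10)
     ... priority=2,dl_vlan=1,dl_dst=fa:16:3e:a6:48:24
         actions=strip_vlan,set_tunnel:0x3,output:53
     ... priority=1,dl_vlan=1 actions=strip_vlan,set_tunnel:0x3,
         output:53,output:58,output:60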
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:487(para)
msgid "Here, you see three flows related to this GRE tunnel. The first is the translation from inbound packets with this tunnel ID to internal VLAN ID 1. The second shows a unicast flow to output port 53 for packets destined for MAC address fa:16:3e:a6:48:24. The third shows the translation from the internal VLAN representation to the GRE tunnel ID flooded to all output ports. For further details of the flow descriptions, see the man page for <literal>ovs-ofctl</literal>. As in the previous VLAN example, numeric port IDs can be matched with their named representations by examining the output of <literal>ovs-ofctl show br-tun</literal>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:503(para)
msgid "The packet is then received on the network node. Note that any traffic to the l3-agent or dhcp-agent will be visible only within their network namespace. Watching any interfaces outside those namespaces, even those that carry the network traffic, will only show broadcast packets such as Address Resolution Protocol (ARP) requests; unicast traffic to the router or DHCP address will not be seen. See <link href=\"http://docs.openstack.org/openstack-ops/content/network_troubleshooting.html#dealing_with_netns\">Dealing with Network Namespaces</link> for detail on how to run commands within these namespaces."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:514(para)
msgid "Alternatively, it is possible to configure VLAN-based networks to use external routers rather than the l3-agent shown here, so long as the external router is on the same VLAN:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:520(para)
msgid "VLAN-based networks are received as tagged packets on a physical network interface, <code>eth1</code> in this example. Just as on the compute node, this interface is a member of the <code>br-eth1</code> bridge."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:527(para)
msgid "GRE-based networks will be passed to the tunnel bridge <code>br-tun</code>, which behaves just like the GRE interfaces on the compute node."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:535(para)
msgid "Next, the packets from either input go through the integration bridge, again just as on the compute node."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:540(para)
msgid "The packet then makes it to the l3-agent. This is actually another TAP device within the router's network namespace. Router namespaces are named in the form <code>qrouter-&lt;router-uuid&gt;</code>. Running <literal>ip a</literal> within the namespace will show the TAP device name, qr-e6256f7d-31 in this example:"
msgstr ""
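A trimmed sketch of that command (the qr- device name comes from the text; the router UUID is a placeholder, and the interface index and addresses are illustrative):

    # ip netns exec qrouter-<router-uuid> ip a
    ...
    10: qr-e6256f7d-31: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
        inet 10.0.0.1/24 brd 10.0.0.255 scope global qr-e6256f7d-31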
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:557(para)
msgid "The <code>qg-&lt;n&gt;</code> interface in the l3-agent router namespace sends the packet on to its next hop through device <code>eth2</code> on the external bridge <code>br-ex</code>. This bridge is constructed similarly to <code>br-eth1</code> and may be inspected in the same way."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:565(para)
msgid "This external bridge also includes a physical network interface, <code>eth2</code> in this example, which finally lands the packet on the external network destined for an external router or destination."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:572(para)
msgid "DHCP agents running on OpenStack networks run in namespaces similar to the l3-agents. DHCP namespaces are named <code>qdhcp-&lt;uuid&gt;</code> and have a TAP device on the integration bridge. Debugging of DHCP issues usually involves working inside this network namespace.<indexterm class=\"endofrange\" startref=\"Topen\"/>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:583(title)
msgid "Finding a Failure in the Path"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:585(para)
msgid "Use ping to quickly find where a failure exists in the network path. In an instance, first see whether you can ping an external host, such as google.com. If you can, then there shouldn't be a network problem at all."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:590(para)
msgid "If you can't, try pinging the IP address of the compute node where the instance is hosted. If you can ping this IP, then the problem is somewhere between the compute node and that compute node's gateway."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:594(para)
msgid "If you can't ping the IP address of the compute node, the problem is between the instance and the compute node. This includes the bridge connecting the compute node's main NIC with the vnet NIC of the instance."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:599(para)
msgid "One last test is to launch a second instance and see whether the two instances can ping each other. If they can, the issue might be related to the firewall on the compute node.<indexterm class=\"singular\"><primary>path failures</primary></indexterm><indexterm class=\"singular\"><primary>troubleshooting</primary><secondary>detecting path failures</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:611(title) ./doc/openstack-ops/ch_ops_resources.xml:72(code)
msgid "tcpdump"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:613(para)
msgid "One great, although very in-depth, way of troubleshooting network issues is to use <literal>tcpdump</literal>. We recommend using <literal>tcpdump</literal> at several points along the network path to correlate where a problem might be. If you prefer working with a GUI, either live or by using a <literal>tcpdump</literal> capture, do also check out <link href=\"http://www.wireshark.org/\" title=\"Wireshark\">Wireshark</link>.<indexterm class=\"singular\"><primary>tcpdump</primary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:623(para)
msgid "For example, run the following command:"
msgstr ""
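The command itself is elided here; a representative invocation, assuming standard pcap filter syntax for matching ICMP echo requests and replies:

    # tcpdump -i any -n -v \
      'icmp[icmptype] = icmp-echoreply or icmp[icmptype] = icmp-echo'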
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:628(para)
msgid "Run this on the command line of the following areas:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:632(para)
msgid "An external server outside of the cloud"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:636(para)
msgid "A compute node"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:640(para)
msgid "An instance running on that compute node"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:644(para)
msgid "In this example, these locations have the following IP addresses:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:656(para)
msgid "Next, open a new shell to the instance and then ping the external host where <literal>tcpdump</literal> is running. If the network path to the external server and back is fully functional, you see something like the following:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:661(para)
msgid "On the external server:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:671(para)
msgid "On the compute node:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:692(para)
msgid "On the instance:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:698(para)
msgid "Here, the external server received the ping request and sent a ping reply. On the compute node, you can see that both the ping and ping reply successfully passed through. You might also see duplicate packets on the compute node, as seen above, because <literal>tcpdump</literal> captured the packet on both the bridge and outgoing interface."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:706(title)
msgid "iptables"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:708(para)
msgid "Through <literal>nova-network</literal> or <literal>neutron</literal>, OpenStack Compute automatically manages iptables, including forwarding packets to and from instances on a compute node, forwarding floating IP traffic, and managing security group rules. In addition to managing the rules, comments (if supported) will be inserted in the rules to help indicate the purpose of the rule. <indexterm class=\"singular\"><primary>iptables</primary></indexterm><indexterm class=\"singular\"><primary>troubleshooting</primary><secondary>iptables</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:722(para)
msgid "The following comments are added to the rule set as appropriate:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:726(para)
msgid "Perform source NAT on outgoing traffic."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:729(para)
msgid "Default drop rule for unmatched traffic."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:732(para)
msgid "Direct traffic from the VM interface to the security group chain."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:736(para)
msgid "Jump to the VM specific chain."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:739(para)
msgid "Direct incoming traffic from VM to the security group chain."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:743(para)
msgid "Allow traffic from defined IP/MAC pairs."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:746(para)
msgid "Drop traffic without an IP/MAC allow rule."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:749(para)
msgid "Allow DHCP client traffic."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:752(para)
msgid "Prevent DHCP Spoofing by VM."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:755(para)
msgid "Send unmatched traffic to the fallback chain."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:758(para)
msgid "Drop packets that are not associated with a state."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:761(para)
msgid "Direct packets associated with a known session to the RETURN chain."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:765(para)
msgid "Allow IPv6 ICMP traffic to allow RA packets."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:769(para)
msgid "Run the following command to view the current iptables configuration:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:775(para)
msgid "If you modify the configuration, it reverts the next time you restart <literal>nova-network</literal> or <literal>neutron-server</literal>. You must use OpenStack to manage iptables."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:783(title)
msgid "Network Configuration in the Database for nova-network"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:785(para)
msgid "With <literal>nova-network</literal>, the nova database table contains a few tables with networking information:<indexterm class=\"singular\"><primary>databases</primary><secondary>nova-network troubleshooting</secondary></indexterm><indexterm class=\"singular\"><primary>troubleshooting</primary><secondary>nova-network database</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:799(literal)
msgid "fixed_ips"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:802(para)
msgid "Contains each possible IP address for the subnet(s) added to Compute. This table is related to the <literal>instances</literal> table by way of the <literal>fixed_ips.instance_uuid</literal> column."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:810(literal)
msgid "floating_ips"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:813(para)
msgid "Contains each floating IP address that was added to Compute. This table is related to the <literal>fixed_ips</literal> table by way of the <literal>floating_ips.fixed_ip_id</literal> column."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:821(literal) ./doc/openstack-ops/ch_ops_projects_users.xml:300(systemitem)
msgid "instances"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:824(para)
msgid "Not entirely network specific, but it contains information about the instance that is utilizing the <literal>fixed_ip</literal> and optional <literal>floating_ip</literal>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:831(para)
msgid "From these tables, you can see that a floating IP is technically never directly related to an instance; it must always go through a fixed IP."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:836(title)
msgid "Manually Disassociating a Floating IP"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:838(para)
msgid "Sometimes an instance is terminated but the floating IP was not correctly de-associated from that instance. Because the database is in an inconsistent state, the usual tools to disassociate the IP no longer work. To fix this, you must manually update the database.<indexterm class=\"singular\"><primary>IP addresses</primary><secondary>floating</secondary></indexterm><indexterm class=\"singular\"><primary>floating IP address</primary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:850(para)
msgid "First, find the UUID of the instance in question:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:854(para)
msgid "Next, find the fixed IP entry for that UUID:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:858(para)
msgid "You can now get the related floating IP entry:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:862(para)
msgid "And finally, you can disassociate the floating IP:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:867(para)
msgid "You can optionally also deallocate the IP from the user's pool:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:876(title)
msgid "Debugging DHCP Issues with nova-network"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:878(para)
msgid "One common networking problem is that an instance boots successfully but is not reachable because it failed to obtain an IP address from dnsmasq, which is the DHCP server that is launched by the <literal>nova-network</literal> service.<indexterm class=\"singular\"><primary>DHCP (Dynamic Host Configuration Protocol)</primary><secondary>debugging</secondary></indexterm><indexterm class=\"singular\"><primary>troubleshooting</primary><secondary>nova-network DHCP</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:891(para)
msgid "The simplest way to identify that this is the problem with your instance is to look at the console output of your instance. If DHCP failed, you can retrieve the console log by doing:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:897(para)
msgid "If your instance failed to obtain an IP through DHCP, some messages should appear in the console. For example, for the Cirros image, you see output that looks like the following:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:911(para)
msgid "After you establish that the instance booted properly, the task is to figure out where the failure is."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:914(para)
msgid "A DHCP problem might be caused by a misbehaving dnsmasq process. First, debug by checking logs and then restart the dnsmasq processes only for that project (tenant). In VLAN mode, there is a dnsmasq process for each tenant. Once you have restarted targeted dnsmasq processes, the simplest way to rule out dnsmasq causes is to kill all of the dnsmasq processes on the machine and restart <literal>nova-network</literal>. As a last resort, do this as root:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:926(para)
msgid "Use <literal>openstack-nova-network</literal> on RHEL/CentOS/Fedora but <literal>nova-network</literal> on Ubuntu/Debian."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:931(para)
msgid "Several minutes after <literal>nova-network</literal> is restarted, you should see new dnsmasq processes running:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:955(para)
msgid "If your instances are still not able to obtain IP addresses, the next thing to check is whether dnsmasq is seeing the DHCP requests from the instance. On the machine that is running the dnsmasq process, which is the compute host if running in multi-host mode, look at <literal>/var/log/syslog</literal> to see the dnsmasq output. If dnsmasq is seeing the request properly and handing out an IP, the output looks like this:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:971(para)
msgid "If you do not see the <literal>DHCPDISCOVER</literal>, a problem exists with the packet getting from the instance to the machine running dnsmasq. If you see all of the preceding output and your instances are still not able to obtain IP addresses, then the packet is able to get from the instance to the host running dnsmasq, but it is not able to make the return trip."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:978(para)
msgid "You might also see a message such as this:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:983(para)
msgid "This may be a dnsmasq and/or <literal>nova-network</literal> related issue. (For the preceding example, the problem happened to be that dnsmasq did not have any more IP addresses to give away because there were no more fixed IPs available in the OpenStack Compute database.)"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:988(para)
msgid "If there's a suspicious-looking dnsmasq log message, take a look at the command-line arguments to the dnsmasq processes to see if they look correct:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:994(para)
msgid "The output looks something like the following:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:1023(para)
msgid "The output shows three different dnsmasq processes. The dnsmasq process that has the DHCP subnet range of 192.168.122.0 belongs to libvirt and can be ignored. The other two dnsmasq processes belong to <literal>nova-network</literal>. The two processes are actually related—one is simply the parent process of the other. The arguments of the dnsmasq processes should correspond to the details you configured <literal>nova-network</literal> with."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:1031(para)
msgid "If the problem does not seem to be related to dnsmasq itself, at this point use <code>tcpdump</code> on the interfaces to determine where the packets are getting lost."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:1035(para)
msgid "DHCP traffic uses UDP. The client sends from port 68 to port 67 on the server. Try to boot a new instance and then systematically listen on the NICs until you identify the one that isn't seeing the traffic. To use <code>tcpdump</code> to listen to ports 67 and 68 on br100, you would do:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:1043(para)
msgid "You should be doing sanity checks on the interfaces using command such as <code>ip a</code> and <code>brctl show</code> to ensure that the interfaces are actually up and configured the way that you think that they are."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:1050(title)
msgid "Debugging DNS Issues"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:1052(para)
msgid "If you are able to use SSH to log into an instance, but it takes a very long time (on the order of a minute) to get a prompt, then you might have a DNS issue. The reason a DNS issue can cause this problem is that the SSH server does a reverse DNS lookup on the IP address that you are connecting from. If DNS lookup isn't working on your instances, then you must wait for the DNS reverse lookup timeout to occur for the SSH login process to complete.<indexterm class=\"singular\"><primary>DNS (Domain Name Server, Service or System)</primary><secondary>debugging</secondary></indexterm><indexterm class=\"singular\"><primary>troubleshooting</primary><secondary>DNS issues</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:1068(para)
msgid "When debugging DNS issues, start by making sure that the host where the dnsmasq process for that instance runs is able to correctly resolve. If the host cannot resolve, then the instances won't be able to either."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:1073(para)
msgid "A quick way to check whether DNS is working is to resolve a hostname inside your instance by using the <code>host</code> command. If DNS is working, you should see:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:1082(para)
msgid "If you're running the Cirros image, it doesn't have the \"host\" program installed, in which case you can use ping to try to access a machine by hostname to see whether it resolves. If DNS is working, the first line of ping would be:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:1090(para)
msgid "If the instance fails to resolve the hostname, you have a DNS problem. For example:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:1096(para)
msgid "In an OpenStack cloud, the dnsmasq process acts as the DNS server for the instances in addition to acting as the DHCP server. A misbehaving dnsmasq process may be the source of DNS-related issues inside the instance. As mentioned in the previous section, the simplest way to rule out a misbehaving dnsmasq process is to kill all the dnsmasq processes on the machine and restart <literal>nova-network</literal>. However, be aware that this command affects everyone running instances on this node, including tenants that have not seen the issue. As a last resort, as root:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:1109(para)
msgid "After the dnsmasq processes start again, check whether DNS is working."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:1112(para)
msgid "If restarting the dnsmasq process doesn't fix the issue, you might need to use <code>tcpdump</code> to look at the packets to trace where the failure is. The DNS server listens on UDP port 53. You should see the DNS request on the bridge (such as, br100) of your compute node. Let's say you start listening with <code>tcpdump</code> on the compute node:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:1122(para)
msgid "Then, if you use SSH to log into your instance and try <code>ping openstack.org</code>, you should see something like:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:1135(title)
msgid "Troubleshooting Open vSwitch"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:1137(para)
msgid "Open vSwitch, as used in the previous OpenStack Networking Service examples is a full-featured multilayer virtual switch licensed under the open source Apache 2.0 license. Full documentation can be found at <link href=\"http://openvswitch.org/\">the project's website</link>. In practice, given the preceding configuration, the most common issues are being sure that the required bridges (<code>br-int</code>, <code>br-tun</code>, and <code>br-ex</code>) exist and have the proper ports connected to them.<indexterm class=\"singular\"><primary>Open vSwitch</primary><secondary>troubleshooting</secondary></indexterm><indexterm class=\"singular\"><primary>troubleshooting</primary><secondary>Open vSwitch</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:1154(para)
msgid "The Open vSwitch driver should and usually does manage this automatically, but it is useful to know how to do this by hand with the <literal>ovs-vsctl</literal> command. This command has many more subcommands than we will use here; see the man page or use <literal>ovs-vsctl --help</literal> for the full listing."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:1160(para)
msgid "To list the bridges on a system, use <literal>ovs-vsctl list-br</literal>. This example shows a compute node that has an internal bridge and a tunnel bridge. VLAN networks are trunked through the <code>eth1</code> network interface:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:1171(para)
msgid "Working from the physical interface inwards, we can see the chain of ports and bridges. First, the bridge <code>eth1-br</code>, which contains the physical network interface <literal>eth1</literal> and the virtual interface <code>phy-eth1-br</code>:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:1181(para)
msgid "Next, the internal bridge, <code>br-int</code>, contains <code>int-eth1-br</code>, which pairs with <code>phy-eth1-br</code> to connect to the physical network shown in the previous bridge, <code>patch-tun</code>, which is used to connect to the GRE tunnel bridge and the TAP devices that connect to the instances currently running on the system:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:1196(para)
msgid "The tunnel bridge, <code>br-tun</code>, contains the <code>patch-int</code> interface and <code>gre-&lt;N&gt;</code> interfaces for each peer it connects to via GRE, one for each compute and network node in your cluster:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:1210(para)
msgid "If any of these links is missing or incorrect, it suggests a configuration error. Bridges can be added with <literal>ovs-vsctl add-br</literal>, and ports can be added to bridges with <literal>ovs-vsctl add-port</literal>. While running these by hand can be useful debugging, it is imperative that manual changes that you intend to keep be reflected back into your configuration files."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:1219(title)
msgid "Dealing with Network Namespaces"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:1221(para)
msgid "Linux network namespaces are a kernel feature the networking service uses to support multiple isolated layer-2 networks with overlapping IP address ranges. The support may be disabled, but it is on by default. If it is enabled in your environment, your network nodes will run their dhcp-agents and l3-agents in isolated namespaces. Network interfaces and traffic on those interfaces will not be visible in the default namespace.<indexterm class=\"singular\"><primary>network namespaces, troubleshooting</primary></indexterm><indexterm class=\"singular\"><primary>namespaces, troubleshooting</primary></indexterm><indexterm class=\"singular\"><primary>troubleshooting</primary><secondary>network namespaces</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:1237(para)
msgid "To see whether you are using namespaces, run <literal>ip netns</literal>:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:1248(para)
msgid "L3-agent router namespaces are named <literal>qrouter-<replaceable>&lt;router_uuid&gt;</replaceable></literal>, and dhcp-agent name spaces are named <literal>qdhcp-</literal><literal><replaceable>&lt;net_uuid&gt;</replaceable></literal>. This output shows a network node with four networks running dhcp-agents, one of which is also running an l3-agent router. It's important to know which network you need to be working in. A list of existing networks and their UUIDs can be obtained by running <literal>neutron net-list</literal> with administrative credentials."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:1258(para)
msgid "Once you've determined which namespace you need to work in, you can use any of the debugging tools mention earlier by prefixing the command with <literal>ip netns exec &lt;namespace&gt;</literal>. For example, to see what network interfaces exist in the first qdhcp namespace returned above, do this:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:1278(para)
msgid "From this you see that the DHCP server on that network is using the tape6256f7d-31 device and has an IP address of 10.0.1.100. Seeing the address 169.254.169.254, you can also see that the dhcp-agent is running a metadata-proxy service. Any of the commands mentioned previously in this chapter can be run in the same way. It is also possible to run a shell, such as <literal>bash</literal>, and have an interactive session within the namespace. In the latter case, exiting the shell returns you to the top-level default namespace."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:1289(title) ./doc/openstack-ops/ch_ops_projects_users.xml:1094(title) ./doc/openstack-ops/ch_ops_lay_of_land.xml:805(title) ./doc/openstack-ops/ch_ops_backup_recovery.xml:281(title) ./doc/openstack-ops/ch_ops_log_monitor.xml:996(title)
msgid "Summary"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:1291(para)
msgid "The authors have spent too much time looking at packet dumps in order to distill this information for you. We trust that, following the methods outlined in this chapter, you will have an easier time! Aside from working with the tools and steps above, don't forget that sometimes an extra pair of eyes goes a long way to assist."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:10(title)
msgid "Tales From the Cryp^H^H^H^H Cloud"
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:12(para)
msgid "Herein lies a selection of tales from OpenStack cloud operators. Read, and learn from their wisdom."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:16(title)
msgid "Double VLAN"
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:17(para)
msgid "I was on-site in Kelowna, British Columbia, Canada setting up a new OpenStack cloud. The deployment was fully automated: Cobbler deployed the OS on the bare metal, bootstrapped it, and Puppet took over from there. I had run the deployment scenario so many times in practice and took for granted that everything was working."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:23(para)
msgid "On my last day in Kelowna, I was in a conference call from my hotel. In the background, I was fooling around on the new cloud. I launched an instance and logged in. Everything looked fine. Out of boredom, I ran <placeholder-1/> and all of the sudden the instance locked up."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:29(para)
msgid "Thinking it was just a one-off issue, I terminated the instance and launched a new one. By then, the conference call ended and I was off to the data center."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:32(para)
msgid "At the data center, I was finishing up some tasks and remembered the lock-up. I logged into the new instance and ran <placeholder-1/> again. It worked. Phew. I decided to run it one more time. It locked up. WTF."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:36(para)
msgid "After reproducing the problem several times, I came to the unfortunate conclusion that this cloud did indeed have a problem. Even worse, my time was up in Kelowna and I had to return back to Calgary."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:40(para)
msgid "Where do you even begin troubleshooting something like this? An instance just randomly locks when a command is issued. Is it the image? Nopeit happens on all images. Is it the compute node? Nopeall nodes. Is the instance locked up? No! New SSH connections work just fine!"
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:45(para)
msgid "We reached out for help. A networking engineer suggested it was an MTU issue. Great! MTU! Something to go on! What's MTU and why would it cause a problem?"
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:48(para)
msgid "MTU is maximum transmission unit. It specifies the maximum number of bytes that the interface accepts for each packet. If two interfaces have two different MTUs, bytes might get chopped off and weird things happensuch as random session lockups."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:54(para)
msgid "Not all packets have a size of 1500. Running the <placeholder-1/> command over SSH might only create a single packets less than 1500 bytes. However, running a command with heavy output, such as <placeholder-2/> requires several packets of 1500 bytes."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:60(para)
msgid "OK, so where is the MTU issue coming from? Why haven't we seen this in any other deployment? What's new in this situation? Well, new data center, new uplink, new switches, new model of switches, new servers, first time using this model of servers… so, basically everything was new. Wonderful. We toyed around with raising the MTU at various areas: the switches, the NICs on the compute nodes, the virtual NICs in the instances, we even had the data center raise the MTU for our uplink interface. Some changes worked, some didn't. This line of troubleshooting didn't feel right, though. We shouldn't have to be changing the MTU in these areas."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:72(para)
msgid "As a last resort, our network admin (Alvaro) and myself sat down with four terminal windows, a pencil, and a piece of paper. In one window, we ran ping. In the second window, we ran <placeholder-1/> on the cloud controller. In the third, <placeholder-2/> on the compute node. And the forth had <placeholder-3/> on the instance. For background, this cloud was a multi-node, non-multi-host setup."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:80(para)
msgid "One cloud controller acted as a gateway to all compute nodes. VlanManager was used for the network config. This means that the cloud controller and all compute nodes had a different VLAN for each OpenStack project. We used the -s option of <placeholder-1/> to change the packet size. We watched as sometimes packets would fully return, sometimes they'd only make it out and never back in, and sometimes the packets would stop at a random point. We changed <placeholder-2/> to start displaying the hex dump of the packet. We pinged between every combination of outside, controller, compute, and instance."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:92(para)
msgid "Finally, Alvaro noticed something. When a packet from the outside hits the cloud controller, it should not be configured with a VLAN. We verified this as true. When the packet went from the cloud controller to the compute node, it should only have a VLAN if it was destined for an instance. This was still true. When the ping reply was sent from the instance, it should be in a VLAN. True. When it came back to the cloud controller and on its way out to the public internet, it should no longer have a VLAN. False. Uh oh. It looked as though the VLAN part of the packet was not being removed."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:103(para)
msgid "That made no sense."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:104(para)
msgid "While bouncing this idea around in our heads, I was randomly typing commands on the compute node: <placeholder-1/>"
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:111(para)
msgid "\"Hey Alvaro, can you run a VLAN on top of a VLAN?\""
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:113(para)
msgid "\"If you did, you'd add an extra 4 bytes to the packet\""
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:115(para)
msgid "Then it all made sense… <placeholder-1/>"
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:119(para)
msgid "In <filename>nova.conf</filename>, <code>vlan_interface</code> specifies what interface OpenStack should attach all VLANs to. The correct setting should have been: <placeholder-1/>."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:123(para)
msgid "As this would be the server's bonded NIC."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:124(para)
msgid "vlan20 is the VLAN that the data center gave us for outgoing public internet access. It's a correct VLAN and is also attached to bond0."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:127(para)
msgid "By mistake, I configured OpenStack to attach all tenant VLANs to vlan20 instead of bond0 thereby stacking one VLAN on top of another which then added an extra 4 bytes to each packet which cause a packet of 1504 bytes to be sent out which would cause problems when it arrived at an interface that only accepted 1500!"
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:133(para)
msgid "As soon as this setting was fixed, everything worked."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:137(title)
msgid "\"The Issue\""
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:138(para)
msgid "At the end of August 2012, a post-secondary school in Alberta, Canada migrated its infrastructure to an OpenStack cloud. As luck would have it, within the first day or two of it running, one of their servers just disappeared from the network. Blip. Gone."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:143(para)
msgid "After restarting the instance, everything was back up and running. We reviewed the logs and saw that at some point, network communication stopped and then everything went idle. We chalked this up to a random occurrence."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:148(para)
msgid "A few nights later, it happened again."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:149(para)
msgid "We reviewed both sets of logs. The one thing that stood out the most was DHCP. At the time, OpenStack, by default, set DHCP leases for one minute (it's now two minutes). This means that every instance contacts the cloud controller (DHCP server) to renew its fixed IP. For some reason, this instance could not renew its IP. We correlated the instance's logs with the logs on the cloud controller and put together a conversation:"
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:160(para)
msgid "Instance tries to renew IP."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:163(para)
msgid "Cloud controller receives the renewal request and sends a response."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:167(para)
msgid "Instance \"ignores\" the response and re-sends the renewal request."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:171(para)
msgid "Cloud controller receives the second request and sends a new response."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:175(para)
msgid "Instance begins sending a renewal request to <code>255.255.255.255</code> since it hasn't heard back from the cloud controller."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:180(para)
msgid "The cloud controller receives the <code>255.255.255.255</code> request and sends a third response."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:185(para)
msgid "The instance finally gives up."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:188(para)
msgid "With this information in hand, we were sure that the problem had to do with DHCP. We thought that for some reason, the instance wasn't getting a new IP address and with no IP, it shut itself off from the network."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:192(para)
msgid "A quick Google search turned up this: <link href=\"https://lists.launchpad.net/openstack/msg11696.html\">DHCP lease errors in VLAN mode</link> (https://lists.launchpad.net/openstack/msg11696.html) which further supported our DHCP theory."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:197(para)
msgid "An initial idea was to just increase the lease time. If the instance only renewed once every week, the chances of this problem happening would be tremendously smaller than every minute. This didn't solve the problem, though. It was just covering the problem up."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:202(para)
msgid "We decided to have <placeholder-1/> run on this instance and see if we could catch it in action again. Sure enough, we did."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:205(para)
msgid "The <placeholder-1/> looked very, very weird. In short, it looked as though network communication stopped before the instance tried to renew its IP. Since there is so much DHCP chatter from a one minute lease, it's very hard to confirm it, but even with only milliseconds difference between packets, if one packet arrives first, it arrived first, and if that packet reported network issues, then it had to have happened before DHCP."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:213(para)
msgid "Additionally, this instance in question was responsible for a very, very large backup job each night. While \"The Issue\" (as we were now calling it) didn't happen exactly when the backup happened, it was close enough (a few hours) that we couldn't ignore it."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:218(para)
msgid "Further days go by and we catch The Issue in action more and more. We find that dhclient is not running after The Issue happens. Now we're back to thinking it's a DHCP issue. Running <filename>/etc/init.d/networking</filename> restart brings everything back up and running."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:223(para)
msgid "Ever have one of those days where all of the sudden you get the Google results you were looking for? Well, that's what happened here. I was looking for information on dhclient and why it dies when it can't renew its lease and all of the sudden I found a bunch of OpenStack and dnsmasq discussions that were identical to the problem we were seeing!"
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:230(para)
msgid "<link href=\"http://www.gossamer-threads.com/lists/openstack/operators/18197\">Problem with Heavy Network IO and Dnsmasq</link> (http://www.gossamer-threads.com/lists/openstack/operators/18197)"
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:236(para)
msgid "<link href=\"http://www.gossamer-threads.com/lists/openstack/dev/14696\">instances losing IP address while running, due to No DHCPOFFER</link> (http://www.gossamer-threads.com/lists/openstack/dev/14696)"
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:242(para)
msgid "Seriously, Google."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:243(para)
msgid "This bug report was the key to everything: <link href=\"https://bugs.launchpad.net/ubuntu/+source/qemu-kvm/+bug/997978\"> KVM images lose connectivity with bridged network</link> (https://bugs.launchpad.net/ubuntu/+source/qemu-kvm/+bug/997978)"
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:249(para)
msgid "It was funny to read the report. It was full of people who had some strange network problem but didn't quite explain it in the same way."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:252(para)
msgid "So it was a qemu/kvm bug."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:253(para)
msgid "At the same time of finding the bug report, a co-worker was able to successfully reproduce The Issue! How? He used <placeholder-1/> to spew a ton of bandwidth at an instance. Within 30 minutes, the instance just disappeared from the network."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:258(para)
msgid "Armed with a patched qemu and a way to reproduce, we set out to see if we've finally solved The Issue. After 48 hours straight of hammering the instance with bandwidth, we were confident. The rest is history. You can search the bug report for \"joe\" to find my comments and actual tests."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:266(title)
msgid "Disappearing Images"
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:267(para)
msgid "At the end of 2012, Cybera (a nonprofit with a mandate to oversee the development of cyberinfrastructure in Alberta, Canada) deployed an updated OpenStack cloud for their <link title=\"DAIR project\" href=\"http://www.canarie.ca/en/dair-program/about\">DAIR project</link> (http://www.canarie.ca/en/dair-program/about). A few days into production, a compute node locks up. Upon rebooting the node, I checked to see what instances were hosted on that node so I could boot them on behalf of the customer. Luckily, only one instance."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:278(para)
msgid "The <placeholder-1/> command wasn't working, so I used <placeholder-2/>, but it immediately came back with an error saying it was unable to find the backing disk. In this case, the backing disk is the Glance image that is copied to <filename>/var/lib/nova/instances/_base</filename> when the image is used for the first time. Why couldn't it find it? I checked the directory and sure enough it was gone."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:287(para)
msgid "I reviewed the <code>nova</code> database and saw the instance's entry in the <code>nova.instances</code> table. The image that the instance was using matched what virsh was reporting, so no inconsistency there."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:291(para)
msgid "I checked Glance and noticed that this image was a snapshot that the user created. At least that was good newsthis user would have been the only user affected."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:295(para)
msgid "Finally, I checked StackTach and reviewed the user's events. They had created and deleted several snapshotsmost likely experimenting. Although the timestamps didn't match up, my conclusion was that they launched their instance and then deleted the snapshot and it was somehow removed from /var/lib/nova/instances/_base. None of that made sense, but it was the best I could come up with."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:302(para)
msgid "It turns out the reason that this compute node locked up was a hardware issue. We removed it from the DAIR cloud and called Dell to have it serviced. Dell arrived and began working. Somehow or another (or a fat finger), a different compute node was bumped and rebooted. Great."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:308(para)
msgid "When this node fully booted, I ran through the same scenario of seeing what instances were running so I could turn them back on. There were a total of four. Three booted and one gave an error. It was the same error as before: unable to find the backing disk. Seriously, what?"
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:314(para)
msgid "Again, it turns out that the image was a snapshot. The three other instances that successfully started were standard cloud images. Was it a problem with snapshots? That didn't make sense."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:318(para)
msgid "A note about DAIR's architecture: <filename>/var/lib/nova/instances</filename> is a shared NFS mount. This means that all compute nodes have access to it, which includes the <code>_base</code> directory. Another centralized area is <filename>/var/log/rsyslog</filename> on the cloud controller. This directory collects all OpenStack logs from all compute nodes. I wondered if there were any entries for the file that <placeholder-1/> is reporting: <placeholder-2/>"
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:333(para)
msgid "Ah-hah! So OpenStack was deleting it. But why?"
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:334(para)
msgid "A feature was introduced in Essex to periodically check and see if there were any <code>_base</code> files not in use. If there were, Nova would delete them. This idea sounds innocent enough and has some good qualities to it. But how did this feature end up turned on? It was disabled by default in Essex. As it should be. It was <link href=\"https://bugs.launchpad.net/nova/+bug/1029674\">decided to be turned on in Folsom</link> (https://bugs.launchpad.net/nova/+bug/1029674). I cannot emphasize enough that:"
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:346(emphasis)
msgid "Actions which delete things should not be enabled by default."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:349(para)
msgid "Disk space is cheap these days. Data recovery is not."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:351(para)
msgid "Secondly, DAIR's shared <filename>/var/lib/nova/instances</filename> directory contributed to the problem. Since all compute nodes have access to this directory, all compute nodes periodically review the _base directory. If there is only one instance using an image, and the node that the instance is on is down for a few minutes, it won't be able to mark the image as still in use. Therefore, the image seems like it's not in use and is deleted. When the compute node comes back online, the instance hosted on that node is unable to start."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:364(title)
msgid "The Valentine's Day Compute Node Massacre"
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:365(para)
msgid "Although the title of this story is much more dramatic than the actual event, I don't think, or hope, that I'll have the opportunity to use \"Valentine's Day Massacre\" again in a title."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:369(para)
msgid "This past Valentine's Day, I received an alert that a compute node was no longer available in the cloudmeaning, showed this particular node with a status of XXX."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:374(para)
msgid "I logged into the cloud controller and was able to both <placeholder-1/> and SSH into the problematic compute node which seemed very odd. Usually if I receive this type of alert, the compute node has totally locked up and would be inaccessible."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:379(para)
msgid "After a few minutes of troubleshooting, I saw the following details:"
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:383(para)
msgid "A user recently tried launching a CentOS instance on that node"
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:387(para)
msgid "This user was the only user on the node (new node)"
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:391(para)
msgid "The load shot up to 8 right before I received the alert"
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:395(para)
msgid "The bonded 10gb network device (bond0) was in a DOWN state"
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:399(para)
msgid "The 1gb NIC was still alive and active"
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:402(para)
msgid "I looked at the status of both NICs in the bonded pair and saw that neither was able to communicate with the switch port. Seeing as how each NIC in the bond is connected to a separate switch, I thought that the chance of a switch port dying on each switch at the same time was quite improbable. I concluded that the 10gb dual port NIC had died and needed replaced. I created a ticket for the hardware support department at the data center where the node was hosted. I felt lucky that this was a new node and no one else was hosted on it yet."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:412(para)
msgid "An hour later I received the same alert, but for another compute node. Crap. OK, now there's definitely a problem going on. Just like the original node, I was able to log in by SSH. The bond0 NIC was DOWN but the 1gb NIC was active."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:417(para)
msgid "And the best part: the same user had just tried creating a CentOS instance. What?"
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:419(para)
msgid "I was totally confused at this point, so I texted our network admin to see if he was available to help. He logged in to both switches and immediately saw the problem: the switches detected spanning tree packets coming from the two compute nodes and immediately shut the ports down to prevent spanning tree loops: <placeholder-1/>"
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:432(para)
msgid "He re-enabled the switch ports and the two compute nodes immediately came back to life."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:434(para)
msgid "Unfortunately, this story has an open ending... we're still looking into why the CentOS image was sending out spanning tree packets. Further, we're researching a proper way on how to mitigate this from happening. It's a bigger issue than one might think. While it's extremely important for switches to prevent spanning tree loops, it's very problematic to have an entire compute node be cut from the network when this happens. If a compute node is hosting 100 instances and one of them sends a spanning tree packet, that instance has effectively DDOS'd the other 99 instances."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:445(para)
msgid "This is an ongoing and hot topic in networking circles especially with the raise of virtualization and virtual switches."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:450(title)
msgid "Down the Rabbit Hole"
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:451(para)
msgid "Users being able to retrieve console logs from running instances is a boon for supportmany times they can figure out what's going on inside their instance and fix what's going on without bothering you. Unfortunately, sometimes overzealous logging of failures can cause problems of its own."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:457(para)
msgid "A report came in: VMs were launching slowly, or not at all. Cue the standard checksnothing on the nagios, but there was a spike in network towards the current master of our RabbitMQ cluster. Investigation started, but soon the other parts of the queue cluster were leaking memory like a sieve. Then the alert came inthe master rabbit server went down. Connections failed over to the slave."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:464(para)
msgid "At that time, our control services were hosted by another team and we didn't have much debugging information to determine what was going on with the master, and couldn't reboot it. That team noted that it failed without alert, but managed to reboot it. After an hour, the cluster had returned to its normal state and we went home for the day."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:471(para)
msgid "Continuing the diagnosis the next morning was kick started by another identical failure. We quickly got the message queue running again, and tried to work out why Rabbit was suffering from so much network traffic. Enabling debug logging on <systemitem class=\"service\">nova-api</systemitem> quickly brought understanding. A <placeholder-1/> was scrolling by faster than we'd ever seen before. CTRL+C on that and we could plainly see the contents of a system log spewing failures over and over again - a system log from one of our users' instances."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:483(para)
msgid "After finding the instance ID we headed over to <filename>/var/lib/nova/instances</filename> to find the <filename>console.log</filename>: <placeholder-1/>"
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:491(para)
msgid "Sure enough, the user had been periodically refreshing the console log page on the dashboard and the 5G file was traversing the rabbit cluster to get to the dashboard."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:495(para)
msgid "We called them and asked them to stop for a while, and they were happy to abandon the horribly broken VM. After that, we started monitoring the size of console logs."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:499(para)
msgid "To this day, <link href=\"https://bugs.launchpad.net/nova/+bug/832507\">the issue</link> (https://bugs.launchpad.net/nova/+bug/832507) doesn't have a permanent resolution, but we look forward to the discussion at the next summit."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:507(title)
msgid "Havana Haunted by the Dead"
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:508(para)
msgid "Felix Lee of Academia Sinica Grid Computing Centre in Taiwan contributed this story."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:510(para)
msgid "I just upgraded OpenStack from Grizzly to Havana 2013.2-2 using the RDO repository and everything was running pretty wellexcept the EC2 API."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:513(para)
msgid "I noticed that the API would suffer from a heavy load and respond slowly to particular EC2 requests such as <literal>RunInstances</literal>."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:516(para)
msgid "Output from <filename>/var/log/nova/nova-api.log</filename> on Havana:"
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:525(para)
msgid "This request took over two minutes to process, but executed quickly on another co-existing Grizzly deployment using the same hardware and system configuration."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:528(para)
msgid "Output from <filename>/var/log/nova/nova-api.log</filename> on Grizzly:"
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:537(para)
msgid "While monitoring system resources, I noticed a significant increase in memory consumption while the EC2 API processed this request. I thought it wasn't handling memory properlypossibly not releasing memory. If the API received several of these requests, memory consumption quickly grew until the system ran out of RAM and began using swap. Each node has 48 GB of RAM and the \"nova-api\" process would consume all of it within minutes. Once this happened, the entire system would become unusably slow until I restarted the nova-api service."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:547(para)
msgid "So, I found myself wondering what changed in the EC2 API on Havana that might cause this to happen. Was it a bug or a normal behavior that I now need to work around?"
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:550(para)
msgid "After digging into the Nova code, I noticed two areas in <filename>api/ec2/cloud.py</filename> potentially impacting my system:"
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:562(para)
msgid "Since my database contained many recordsover 1 million metadata records and over 300,000 instance records in \"deleted\" or \"errored\" stateseach search took ages. I decided to clean up the database by first archiving a copy for backup and then performing some deletions using the MySQL client. For example, I ran the following SQL command to remove rows of instances deleted for over a year:"
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:570(para)
msgid "Performance increased greatly after deleting the old records and my new deployment continues to behave well."
msgstr ""
#. When image changes, this message will be marked fuzzy or untranslated for you.
#. It doesn't matter what you translate it to: it's not used at all.
#: ./doc/openstack-ops/section_arch_example-nova.xml:411(None)
msgid "@@image: 'http://git.openstack.org/cgit/openstack/operations-guide/plain/doc/openstack-ops/figures/osog_01in01.png'; md5=THIS FILE DOESN'T EXIST"
msgstr ""
#. When image changes, this message will be marked fuzzy or untranslated for you.
#. It doesn't matter what you translate it to: it's not used at all.
#: ./doc/openstack-ops/section_arch_example-nova.xml:450(None)
msgid "@@image: 'http://git.openstack.org/cgit/openstack/operations-guide/plain/doc/openstack-ops/figures/osog_01in02.png'; md5=THIS FILE DOESN'T EXIST"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:12(title)
msgid "Example Architecture—Legacy Networking (nova)"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:14(para)
msgid "This particular example architecture has been upgraded from Grizzly to Havana and tested in production environments where many public IP addresses are available for assignment to multiple instances. You can find a second example architecture that uses OpenStack Networking (neutron) after this section. Each example offers high availability, meaning that if a particular node goes down, another node with the same configuration can take over the tasks so that service continues to be available.<indexterm class=\"singular\"><primary>Havana</primary></indexterm><indexterm class=\"singular\"><primary>Grizzly</primary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:27(title) ./doc/openstack-ops/section_arch_example-neutron.xml:23(title)
msgid "Overview"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:29(para)
msgid "The simplest architecture you can build upon for Compute has a single cloud controller and multiple compute nodes. The simplest architecture for Object Storage has five nodes: one for identifying users and proxying requests to the API, then four for storage itself to provide enough replication for eventual consistency. This example architecture does not dictate a particular number of nodes, but shows the thinking <phrase role=\"keep-together\">and considerations</phrase> that went into choosing this architecture including the features <phrase role=\"keep-together\">offered</phrase>.<indexterm class=\"singular\"><primary>CentOS</primary></indexterm><indexterm class=\"singular\"><primary>RDO (Red Hat Distributed OpenStack)</primary></indexterm><indexterm class=\"singular\"><primary>Ubuntu</primary></indexterm><indexterm class=\"singular\"><primary>legacy networking (nova)</primary><secondary>component overview</secondary></indexterm><indexterm class=\"singular\"><primary>example architectures</primary><see>legacy networking; OpenStack networking</see></indexterm><indexterm class=\"singular\"><primary>Object Storage</primary><secondary>simplest architecture for</secondary></indexterm><indexterm class=\"singular\"><primary>Compute</primary><secondary>simplest architecture for</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:62(title) ./doc/openstack-ops/section_arch_example-neutron.xml:38(title)
msgid "Components"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:71(th) ./doc/openstack-ops/section_arch_example-neutron.xml:47(th)
msgid "Component"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:73(th) ./doc/openstack-ops/section_arch_example-neutron.xml:49(th)
msgid "Details"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:79(para) ./doc/openstack-ops/section_arch_example-neutron.xml:55(para)
msgid "OpenStack release"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:85(para) ./doc/openstack-ops/section_arch_example-neutron.xml:61(para)
msgid "Host operating system"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:87(para)
msgid "Ubuntu 12.04 LTS or Red Hat Enterprise Linux 6.5, including derivatives such as CentOS and Scientific Linux"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:93(para) ./doc/openstack-ops/section_arch_example-neutron.xml:67(para)
msgid "OpenStack package repository"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:95(para)
msgid "<link href=\"https://wiki.ubuntu.com/ServerTeam/CloudArchive\">Ubuntu Cloud Archive</link> or <link href=\"http://openstack.redhat.com/Frequently_Asked_Questions\">RDO</link>*"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:101(para) ./doc/openstack-ops/section_arch_example-neutron.xml:74(para)
msgid "Hypervisor"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:103(para) ./doc/openstack-ops/section_arch_example-neutron.xml:76(para) ./doc/openstack-ops/section_arch_example-neutron.xml:166(term) ./doc/openstack-ops/ch_arch_compute_nodes.xml:124(link)
msgid "KVM"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:107(para) ./doc/openstack-ops/section_arch_example-neutron.xml:80(para) ./doc/openstack-ops/ch_arch_cloud_controller.xml:368(title)
msgid "Database"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:109(para)
msgid "MySQL*"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:113(para) ./doc/openstack-ops/section_arch_example-neutron.xml:86(para)
msgid "Message queue"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:115(para)
msgid "RabbitMQ for Ubuntu; Qpid for Red Hat Enterprise Linux and derivatives"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:120(para) ./doc/openstack-ops/section_arch_example-neutron.xml:92(para) ./doc/openstack-ops/ch_ops_upgrades.xml:2375(title) ./doc/openstack-ops/ch_ops_upgrades.xml:2545(title) ./doc/openstack-ops/ch_ops_upgrades.xml:2713(title)
msgid "Networking service"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:122(literal)
msgid "nova-network"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:126(para)
msgid "Network manager"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:128(para) ./doc/openstack-ops/ch_arch_network_design.xml:332(para)
msgid "FlatDHCP"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:132(para)
msgid "Single <literal>nova-network</literal> or multi-host?"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:135(para)
msgid "multi-host*"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:139(para)
msgid "Image service (glance) back end"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:141(para)
msgid "file"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:145(para) ./doc/openstack-ops/section_arch_example-neutron.xml:110(para)
msgid "Identity Service (keystone) driver"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:147(para) ./doc/openstack-ops/section_arch_example-neutron.xml:112(para)
msgid "SQL"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:151(para)
msgid "Block Storage Service (cinder) back end"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:153(para)
msgid "LVM/iSCSI"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:157(para)
msgid "Live Migration back end"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:159(para)
msgid "Shared storage using NFS*"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:163(para) ./doc/openstack-ops/ch_arch_storage.xml:238(th)
msgid "Object storage"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:165(para) ./doc/openstack-ops/ch_arch_storage.xml:282(para) ./doc/openstack-ops/ch_arch_storage.xml:622(term)
msgid "OpenStack Object Storage (swift)"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:170(para)
msgid "An asterisk (*) indicates when the example architecture deviates from the settings of a default installation. We'll offer explanations for those deviations next.<indexterm class=\"singular\"><primary>objects</primary><secondary>object storage</secondary></indexterm><indexterm class=\"singular\"><primary>storage</primary><secondary>object storage</secondary></indexterm><indexterm class=\"singular\"><primary>migration</primary></indexterm><indexterm class=\"singular\"><primary>live migration</primary></indexterm><indexterm class=\"singular\"><primary>IP addresses</primary><secondary>floating</secondary></indexterm><indexterm class=\"singular\"><primary>floating IP address</primary></indexterm><indexterm class=\"singular\"><primary>storage</primary><secondary>block storage</secondary></indexterm><indexterm class=\"singular\"><primary>block storage</primary></indexterm><indexterm class=\"singular\"><primary>dashboard</primary></indexterm><indexterm class=\"singular\"><primary>legacy networking (nova)</primary><secondary>features supported by</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:209(para)
msgid "<glossterm>Dashboard</glossterm>: You probably want to offer a dashboard, but your users may be more interested in API access only."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:215(para)
msgid "<glossterm>Block storage</glossterm>: You don't have to offer users block storage if their use case only needs ephemeral storage on compute nodes, for example."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:221(para)
msgid "<glossterm>Floating IP address</glossterm>: Floating IP addresses are public IP addresses that you allocate from a predefined pool to assign to virtual machines at launch. Floating IP address ensure that the public IP address is available whenever an instance is booted. Not every organization can offer thousands of public floating IP addresses for thousands of instances, so this feature is considered optional."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:232(para)
msgid "<glossterm>Live migration</glossterm>: If you need to move running virtual machine instances from one host to another with little or no service interruption, you would enable live migration, but it is considered optional."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:239(para)
msgid "<glossterm>Object storage</glossterm>: You may choose to store machine images on a file system rather than in object storage if you do not have the extra hardware for the required replication and redundancy that OpenStack Object Storage offers."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:205(para)
msgid "The following features of OpenStack are supported by the example architecture documented in this guide, but are optional:<placeholder-1/>"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:250(title) ./doc/openstack-ops/section_arch_example-neutron.xml:125(title)
msgid "Rationale"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:252(para)
msgid "This example architecture has been selected based on the current default feature set of OpenStack <glossterm>Havana</glossterm>, with an emphasis on stability. We believe that many clouds that currently run OpenStack in production have made similar choices.<indexterm class=\"singular\"><primary>legacy networking (nova)</primary><secondary>rationale for choice of</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:262(para)
msgid "You must first choose the operating system that runs on all of the physical nodes. While OpenStack is supported on several distributions of Linux, we used <emphasis>Ubuntu 12.04 LTS (Long Term Support)</emphasis>, which is used by the majority of the development community, has feature completeness compared with other distributions and has clear future support plans."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:269(para)
msgid "We recommend that you do not use the default Ubuntu OpenStack install packages and instead use the <link href=\"https://wiki.ubuntu.com/ServerTeam/CloudArchive\">Ubuntu Cloud Archive</link>. The Cloud Archive is a package repository supported by Canonical that allows you to upgrade to future OpenStack releases while remaining on Ubuntu 12.04."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:276(para)
msgid "<emphasis>KVM</emphasis> as a <glossterm>hypervisor</glossterm> complements the choice of Ubuntu—being a matched pair in terms of support, and also because of the significant degree of attention it garners from the OpenStack development community (including the authors, who mostly use KVM). It is also feature complete, free from licensing charges and restrictions.<indexterm class=\"singular\"><primary>kernel-based VM (KVM) hypervisor</primary></indexterm><indexterm class=\"singular\"><primary>hypervisors</primary><secondary>KVM</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:289(para)
msgid "<emphasis>MySQL</emphasis> follows a similar trend. Despite its recent change of ownership, this database is the most tested for use with OpenStack and is heavily documented. We deviate from the default database, <emphasis>SQLite</emphasis>, because SQLite is not an appropriate database for production usage."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:295(para)
msgid "The choice of <emphasis>RabbitMQ</emphasis> over other AMQP compatible options that are gaining support in OpenStack, such as ZeroMQ and Qpid, is due to its ease of use and significant testing in production. It also is the only option that supports features such as Compute cells. We recommend clustering with RabbitMQ, as it is an integral component of the system and fairly simple to implement due to its inbuilt nature.<indexterm class=\"singular\"><primary>Advanced Message Queuing Protocol (AMQP)</primary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:305(para)
msgid "As discussed in previous chapters, there are several options for networking in OpenStack Compute. We recommend <emphasis>FlatDHCP</emphasis> and to use <emphasis>Multi-Host</emphasis> networking mode for high availability, running one <code>nova-network</code> daemon per OpenStack compute host. This provides a robust mechanism for ensuring network interruptions are isolated to individual compute hosts, and allows for the direct use of hardware network gateways."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:314(para)
msgid "<emphasis>Live Migration</emphasis> is supported by way of shared storage, with <emphasis>NFS</emphasis> as the distributed file system."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:318(para)
msgid "Acknowledging that many small-scale deployments see running Object Storage just for the storage of virtual machine images as too costly, we opted for the file back end in the OpenStack Image service (Glance). If your cloud will include Object Storage, you can easily add it as a back end."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:324(para)
msgid "We chose the <emphasis>SQL back end for the Identity Service (keystone)</emphasis> over others, such as LDAP. This back end is simple to install and is robust. The authors acknowledge that many installations want to bind with existing directory services and caution careful understanding of the <link href=\"http://docs.openstack.org/havana/config-reference/content/ch_configuring-openstack-identity.html#configuring-keystone-for-ldap-backend\" title=\"LDAP config options\">array of options available</link>."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:332(para)
msgid "Block Storage (cinder) is installed natively on external storage nodes and uses the <emphasis>LVM/iSCSI plug-in</emphasis>. Most Block Storage Service plug-ins are tied to particular vendor products and implementations limiting their use to consumers of those hardware platforms, but LVM/iSCSI is robust and stable on commodity hardware."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:341(para)
msgid "While the cloud can be run without the <emphasis>OpenStack Dashboard</emphasis>, we consider it to be indispensable, not just for user interaction with the cloud, but also as a tool for operators. Additionally, the dashboard's use of Django makes it a flexible framework for <phrase role=\"keep-together\">extension</phrase>."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:349(title)
msgid "Why not use the OpenStack Network Service (neutron)?"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:351(para)
msgid "This example architecture does not use the OpenStack Network Service (neutron), because it does not yet support multi-host networking and our organizations (university, government) have access to a large range of publicly-accessible IPv4 addresses.<indexterm class=\"singular\"><primary>legacy networking (nova)</primary><secondary>vs. OpenStack Network Service (neutron)</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:362(title)
msgid "Why use multi-host networking?"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:364(para)
msgid "In a default OpenStack deployment, there is a single <code>nova-network</code> service that runs within the cloud (usually on the cloud controller) that provides services such as network address translation (NAT), DHCP, and DNS to the guest instances. If the single node that runs the <code>nova-network</code> service goes down, you cannot access your instances, and the instances cannot access the Internet. The single node that runs the <literal>nova-network</literal> service can become a bottleneck if excessive network traffic comes in and goes out of the cloud.<indexterm class=\"singular\"><primary>networks</primary><secondary>multi-host</secondary></indexterm><indexterm class=\"singular\"><primary>multi-host networking</primary></indexterm><indexterm class=\"singular\"><primary>legacy networking (nova)</primary><secondary>benefits of multi-host networking</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:385(para)
msgid "<link href=\"http://docs.openstack.org/havana/install-guide/install/apt/content/nova-network.html\">Multi-host</link> is a high-availability option for the network configuration, where the <literal>nova-network</literal> service is run on every compute node instead of running on only a single node."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:394(title) ./doc/openstack-ops/section_arch_example-neutron.xml:232(title)
msgid "Detailed Description"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:396(para)
msgid "The reference architecture consists of multiple compute nodes, a cloud controller, an external NFS storage server for instance storage, and an OpenStack Block Storage server for <glossterm>volume</glossterm> storage.<indexterm class=\"singular\"><primary>legacy networking (nova)</primary><secondary>detailed description</secondary></indexterm> A network time service (Network Time Protocol, or NTP) synchronizes time on all the nodes. FlatDHCPManager in multi-host mode is used for the networking. A logical diagram for this example architecture shows which services are running on each node:"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:416(para)
msgid "The cloud controller runs the dashboard, the API services, the database (MySQL), a message queue server (RabbitMQ), the scheduler for choosing compute resources (<literal>nova-scheduler</literal>), Identity services (keystone, <code>nova-consoleauth</code>), Image services (<code>glance-api</code>, <code>glance-registry</code>), services for console access of guests, and Block Storage services, including the scheduler for storage resources (<code>cinder-api</code> and <code>cinder-scheduler</code>).<indexterm class=\"singular\"><primary>cloud controllers</primary><secondary>duties of</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:429(para)
msgid "Compute nodes are where the computing resources are held, and in our example architecture, they run the hypervisor (KVM), libvirt (the driver for the hypervisor, which enables live migration from node to node), <code>nova-compute</code>, <code>nova-api-metadata</code> (generally only used when running in multi-host mode, it retrieves instance-specific metadata), <code>nova-vncproxy</code>, and <code>nova-network</code>."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:437(para)
msgid "The network consists of two switches, one for the management or private traffic, and one that covers public access, including floating IPs. To support this, the cloud controller and the compute nodes have two network cards. The OpenStack Block Storage and NFS storage servers only need to access the private network and therefore only need one network card, but multiple cards run in a bonded configuration are recommended if possible. Floating IP access is direct to the Internet, whereas Flat IP access goes through a NAT. To envision the network traffic, use this diagram:"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:457(title)
msgid "Optional Extensions"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:459(para)
msgid "You can extend this reference architecture as<indexterm class=\"singular\"><primary>legacy networking (nova)</primary><secondary>optional extensions</secondary></indexterm> follows:"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:468(para)
msgid "Add additional cloud controllers (see <xref linkend=\"maintenance\"/>)."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:473(para)
msgid "Add an OpenStack Storage service (see the Object Storage chapter in the <emphasis>OpenStack Installation Guide</emphasis> for your distribution)."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:479(para)
msgid "Add additional OpenStack Block Storage hosts (see <xref linkend=\"maintenance\"/>)."
msgstr ""
#. When image changes, this message will be marked fuzzy or untranslated for you.
#. It doesn't matter what you translate it to: it's not used at all.
#: ./doc/openstack-ops/section_arch_example-neutron.xml:490(None)
msgid "@@image: 'http://git.openstack.org/cgit/openstack/operations-guide/plain/doc/openstack-ops/figures/osog_0101.png'; md5=THIS FILE DOESN'T EXIST"
msgstr ""
#. When image changes, this message will be marked fuzzy or untranslated for you.
#. It doesn't matter what you translate it to: it's not used at all.
#: ./doc/openstack-ops/section_arch_example-neutron.xml:514(None)
msgid "@@image: 'http://git.openstack.org/cgit/openstack/operations-guide/plain/doc/openstack-ops/figures/osog_0102.png'; md5=THIS FILE DOESN'T EXIST"
msgstr ""
#. When image changes, this message will be marked fuzzy or untranslated for you.
#. It doesn't matter what you translate it to: it's not used at all.
#: ./doc/openstack-ops/section_arch_example-neutron.xml:536(None)
msgid "@@image: 'http://git.openstack.org/cgit/openstack/operations-guide/plain/doc/openstack-ops/figures/osog_0103.png'; md5=THIS FILE DOESN'T EXIST"
msgstr ""
#. When image changes, this message will be marked fuzzy or untranslated for you.
#. It doesn't matter what you translate it to: it's not used at all.
#: ./doc/openstack-ops/section_arch_example-neutron.xml:546(None)
msgid "@@image: 'http://git.openstack.org/cgit/openstack/operations-guide/plain/doc/openstack-ops/figures/osog_0104.png'; md5=THIS FILE DOESN'T EXIST"
msgstr ""
#. When image changes, this message will be marked fuzzy or untranslated for you.
#. It doesn't matter what you translate it to: it's not used at all.
#: ./doc/openstack-ops/section_arch_example-neutron.xml:556(None)
msgid "@@image: 'http://git.openstack.org/cgit/openstack/operations-guide/plain/doc/openstack-ops/figures/osog_0105.png'; md5=THIS FILE DOESN'T EXIST"
msgstr ""
#. When image changes, this message will be marked fuzzy or untranslated for you.
#. It doesn't matter what you translate it to: it's not used at all.
#: ./doc/openstack-ops/section_arch_example-neutron.xml:566(None)
msgid "@@image: 'http://git.openstack.org/cgit/openstack/operations-guide/plain/doc/openstack-ops/figures/osog_0106.png'; md5=THIS FILE DOESN'T EXIST"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:16(title)
msgid "Example Architecture—OpenStack Networking"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:18(para)
msgid "This chapter provides an example architecture using OpenStack Networking, also known as the Neutron project, in a highly available environment."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:25(para)
msgid "A highly-available environment can be put into place if you require an environment that can scale horizontally, or want your cloud to continue to be operational in case of node failure. This example architecture has been written based on the current default feature set of OpenStack Havana, with an emphasis on high availability.<indexterm class=\"singular\"><primary>RDO (Red Hat Distributed OpenStack)</primary></indexterm><indexterm class=\"singular\"><primary>OpenStack Networking (neutron)</primary><secondary>component overview</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:63(para)
msgid "Red Hat Enterprise Linux 6.5"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:69(link)
msgid "Red Hat Distributed OpenStack (RDO)"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:82(para) ./doc/openstack-ops/section_arch_example-neutron.xml:176(term)
msgid "MySQL"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:88(para) ./doc/openstack-ops/section_arch_example-neutron.xml:188(term)
msgid "Qpid"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:94(para) ./doc/openstack-ops/section_arch_example-neutron.xml:198(term)
msgid "OpenStack Networking"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:98(para)
msgid "Tenant Network Separation"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:100(para) ./doc/openstack-ops/section_arch_example-neutron.xml:208(term)
msgid "VLAN"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:104(para)
msgid "Image service (glance) backend"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:106(para) ./doc/openstack-ops/section_arch_example-neutron.xml:118(para) ./doc/openstack-ops/section_arch_example-neutron.xml:218(term) ./doc/openstack-ops/ch_arch_compute_nodes.xml:476(para)
msgid "GlusterFS"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:116(para)
msgid "Block Storage Service (cinder) backend"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:127(para)
msgid "This example architecture has been selected based on the current default feature set of OpenStack Havana, with an emphasis on high availability. This architecture is currently being deployed in an internal Red Hat OpenStack cloud and used to run hosted and shared services, which by their nature must be highly available.<indexterm class=\"singular\"><primary>OpenStack Networking (neutron)</primary><secondary>rationale for choice of</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:138(para)
msgid "This architecture's components have been selected for the following reasons:"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:143(term)
msgid "Red Hat Enterprise Linux"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:146(para)
msgid "You must choose an operating system that can run on all of the physical nodes. This example architecture is based on Red Hat Enterprise Linux, which offers reliability, long-term support, certified testing, and is hardened. Enterprise customers, now moving into OpenStack usage, typically require these advantages."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:156(term)
msgid "RDO"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:159(para)
msgid "The Red Hat Distributed OpenStack package offers an easy way to download the most current OpenStack release that is built for the Red Hat Enterprise Linux platform."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:169(para)
msgid "KVM is the supported hypervisor of choice for Red Hat Enterprise Linux (and included in distribution). It is feature complete and free from licensing charges and restrictions."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:179(para)
msgid "MySQL is used as the database backend for all databases in the OpenStack environment. MySQL is the supported database of choice for Red Hat Enterprise Linux (and included in distribution); the database is open source, scalable, and handles memory well."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:191(para)
msgid "Apache Qpid offers 100 percent compatibility with the Advanced Message Queuing Protocol Standard, and its broker is available for both C++ and Java."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:201(para)
msgid "OpenStack Networking offers sophisticated networking functionality, including Layer 2 (L2) network segregation and provider networks."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:211(para)
msgid "Using a virtual local area network offers broadcast control, security, and physical layer transparency. If needed, use VXLAN to extend your address space."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:221(para)
msgid "GlusterFS offers scalable storage. As your environment grows, you can continue to add more storage nodes (instead of being restricted, for example, by an expensive storage array)."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:235(title) ./doc/openstack-ops/section_arch_example-neutron.xml:248(caption)
msgid "Node types"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:237(para)
msgid "This section gives you a breakdown of the different nodes that make up the OpenStack environment. A node is a physical machine that is provisioned with an operating system, and running a defined software stack on top of it. <xref linkend=\"node-types-table\"/> provides node descriptions and specifications.<indexterm class=\"singular\"><primary>OpenStack Networking (neutron)</primary><secondary>detailed description of</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:258(th)
msgid "Type"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:260(th) ./doc/openstack-ops/ch_ops_user_facing.xml:273(emphasis) ./doc/openstack-ops/ch_ops_projects_users.xml:241(th) ./doc/openstack-ops/ch_ops_projects_users.xml:613(th)
msgid "Description"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:262(th)
msgid "Example hardware"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:268(td)
msgid "Controller"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:270(para)
msgid "Controller nodes are responsible for running the management software services needed for the OpenStack environment to function. These nodes:"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:276(para)
msgid "Provide the front door that people access as well as the API services that all other components in the environment talk to."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:282(para)
msgid "Run a number of services in a highly available fashion, utilizing Pacemaker and HAProxy to provide a virtual IP and load-balancing functions so all controller nodes are being used."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:289(para)
msgid "Supply highly available \"infrastructure\" services, such as MySQL and Qpid, that underpin all the services."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:295(para)
msgid "Provide what is known as \"persistent storage\" through services run on the host as well. This persistent storage is backed onto the storage nodes for reliability."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:302(para)
msgid "See <xref linkend=\"node_controller-diagram\"/>."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:303(para) ./doc/openstack-ops/section_arch_example-neutron.xml:327(para) ./doc/openstack-ops/section_arch_example-neutron.xml:369(para) ./doc/openstack-ops/section_arch_example-neutron.xml:384(para)
msgid "Model: Dell R620"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:303(para) ./doc/openstack-ops/section_arch_example-neutron.xml:341(para) ./doc/openstack-ops/section_arch_example-neutron.xml:384(para)
msgid "CPU: 2x Intel® Xeon® CPU E5-2620 0 @ 2.00 GHz"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:304(para) ./doc/openstack-ops/section_arch_example-neutron.xml:370(para) ./doc/openstack-ops/section_arch_example-neutron.xml:385(para)
msgid "Memory: 32 GB"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:304(para) ./doc/openstack-ops/section_arch_example-neutron.xml:370(para)
msgid "Disk: two 300 GB 10000 RPM SAS Disks"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:305(para) ./doc/openstack-ops/section_arch_example-neutron.xml:345(para) ./doc/openstack-ops/section_arch_example-neutron.xml:386(para)
msgid "Network: two 10G network ports"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:309(td) ./doc/openstack-ops/ch_ops_backup_recovery.xml:128(title)
msgid "Compute"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:310(para)
msgid "Compute nodes run the virtual machine instances in OpenStack. They:"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:315(para)
msgid "Run the bare minimum of services needed to facilitate these instances."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:319(para)
msgid "Use local storage on the node for the virtual machines so that no VM migration or instance recovery at node failure is possible."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:324(phrase)
msgid "See <xref linkend=\"node_compute-diagram\"/>."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:327(para)
msgid "CPU: 2x Intel® Xeon® CPU E5-2650 0 @ 2.00 GHz"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:328(para)
msgid "Memory: 128 GB"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:329(para)
msgid "Disk: two 600 GB 10000 RPM SAS Disks"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:329(para)
msgid "Network: four 10G network ports (For future proofing expansion)"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:333(td)
msgid "Storage"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:334(para)
msgid "Storage nodes store all the data required for the environment, including disk images in the Image service library, and the persistent storage volumes created by the Block Storage service. Storage nodes use GlusterFS technology to keep the data highly available and scalable."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:339(para)
msgid "See <xref linkend=\"node_storage-diagram\"/>."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:341(para)
msgid "Model: Dell R720xd"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:342(para)
msgid "Memory: 64 GB"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:343(para)
msgid "Disk: two 500 GB 7200 RPM SAS Disks and twenty-four 600 GB 10000 RPM SAS Disks"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:344(para)
msgid "Raid Controller: PERC H710P Integrated RAID Controller, 1 GB NV Cache"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:349(td)
msgid "Network"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:350(para)
msgid "Network nodes are responsible for doing all the virtual networking needed for people to create public or private networks and uplink their virtual machines into external networks. Network nodes:"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:357(para)
msgid "Form the only ingress and egress point for instances running on top of OpenStack."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:361(para)
msgid "Run all of the environment's networking services, with the exception of the networking API service (which runs on the controller node)."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:367(para)
msgid "See <xref linkend=\"node_network-diagram\"/>."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:369(para)
msgid "CPU: 1x Intel® Xeon® CPU E5-2620 0 @ 2.00 GHz"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:371(para)
msgid "Network: five 10G network ports"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:375(td)
msgid "Utility"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:376(para)
msgid "Utility nodes are used by internal administration staff only to provide a number of basic system administration functions needed to get the environment up and running and to maintain the hardware, OS, and software on which it runs."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:379(para)
msgid "These nodes run services such as provisioning, configuration management, monitoring, or GlusterFS management software. They are not required to scale, although these machines are usually backed up."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:385(para)
msgid "Disk: two 500 GB 7200 RPM SAS Disks"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:394(title)
msgid "Networking layout"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:396(para)
msgid "The network contains all the management devices for all hardware in the environment (for example, by including Dell iDrac7 devices for the hardware nodes, and management interfaces for network switches). The network is accessed by internal staff only when diagnosing or recovering a hardware issue."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:403(title)
msgid "OpenStack internal network"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:405(para)
msgid "This network is used for OpenStack management functions and traffic, including services needed for the provisioning of physical nodes (<literal>pxe</literal>, <literal>tftp</literal>, <literal>kickstart</literal>), traffic between various OpenStack node types using OpenStack APIs and messages (for example, <literal>nova-compute</literal> talking to <literal>keystone</literal> or <literal>cinder-volume</literal> talking to <literal>nova-api</literal>), and all traffic for storage data to the storage layer underneath by the Gluster protocol. All physical nodes have at least one network interface (typically <literal>eth0</literal>) in this network. This network is only accessible from other VLANs on port 22 (for <literal>ssh</literal> access to manage machines)."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:423(title)
msgid "Public Network"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:427(para)
msgid "IP addresses for public-facing interfaces on the controller nodes (which end users will access the OpenStack services)"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:433(para)
msgid "A range of publicly routable, IPv4 network addresses to be used by OpenStack Networking for floating IPs. You may be restricted in your access to IPv4 addresses; a large range of IPv4 addresses is not necessary."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:440(para)
msgid "Routers for private networks created within OpenStack."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:425(para)
msgid "This network is a combination of: <placeholder-1/>"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:445(para)
msgid "This network is connected to the controller nodes so users can access the OpenStack interfaces, and connected to the network nodes to provide VMs with publicly routable traffic functionality. The network is also connected to the utility machines so that any utility services that need to be made public (such as system monitoring) can be accessed."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:454(title)
msgid "VM traffic network"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:456(para)
msgid "This is a closed network that is not publicly routable and is simply used as a private, internal network for traffic between virtual machines in OpenStack, and between the virtual machines and the network nodes that provide l3 routes out to the public network (and floating IPs for connections back in to the VMs). Because this is a closed network, we are using a different address space to the others to clearly define the separation. Only Compute and OpenStack Networking nodes need to be connected to this network."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:468(title)
msgid "Node connectivity"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:470(para)
msgid "The following section details how the nodes are connected to the different networks (see <xref linkend=\"networking_layout\"/>) and what other considerations need to take place (for example, bonding) when connecting nodes to the networks."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:476(title)
msgid "Initial deployment"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:478(para)
msgid "Initially, the connection setup should revolve around keeping the connectivity simple and straightforward in order to minimize deployment complexity and time to deploy. The deployment shown in <xref linkend=\"fig1-1\"/> aims to have 1 10G connectivity available to all compute nodes, while still leveraging bonding on appropriate nodes for maximum performance."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:486(title)
msgid "Basic node deployment"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:497(title)
msgid "Connectivity for maximum performance"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:499(para)
msgid "If the networking performance of the basic layout is not enough, you can move to <xref linkend=\"fig1-2\"/>, which provides 2 10G network links to all instances in the environment as well as providing more network bandwidth to the storage layer. bandwidth obtaining maximum performance"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:510(title)
msgid "Performance node deployment"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:522(title)
msgid "Node diagrams"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:524(para)
msgid "The following diagrams (<xref linkend=\"node_controller-diagram\"/> through <xref linkend=\"node_storage-diagram\"/>) include logical information about the different types of nodes, indicating what services will be running on top of them and how they interact with each other. The diagrams also illustrate how the availability and scalability of services are achieved."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:532(title)
msgid "Controller node"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:542(title)
msgid "Compute node"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:552(title)
msgid "Network node"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:562(title)
msgid "Storage node"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:574(title)
msgid "Example Component Configuration"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:695(para)
msgid "Because Pacemaker is cluster software, the software itself handles its own availability, leveraging <literal>corosync</literal> and <literal>cman</literal> underneath."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:702(para)
msgid "If you use the GlusterFS native client, no virtual IP is needed, since the client knows all about nodes after initial connection and automatically routes around failures on the client side."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:709(para)
msgid "If you use the NFS or SMB adaptor, you will need a virtual IP on which to mount the GlusterFS volumes."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:691(para)
msgid "Pacemaker is the clustering software used to ensure the availability of services running on the controller and network nodes: <placeholder-1/>"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:825(para)
msgid "Configured to use Qpid, <phrase role=\"keep-together\"><literal>qpid_heartbeat = </literal><phrase role=\"keep-together\"><literal>10</literal>,</phrase></phrase><phrase role=\"keep-together\"> configured to use</phrase> Memcached for caching, configured to use <phrase role=\"keep-together\"><literal>libvirt</literal>,</phrase> configured to use <phrase role=\"keep-together\"><literal>neutron</literal>.</phrase>"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:833(para)
msgid "Configured <literal>nova-consoleauth</literal> to use Memcached for session management (so that it can have multiple copies and run in a load balancer)."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:837(para)
msgid "The nova API, scheduler, objectstore, cert, consoleauth, conductor, and vncproxy services are run on all controller nodes, ensuring at least one instance will be available in case of node failure. Compute is also behind HAProxy, which detects when the software fails and routes requests around the failing instance."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:842(para)
msgid "Compute's compute and conductor services, which run on the compute nodes, are only needed to run services on that node, so availability of those services is coupled tightly to the nodes that are available. As long as a compute node is up, it will have the needed services running on top of it."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:895(para)
msgid "The OpenStack Networking service is run on all controller nodes, ensuring at least one instance will be available in case of node failure. It also sits behind HAProxy, which detects if the software fails and routes requests around the failing instance."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:899(para)
msgid "OpenStack Networking's <literal>ovs-agent</literal>, <literal>l3-agent</literal>, <literal>dhcp-agent</literal>, and <literal>metadata-agent</literal> services run on the network nodes, as <literal>lsb</literal> resources inside of Pacemaker. This means that in the case of network node failure, services are kept running on another node. Finally, the <literal>ovs-agent</literal> service is also run on all compute nodes, and in case of compute node failure, the other nodes will continue to function using the copy of the service running on them."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:576(para)
msgid "<xref linkend=\"thirdparty-table\"/> and <xref linkend=\"openstack-config-table\"/> include example configuration and considerations for both third-party and OpenStack<indexterm class=\"singular\"><primary>OpenStack Networking (neutron)</primary><secondary>third-party component configuration</secondary></indexterm> components: <table rules=\"all\" xml:id=\"thirdparty-table\"><caption>Third-party component configuration</caption><col width=\"10%\"/><col width=\"30%\"/><col width=\"30%\"/><col width=\"30%\"/><thead><tr><th>Component</th><th>Tuning</th><th>Availability</th><th>Scalability</th></tr></thead><tbody><tr><td>MySQL</td><td><literal>binlog-format = row</literal></td><td>Master/master replication. However, both nodes are not used at the same time. Replication keeps all nodes as close to being up to date as possible (although the asynchronous nature of the replication means a fully consistent state is not possible). Connections to the database only happen through a Pacemaker virtual IP, ensuring that most problems that occur with master-master replication can be avoided.</td><td>Not heavily considered. Once load on the MySQL server increases enough that scalability needs to be considered, multiple masters or a master/slave setup can be used.</td></tr><tr><td>Qpid</td><td><literal>max-connections=1000</literal><literal>worker-threads=20</literal><literal>connection-backlog=10</literal>, sasl security enabled with SASL-BASIC authentication</td><td>Qpid is added as a resource to the Pacemaker software that runs on Controller nodes where Qpid is situated. This ensures only one Qpid instance is running at one time, and the node with the Pacemaker virtual IP will always be the node running Qpid.</td><td>Not heavily considered. However, Qpid can be changed to run on all controller nodes for scalability and availability purposes, and removed from Pacemaker.</td></tr><tr><td>HAProxy</td><td><literal>maxconn 3000</literal></td><td>HAProxy is a software layer-7 load balancer used to front door all clustered OpenStack API components and do SSL termination. HAProxy can be added as a resource to the Pacemaker software that runs on the Controller nodes where HAProxy is situated. This ensures that only one HAProxy instance is running at one time, and the node with the Pacemaker virtual IP will always be the node running HAProxy.</td><td>Not considered. HAProxy has small enough performance overheads that a single instance should scale enough for this level of workload. If extra scalability is needed, <literal>keepalived</literal> or other Layer-4 load balancing can be introduced to be placed in front of multiple copies of HAProxy.</td></tr><tr><td>Memcached</td><td><literal>MAXCONN=\"8192\" CACHESIZE=\"30457\"</literal></td><td>Memcached is a fast in-memory key-value cache software that is used by OpenStack components for caching data and increasing performance. Memcached runs on all controller nodes, ensuring that should one go down, another instance of Memcached is available.</td><td>Not considered. A single instance of Memcached should be able to scale to the desired workloads. If scalability is desired, HAProxy can be placed in front of Memcached (in raw <literal>tcp</literal> mode) to utilize multiple Memcached instances for scalability. 
However, this might cause cache consistency issues.</td></tr><tr><td>Pacemaker</td><td>Configured to use <phrase role=\"keep-together\"><literal>corosync</literal> and</phrase><literal>cman</literal> as a cluster communication stack/quorum manager, and as a two-node cluster.</td><placeholder-1/><td>If more nodes need to be made cluster aware, Pacemaker can scale to 64 nodes.</td></tr><tr><td>GlusterFS</td><td><literal>glusterfs</literal> performance profile \"virt\" enabled on all volumes. Volumes are set up in two-node replication.</td><td>GlusterFS is a clustered file system that is run on the storage nodes to provide persistent scalable data storage in the environment. Because all connections to gluster use the <literal>gluster</literal> native mount points, the <literal>gluster</literal> instances themselves provide availability and failover functionality.</td><td>The scalability of GlusterFS storage can be achieved by adding in more storage volumes.</td></tr></tbody></table><table rules=\"all\" xml:id=\"openstack-config-table\"><caption>OpenStack component configuration</caption><col width=\"8%\"/><col width=\"8%\"/><col width=\"25%\"/><col width=\"29%\"/><col width=\"30%\"/><thead><tr><th>Component</th><th>Node type</th><th>Tuning</th><th>Availability</th><th>Scalability</th></tr></thead><tbody><tr><td>Dashboard (horizon)</td><td>Controller</td><td>Configured to use Memcached as a session store, <literal>neutron</literal> support is enabled, <literal>can_set_mount_point = False</literal></td><td>The dashboard is run on all controller nodes, ensuring at least one instance will be available in case of node failure. It also sits behind HAProxy, which detects when the software fails and routes requests around the failing instance.</td><td>The dashboard is run on all controller nodes, so scalability can be achieved with additional controller nodes. HAProxy allows scalability for the dashboard as more nodes are added.</td></tr><tr><td>Identity (keystone)</td><td>Controller</td><td>Configured to use Memcached for caching and PKI for tokens.</td><td>Identity is run on all controller nodes, ensuring at least one instance will be available in case of node failure. Identity also sits behind HAProxy, which detects when the software fails and routes requests around the failing instance.</td><td>Identity is run on all controller nodes, so scalability can be achieved with additional controller nodes. HAProxy allows scalability for Identity as more nodes are added.</td></tr><tr><td>Image service (glance)</td><td>Controller</td><td><literal>/var/lib/glance/images</literal> is a GlusterFS native mount to a Gluster volume off the storage layer.</td><td>The Image service is run on all controller nodes, ensuring at least one instance will be available in case of node failure. It also sits behind HAProxy, which detects when the software fails and routes requests around the failing instance.</td><td>The Image service is run on all controller nodes, so scalability can be achieved with additional controller nodes. HAProxy allows scalability for the Image service as more nodes are added.</td></tr><tr><td>Compute (nova)</td><td>Controller, Compute</td><placeholder-2/><placeholder-3/><td>The nova API, scheduler, objectstore, cert, consoleauth, conductor, and vncproxy services are run on all controller nodes, so scalability can be achieved with additional controller nodes. HAProxy allows scalability for Compute as more nodes are added. 
The scalability of services running on the compute nodes (compute, conductor) is achieved linearly by adding in more compute nodes.</td></tr><tr><td>Block Storage (cinder)</td><td>Controller</td><td>Configured to use Qpid, <phrase role=\"keep-together\"><literal>qpid_heartbeat = </literal><phrase role=\"keep-together\"><literal>10</literal>,</phrase></phrase><phrase role=\"keep-together\"> configured to use a Gluster</phrase> volume from the storage layer as the backend for Block Storage, using the Gluster native client.</td><td>Block Storage API, scheduler, and volume services are run on all controller nodes, ensuring at least one instance will be available in case of node failure. Block Storage also sits behind HAProxy, which detects if the software fails and routes requests around the failing instance.</td><td>Block Storage API, scheduler, and volume services are run on all controller nodes, so scalability can be achieved with additional controller nodes. HAProxy allows scalability for Block Storage as more nodes are added.</td></tr><tr><td>OpenStack Networking (neutron)</td><td>Controller, Compute, Network</td><td>Configured to use Qpid, <phrase role=\"keep-together\"><literal>qpid_heartbeat = 10</literal></phrase>, kernel namespace support enabled, <literal>tenant_network_type = vlan</literal>, <literal>allow_overlapping_ips = true</literal>, <literal>bridge_uplinks = br-ex:em2</literal>, <literal>bridge_mappings = physnet1:br-ex</literal></td><placeholder-4/><td>The OpenStack Networking server service is run on all controller nodes, so scalability can be achieved with additional controller nodes. HAProxy allows scalability for OpenStack Networking as more nodes are added. Scalability of services running on the network nodes is not currently supported by OpenStack Networking, so they are not considered. One copy of the services should be sufficient to handle the workload. Scalability of the <literal>ovs-agent</literal> running on compute nodes is achieved by adding in more compute nodes as necessary.</td></tr></tbody></table>"
msgstr ""
#: ./doc/openstack-ops/app_usecases.xml:10(title)
msgid "Use Cases"
msgstr ""
#: ./doc/openstack-ops/app_usecases.xml:12(para)
msgid "This appendix contains a small selection of use cases from the community, with more technical detail than usual. Further examples can be found on the <link href=\"https://www.openstack.org/user-stories/\" title=\"OpenStack User Stories Website\">OpenStack website</link>."
msgstr ""
#: ./doc/openstack-ops/app_usecases.xml:19(title)
msgid "NeCTAR"
msgstr ""
#: ./doc/openstack-ops/app_usecases.xml:21(para)
msgid "Who uses it: researchers from the Australian publicly funded research sector. Use is across a wide variety of disciplines, with the purpose of instances ranging from running simple web servers to using hundreds of cores for high-throughput computing.<indexterm class=\"singular\"><primary>NeCTAR Research Cloud</primary></indexterm><indexterm class=\"singular\"><primary>use cases</primary><secondary>NeCTAR</secondary></indexterm><indexterm class=\"singular\"><primary>OpenStack community</primary><secondary>use cases</secondary><tertiary>NeCTAR</tertiary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/app_usecases.xml:40(title) ./doc/openstack-ops/app_usecases.xml:110(title) ./doc/openstack-ops/app_usecases.xml:202(title) ./doc/openstack-ops/app_usecases.xml:258(title)
msgid "Deployment"
msgstr ""
#: ./doc/openstack-ops/app_usecases.xml:42(para)
msgid "Using OpenStack Compute cells, the NeCTAR Research Cloud spans eight sites with approximately 4,000 cores per site."
msgstr ""
#: ./doc/openstack-ops/app_usecases.xml:45(para)
msgid "Each site runs a different configuration, as a resource <glossterm>cell</glossterm>s in an OpenStack Compute cells setup. Some sites span multiple data centers, some use off compute node storage with a shared file system, and some use on compute node storage with a non-shared file system. Each site deploys the Image service with an Object Storage back end. A central Identity Service, dashboard, and Compute API service are used. A login to the dashboard triggers a SAML login with Shibboleth, which creates an <glossterm>account</glossterm> in the Identity Service with a SQL back end. An Object Storage Global Cluster is used across several sites."
msgstr ""
#: ./doc/openstack-ops/app_usecases.xml:56(para)
msgid "Compute nodes have 24 to 48 cores, with at least 4 GB of RAM per core and approximately 40 GB of ephemeral storage per core."
msgstr ""
#: ./doc/openstack-ops/app_usecases.xml:59(para)
msgid "All sites are based on Ubuntu 14.04, with KVM as the hypervisor. The OpenStack version in use is typically the current stable version, with 5 to 10 percent back-ported code from trunk and modifications."
msgstr ""
#: ./doc/openstack-ops/app_usecases.xml:66(title) ./doc/openstack-ops/app_usecases.xml:166(title) ./doc/openstack-ops/app_usecases.xml:227(title) ./doc/openstack-ops/app_usecases.xml:280(title) ./doc/openstack-ops/ch_ops_resources.xml:11(title)
msgid "Resources"
msgstr ""
#: ./doc/openstack-ops/app_usecases.xml:70(link)
msgid "OpenStack.org case study"
msgstr ""
#: ./doc/openstack-ops/app_usecases.xml:75(link)
msgid "NeCTAR-RC GitHub"
msgstr ""
#: ./doc/openstack-ops/app_usecases.xml:80(link)
msgid "NeCTAR website"
msgstr ""
#: ./doc/openstack-ops/app_usecases.xml:88(title)
msgid "MIT CSAIL"
msgstr ""
#: ./doc/openstack-ops/app_usecases.xml:90(para)
msgid "Who uses it: researchers from the MIT Computer Science and Artificial Intelligence Lab.<indexterm class=\"singular\"><primary>CSAIL (Computer Science and Artificial Intelligence Lab)</primary></indexterm><indexterm class=\"singular\"><primary>MIT CSAIL (Computer Science and Artificial Intelligence Lab)</primary></indexterm><indexterm class=\"singular\"><primary>use cases</primary><secondary>MIT CSAIL</secondary></indexterm><indexterm class=\"singular\"><primary>OpenStack community</primary><secondary>use cases</secondary><tertiary>MIT CSAIL</tertiary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/app_usecases.xml:112(para)
msgid "The CSAIL cloud is currently 64 physical nodes with a total of 768 physical cores and 3,456 GB of RAM. Persistent data storage is largely outside the cloud on NFS, with cloud resources focused on compute resources. There are more than 130 users in more than 40 projects, typically running 2,0002,500 vCPUs in 300 to 400 instances."
msgstr ""
#: ./doc/openstack-ops/app_usecases.xml:118(para)
msgid "We initially deployed on Ubuntu 12.04 with the Essex release of OpenStack using FlatDHCP multi-host networking."
msgstr ""
#: ./doc/openstack-ops/app_usecases.xml:121(para)
msgid "The software stack is still Ubuntu 12.04 LTS, but now with OpenStack Havana from the Ubuntu Cloud Archive. KVM is the hypervisor, deployed using <link href=\"http://fai-project.org/\">FAI</link> and Puppet for configuration management. The FAI and Puppet combination is used lab-wide, not only for OpenStack. There is a single cloud controller node, which also acts as network controller, with the remainder of the server hardware dedicated to compute nodes."
msgstr ""
#: ./doc/openstack-ops/app_usecases.xml:129(para)
msgid "Host aggregates and instance-type extra specs are used to provide two different resource allocation ratios. The default resource allocation ratios we use are 4:1 CPU and 1.5:1 RAM. Compute-intensive workloads use instance types that require non-oversubscribed hosts where <literal>cpu_ratio</literal> and <literal>ram_ratio</literal> are both set to 1.0. Since we have hyper-threading enabled on our compute nodes, this provides one vCPU per CPU thread, or two vCPUs per physical core."
msgstr ""
#: ./doc/openstack-ops/app_usecases.xml:138(para)
msgid "With our upgrade to Grizzly in August 2013, we moved to OpenStack Networking Service, neutron (quantum at the time). Compute nodes have two-gigabit network interfaces and a separate management card for IPMI management. One network interface is used for node-to-node communications. The other is used as a trunk port for OpenStack managed VLANs. The controller node uses two bonded 10g network interfaces for its public IP communications. Big pipes are used here because images are served over this port, and it is also used to connect to iSCSI storage, back-ending the image storage and database. The controller node also has a gigabit interface that is used in trunk mode for OpenStack managed VLAN traffic. This port handles traffic to the dhcp-agent and metadata-proxy."
msgstr ""
#: ./doc/openstack-ops/app_usecases.xml:151(para)
msgid "We approximate the older <literal>nova-network</literal> multi-host HA setup by using \"provider VLAN networks\" that connect instances directly to existing publicly addressable networks and use existing physical routers as their default gateway. This means that if our network controller goes down, running instances still have their network available, and no single Linux host becomes a traffic bottleneck. We are able to do this because we have a sufficient supply of IPv4 addresses to cover all of our instances and thus don't need NAT and don't use floating IP addresses. We provide a single generic public network to all projects and additional existing VLANs on a project-by-project basis as needed. Individual projects are also allowed to create their own private GRE based networks."
msgstr ""
#: ./doc/openstack-ops/app_usecases.xml:170(link)
msgid "CSAIL homepage"
msgstr ""
#: ./doc/openstack-ops/app_usecases.xml:178(title)
msgid "DAIR"
msgstr ""
#: ./doc/openstack-ops/app_usecases.xml:180(para)
msgid "Who uses it: DAIR is an integrated virtual environment that leverages the CANARIE network to develop and test new information communication technology (ICT) and other digital technologies. It combines such digital infrastructure as advanced networking and cloud computing and storage to create an environment for developing and testing innovative ICT applications, protocols, and services; performing at-scale experimentation for deployment; and facilitating a faster time to market.<indexterm class=\"singular\"><primary>DAIR</primary></indexterm><indexterm class=\"singular\"><primary>use cases</primary><secondary>DAIR</secondary></indexterm><indexterm class=\"singular\"><primary>OpenStack community</primary><secondary>use cases</secondary><tertiary>DAIR</tertiary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/app_usecases.xml:204(para)
msgid "DAIR is hosted at two different data centers across Canada: one in Alberta and the other in Quebec. It consists of a cloud controller at each location, although, one is designated the \"master\" controller that is in charge of central authentication and quotas. This is done through custom scripts and light modifications to OpenStack. DAIR is currently running Havana."
msgstr ""
#: ./doc/openstack-ops/app_usecases.xml:211(para)
msgid "For Object Storage, each region has a swift environment."
msgstr ""
#: ./doc/openstack-ops/app_usecases.xml:213(para)
msgid "A NetApp appliance is used in each region for both block storage and instance storage. There are future plans to move the instances off the NetApp appliance and onto a distributed file system such as <glossterm>Ceph</glossterm> or GlusterFS."
msgstr ""
#: ./doc/openstack-ops/app_usecases.xml:218(para)
msgid "VlanManager is used extensively for network management. All servers have two bonded 10GbE NICs that are connected to two redundant switches. DAIR is set up to use single-node networking where the cloud controller is the gateway for all instances on all compute nodes. Internal OpenStack traffic (for example, storage traffic) does not go through the cloud controller."
msgstr ""
#: ./doc/openstack-ops/app_usecases.xml:231(link)
msgid "DAIR homepage"
msgstr ""
#: ./doc/openstack-ops/app_usecases.xml:239(title)
msgid "CERN"
msgstr ""
#: ./doc/openstack-ops/app_usecases.xml:241(para)
msgid "Who uses it: researchers at CERN (European Organization for Nuclear Research) conducting high-energy physics research.<indexterm class=\"singular\"><primary>CERN (European Organization for Nuclear Research)</primary></indexterm><indexterm class=\"singular\"><primary>use cases</primary><secondary>CERN</secondary></indexterm><indexterm class=\"singular\"><primary>OpenStack community</primary><secondary>use cases</secondary><tertiary>CERN</tertiary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/app_usecases.xml:260(para)
msgid "The environment is largely based on Scientific Linux 6, which is Red Hat compatible. We use KVM as our primary hypervisor, although tests are ongoing with Hyper-V on Windows Server 2008."
msgstr ""
#: ./doc/openstack-ops/app_usecases.xml:264(para)
msgid "We use the Puppet Labs OpenStack modules to configure Compute, Image service, Identity, and dashboard. Puppet is used widely for instance configuration, and Foreman is used as a GUI for reporting and instance provisioning."
msgstr ""
#: ./doc/openstack-ops/app_usecases.xml:269(para)
msgid "Users and groups are managed through Active Directory and imported into the Identity Service using LDAP. CLIs are available for nova and Euca2ools to do this."
msgstr ""
#: ./doc/openstack-ops/app_usecases.xml:273(para)
msgid "There are three clouds currently running at CERN, totaling about 4,700 compute nodes, with approximately 120,000 cores. The CERN IT cloud aims to expand to 300,000 cores by 2015."
msgstr ""
#: ./doc/openstack-ops/app_usecases.xml:284(link)
msgid "“OpenStack in Production: A tale of 3 OpenStack Clouds”"
msgstr ""
#: ./doc/openstack-ops/app_usecases.xml:289(link)
msgid "“Review of CERN Data Centre Infrastructure”"
msgstr ""
#: ./doc/openstack-ops/app_usecases.xml:294(link)
msgid "“CERN Cloud Infrastructure User Guide”"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:12(title)
msgid "Customization"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:14(para)
msgid "OpenStack might not do everything you need it to do out of the box. To add a new feature, you can follow different paths.<indexterm class=\"singular\"><primary>customization</primary><secondary>paths available</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:22(para)
msgid "To take the first path, you can modify the OpenStack code directly. Learn <link href=\"https://wiki.openstack.org/wiki/How_To_Contribute\">how to contribute</link>, follow the <link href=\"https://wiki.openstack.org/wiki/GerritWorkflow\">code review workflow</link>, make your changes, and contribute them back to the upstream OpenStack project. This path is recommended if the feature you need requires deep integration with an existing project. The community is always open to contributions and welcomes new functionality that follows the feature-development guidelines. This path still requires you to use DevStack for testing your feature additions, so this chapter walks you through the DevStack environment.<indexterm class=\"singular\"><primary>OpenStack community</primary><secondary>customization and</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:37(para)
msgid "For the second path, you can write new features and plug them in using changes to a configuration file. If the project where your feature would need to reside uses the Python Paste framework, you can create middleware for it and plug it in through configuration. There may also be specific ways of customizing a project, such as creating a new scheduler driver for Compute or a custom tab for the dashboard."
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:44(para)
msgid "This chapter focuses on the second path for customizing OpenStack by providing two examples for writing new features. The first example shows how to modify Object Storage (swift) middleware to add a new feature, and the second example provides a new scheduler feature for OpenStack Compute (nova). To customize OpenStack this way you need a development environment. The best way to get an environment up and running quickly is to run DevStack within your cloud."
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:53(title)
msgid "Create an OpenStack Development Environment"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:55(para)
msgid "To create a development environment, you can use DevStack. DevStack is essentially a collection of shell scripts and configuration files that builds an OpenStack development environment for you. You use it to create such an environment for developing a new feature.<indexterm class=\"singular\"><primary>customization</primary><secondary>development environment creation for</secondary></indexterm><indexterm class=\"singular\"><primary>development environments, creating</primary></indexterm><indexterm class=\"singular\"><primary>DevStack</primary><secondary>development environment creation</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:71(para)
msgid "You can find all of the documentation at the <link href=\"http://devstack.org/\">DevStack</link> website."
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:75(title)
msgid "To run DevStack for the stable Havana branch on an instance in your OpenStack cloud:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:79(para)
msgid "Boot an instance from the dashboard or the nova command-line interface (CLI) with the following parameters:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:84(para)
msgid "Name: devstack-havana"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:88(para)
msgid "Image: Ubuntu 12.04 LTS"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:92(para)
msgid "Memory Size: 4 GB RAM"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:96(para)
msgid "Disk Size: minimum 5 GB"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:100(para)
msgid "If you are using the <code>nova</code> client, specify <code>--flavor 3</code> for the <code>nova boot</code> command to get adequate memory and disk sizes."
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:106(para)
msgid "Log in and set up DevStack. Here's an example of the commands you can use to set up DevStack on a virtual machine:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:113(replaceable)
msgid "username"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:113(replaceable)
msgid "my.instance.ip.address"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:111(para)
msgid "Log in to the instance: <placeholder-1/>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:117(para)
msgid "Update the virtual machine's operating system: <placeholder-1/>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:123(para)
msgid "Install git: <placeholder-1/>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:129(para)
msgid "Clone the stable/havana branch of the <literal>devstack</literal> repository: <placeholder-1/>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:137(para)
msgid "Change to the <literal>devstack</literal> repository: <placeholder-1/>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:145(para)
msgid "(Optional) If you've logged in to your instance as the root user, you must create a \"stack\" user; otherwise you'll run into permission issues. If you've logged in as a user other than root, you can skip these steps:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:152(para)
msgid "Run the DevStack script to create the stack user:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:158(para)
msgid "Give ownership of the <literal>devstack</literal> directory to the stack user:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:165(para)
msgid "Set some permissions you can use to view the DevStack screen later:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:172(para)
msgid "Switch to the stack user:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:180(para)
msgid "Edit the <filename>localrc</filename> configuration file that controls what DevStack will deploy. Copy the example <filename>localrc</filename> file at the end of this section (<xref linkend=\"localrc\"/>):"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:189(para)
msgid "Run the stack script that will install OpenStack: <placeholder-1/>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:195(para)
msgid "When the stack script is done, you can open the screen session it started to view all of the running OpenStack services: <placeholder-1/>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:202(para)
msgid "Press <keycombo><keycap>Ctrl</keycap><keycap>A</keycap></keycombo> followed by 0 to go to the first <literal>screen</literal> window."
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:214(para)
msgid "The <code>stack.sh</code> script takes a while to run. Perhaps you can take this opportunity to <link href=\"https://www.openstack.org/join/\">join the OpenStack Foundation</link>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:221(para)
msgid "<literal>Screen</literal> is a useful program for viewing many related services at once. For more information, see the <link href=\"http://aperiodic.net/screen/quick_reference\">GNU screen quick reference</link>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:229(para)
msgid "Now that you have an OpenStack development environment, you're free to hack around without worrying about damaging your production deployment. <xref linkend=\"localrc\"/> provides a working environment for running OpenStack Identity, Compute, Block Storage, Image service, the OpenStack dashboard, and Object Storage with the stable/havana branches as the starting point."
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:237(title)
msgid "localrc"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:282(title)
msgid "Customizing Object Storage (Swift) Middleware"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:284(para)
msgid "OpenStack Object Storage, known as swift when reading the code, is based on the Python <link href=\"http://pythonpaste.org/\">Paste</link> framework. The best introduction to its architecture is <link href=\"http://pythonpaste.org/do-it-yourself-framework.html\">A Do-It-Yourself Framework</link>. Because of the swift project's use of this framework, you are able to add features to a project by placing some custom code in a project's pipeline without having to change any of the core code.<indexterm class=\"singular\"><primary>Paste framework</primary></indexterm><indexterm class=\"singular\"><primary>Python</primary></indexterm><indexterm class=\"singular\"><primary>swift</primary><secondary>swift middleware</secondary></indexterm><indexterm class=\"singular\"><primary>Object Storage</primary><secondary>customization of</secondary></indexterm><indexterm class=\"singular\"><primary>customization</primary><secondary>Object Storage</secondary></indexterm><indexterm class=\"singular\"><primary>DevStack</primary><secondary>customizing Object Storage (swift)</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:313(para)
msgid "Imagine a scenario where you have public access to one of your containers, but what you really want is to restrict access to that to a set of IPs based on a whitelist. In this example, we'll create a piece of middleware for swift that allows access to a container from only a set of IP addresses, as determined by the container's metadata items. Only those IP addresses that you explicitly whitelist using the container's metadata will be able to access the container."
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:322(para)
msgid "This example is for illustrative purposes only. It should not be used as a container IP whitelist solution without further development and extensive security testing.<indexterm class=\"singular\"><primary>security issues</primary><secondary>middleware example</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:331(para)
msgid "When you join the screen session that <code>stack.sh</code> starts with <code>screen -r stack</code>, you see a screen for each service running, which can be a few or several, depending on how many services you configured DevStack to run."
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:336(para)
msgid "The asterisk * indicates which screen window you are viewing. This example shows we are viewing the key (for keystone) screen window:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:341(para)
msgid "The purpose of the screen windows are as follows:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:345(code) ./doc/openstack-ops/ch_ops_customize.xml:824(code)
msgid "shell"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:348(para) ./doc/openstack-ops/ch_ops_customize.xml:827(para)
msgid "A shell where you can get some work done"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:353(code)
msgid "key*"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:356(para) ./doc/openstack-ops/ch_ops_customize.xml:835(para)
msgid "The keystone service"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:361(code) ./doc/openstack-ops/ch_ops_customize.xml:840(code) ./doc/openstack-ops/ch_ops_log_monitor.xml:95(para)
msgid "horizon"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:364(para) ./doc/openstack-ops/ch_ops_customize.xml:843(para)
msgid "The horizon dashboard web application"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:369(code)
msgid "s-{name}"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:372(para)
msgid "The swift services"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:378(title)
msgid "To create the middleware and plug it in through Paste configuration:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:381(para)
msgid "All of the code for OpenStack lives in <code>/opt/stack</code>. Go to the swift directory in the <code>shell</code> screen and edit your middleware module."
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:386(para)
msgid "Change to the directory where Object Storage is installed:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:393(para)
msgid "Create the <literal>ip_whitelist.py</literal> Python source code file:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:400(para)
msgid "Copy the code in <xref linkend=\"ip_whitelist\"/> into <filename>ip_whitelist.py</filename>. The following code is a middleware example that restricts access to a container based on IP address as explained at the beginning of the section. Middleware passes the request on to another application. This example uses the swift \"swob\" library to wrap Web Server Gateway Interface (WSGI) requests and responses into objects for swift to interact with. When you're done, save and close the file."
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:410(title)
msgid "ip_whitelist.py"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:507(para)
msgid "There is a lot of useful information in <code>env</code> and <code>conf</code> that you can use to decide what to do with the request. To find out more about what properties are available, you can insert the following log statement into the <code>__init__</code> method:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:515(para)
msgid "and the following log statement into the <code>__call__</code> method:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:522(para)
msgid "To plug this middleware into the swift Paste pipeline, you edit one configuration file, <filename>/etc/swift/proxy-server.conf</filename>:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:530(para)
msgid "Find the <code>[filter:ratelimit]</code> section in <filename>/etc/swift/proxy-server.conf</filename>, and copy in the following configuration section after it:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:546(para)
msgid "Find the <code>[pipeline:main]</code> section in <filename>/etc/swift/proxy-server.conf</filename>, and add <code>ip_whitelist</code> after ratelimit to the list like so. When you're done, save and close the file:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:556(para)
msgid "Restart the <literal>swift proxy</literal> service to make swift use your middleware. Start by switching to the <literal>swift-proxy</literal> screen:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:562(para)
msgid "Press <keycombo><keycap>Ctrl</keycap><keycap>A</keycap></keycombo> followed by <keycap>3</keycap>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:570(para) ./doc/openstack-ops/ch_ops_customize.xml:1069(para)
msgid "Press <keycombo><keycap>Ctrl</keycap><keycap>C</keycap></keycombo> to kill the service."
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:578(para) ./doc/openstack-ops/ch_ops_customize.xml:1077(para)
msgid "Press Up Arrow to bring up the last command."
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:582(para) ./doc/openstack-ops/ch_ops_customize.xml:1081(para)
msgid "Press Enter to run it."
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:588(para)
msgid "Test your middleware with the <code>swift</code> CLI. Start by switching to the shell screen and finish by switching back to the <code>swift-proxy</code> screen to check the log output:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:594(para)
msgid "Press <keycombo><keycap>Ctrl</keycap><keycap>A</keycap></keycombo> followed by 0."
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:602(para)
msgid "Make sure you're in the <literal>devstack</literal> directory:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:609(para)
msgid "Source openrc to set up your environment variables for the CLI:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:616(para)
msgid "Create a container called <literal>middleware-test</literal>:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:623(para)
msgid "Press <keycombo><keycap>Ctrl</keycap><keycap>A</keycap></keycombo> followed by <keycap>3</keycap> to check the log output."
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:634(para)
msgid "Among the log statements you'll see the lines:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:639(para)
msgid "These two statements are produced by our middleware and show that the request was sent from our DevStack instance and was allowed."
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:645(para)
msgid "Test the middleware from outside DevStack on a remote machine that has access to your DevStack instance:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:650(para)
msgid "Install the <code>keystone</code> and <code>swift</code> clients on your local machine:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:657(para)
msgid "Attempt to list the objects in the <literal>middleware-test</literal> container:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:670(para)
msgid "Press <keycombo><keycap>Ctrl</keycap><keycap>A</keycap></keycombo> followed by <keycap>3</keycap> to check the log output. Look at the swift log statements again, and among the log statements, you'll see the lines:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:685(para)
msgid "Here we can see that the request was denied because the remote IP address wasn't in the set of allowed IPs."
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:690(para)
msgid "Back in your DevStack instance on the shell screen, add some metadata to your container to allow the request from the remote machine:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:696(para)
msgid "Press <keycombo><keycap>Ctrl</keycap><keycap>A</keycap></keycombo> followed by <keycap>0</keycap>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:704(para)
msgid "Add metadata to the container to allow the IP:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:710(para)
msgid "Now try the command from Step 10 again and it succeeds. There are no objects in the container, so there is nothing to list; however, there is also no error to report."
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:719(para)
msgid "Functional testing like this is not a replacement for proper unit and integration testing, but it serves to get you started.<indexterm class=\"singular\"><primary>testing</primary><secondary>functional testing</secondary></indexterm><indexterm class=\"singular\"><primary>functional testing</primary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:730(para)
msgid "You can follow a similar pattern in other projects that use the Python Paste framework. Simply create a middleware module and plug it in through configuration. The middleware runs in sequence as part of that project's pipeline and can call out to other services as necessary. No project core code is touched. Look for a <code>pipeline</code> value in the project's <code>conf</code> or <code>ini</code> configuration files in <code>/etc/&lt;project&gt;</code> to identify projects that use Paste."
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:739(para)
msgid "When your middleware is done, we encourage you to open source it and let the community know on the OpenStack mailing list. Perhaps others need the same functionality. They can use your code, provide feedback, and possibly contribute. If enough support exists for it, perhaps you can propose that it be added to the official swift <link href=\"https://github.com/openstack/swift/tree/master/swift/common/middleware\">middleware</link>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:748(title)
msgid "Customizing the OpenStack Compute (nova) Scheduler"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:750(para)
msgid "Many OpenStack projects allow for customization of specific features using a driver architecture. You can write a driver that conforms to a particular interface and plug it in through configuration. For example, you can easily plug in a new scheduler for Compute. The existing schedulers for Compute are feature full and well documented at <link href=\"http://docs.openstack.org/kilo/config-reference/content/section_compute-scheduler.html\">Scheduling</link>. However, depending on your user's use cases, the existing schedulers might not meet your requirements. You might need to create a new scheduler.<indexterm class=\"singular\"><primary>customization</primary><secondary>OpenStack Compute (nova) Scheduler</secondary></indexterm><indexterm class=\"singular\"><primary>schedulers</primary><secondary>customization of</secondary></indexterm><indexterm class=\"singular\"><primary>DevStack</primary><secondary>customizing OpenStack Compute (nova) scheduler</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:772(para)
msgid "To create a scheduler, you must inherit from the class <code>nova.scheduler.driver.Scheduler</code>. Of the five methods that you can override, you <emphasis>must</emphasis> override the two methods marked with an asterisk (*) below:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:779(code)
msgid "update_service_capabilities"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:783(code)
msgid "hosts_up"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:787(code)
msgid "group_hosts"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:791(para)
msgid "* <code>schedule_run_instance</code>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:795(para)
msgid "* <code>select_destinations</code>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:799(para)
msgid "To demonstrate customizing OpenStack, we'll create an example of a Compute scheduler that randomly places an instance on a subset of hosts, depending on the originating IP address of the request and the prefix of the hostname. Such an example could be useful when you have a group of users on a subnet and you want all of their instances to start within some subset of your hosts."
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:807(para)
msgid "This example is for illustrative purposes only. It should not be used as a scheduler for Compute without further development and <phrase role=\"keep-together\">testing</phrase>.<indexterm class=\"singular\"><primary>security issues</primary><secondary>scheduler example</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:816(para)
msgid "When you join the screen session that <code>stack.sh</code> starts with <code>screen -r stack</code>, you are greeted with many screen windows:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:832(code) ./doc/openstack-ops/ch_ops_projects_users.xml:396(replaceable)
msgid "key"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:848(code)
msgid "n-{name}"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:851(para)
msgid "The nova services"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:856(code)
msgid "n-sch"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:859(para)
msgid "The nova scheduler service"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:865(title)
msgid "To create the scheduler and plug it in through configuration:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:869(para)
msgid "The code for OpenStack lives in <code>/opt/stack</code>, so go to the <literal>nova</literal> directory and edit your scheduler module. Change to the directory where <literal>nova</literal> is installed:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:878(para)
msgid "Create the <filename>ip_scheduler.py</filename> Python source code file:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:885(para)
msgid "The code in <xref linkend=\"ip_scheduler\"/> is a driver that will schedule servers to hosts based on IP address as explained at the beginning of the section. Copy the code into <filename>ip_scheduler.py</filename>. When you're done, save and close the file."
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:892(title)
msgid "ip_scheduler.py"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:1020(para)
msgid "There is a lot of useful information in <code>context</code>, <code>request_spec</code>, and <code>filter_properties</code> that you can use to decide where to schedule the instance. To find out more about what properties are available, you can insert the following log statements into the <code>schedule_run_instance</code> method of the scheduler above:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:1033(para)
msgid "To plug this scheduler into nova, edit one configuration file, <filename>/etc/nova/nova.conf</filename>:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:1040(para)
msgid "Find the <code>scheduler_driver</code> config and change it like so:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:1047(para)
msgid "Restart the nova scheduler service to make nova use your scheduler. Start by switching to the <code>n-sch</code> screen:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:1052(para)
msgid "Press <keycombo><keycap>Ctrl</keycap><keycap>A</keycap></keycombo> followed by <keycap>9</keycap>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:1060(para)
msgid "Press <keycombo><keycap>Ctrl</keycap><keycap>A</keycap></keycombo> followed by <keycap>N</keycap> until you reach the <code>n-sch</code> screen."
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:1087(para)
msgid "Test your scheduler with the nova CLI. Start by switching to the <code>shell</code> screen and finish by switching back to the <code>n-sch</code> screen to check the log output:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:1093(para)
msgid "Press <keycombo><keycap>Ctrl</keycap><keycap>A</keycap></keycombo> followed by <keycap>0</keycap>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:1101(para)
msgid "Make sure you're in the <filename>devstack</filename> directory:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:1108(para)
msgid "Source <filename>openrc</filename> to set up your environment variables for the CLI:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:1115(para)
msgid "Put the image ID for the only installed image into an environment variable:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:1122(para)
msgid "Boot a test server:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:1130(para)
msgid "Switch back to the <code>n-sch</code> screen. Among the log statements, you'll see the line:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:1141(para)
msgid "Functional testing like this is not a replacement for proper unit and integration testing, but it serves to get you started."
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:1145(para)
msgid "A similar pattern can be followed in other projects that use the driver architecture. Simply create a module and class that conform to the driver interface and plug it in through configuration. Your code runs when that feature is used and can call out to other services as necessary. No project core code is touched. Look for a \"driver\" value in the project's <filename>.conf</filename> configuration files in <code>/etc/&lt;project&gt;</code> to identify projects that use a driver architecture."
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:1154(para)
msgid "When your scheduler is done, we encourage you to open source it and let the community know on the OpenStack mailing list. Perhaps others need the same functionality. They can use your code, provide feedback, and possibly contribute. If enough support exists for it, perhaps you can propose that it be added to the official Compute <link href=\"https://github.com/openstack/nova/tree/master/nova/scheduler\">schedulers</link>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:1163(title)
msgid "Customizing the Dashboard (Horizon)"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:1165(para)
msgid "The dashboard is based on the Python <link href=\"https://www.djangoproject.com/\">Django</link> web application framework. The best guide to customizing it has already been written and can be found at <link href=\"http://docs.openstack.org/developer/horizon/topics/tutorial.html\">Building on Horizon</link>.<indexterm class=\"singular\"><primary>Django</primary></indexterm><indexterm class=\"singular\"><primary>Python</primary></indexterm><indexterm class=\"singular\"><primary>dashboard</primary></indexterm><indexterm class=\"singular\"><primary>DevStack</primary><secondary>customizing dashboard</secondary></indexterm><indexterm class=\"singular\"><primary>customization</primary><secondary>dashboard</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:1187(title) ./doc/openstack-ops/ch_arch_storage.xml:789(title) ./doc/openstack-ops/ch_arch_compute_nodes.xml:610(title) ./doc/openstack-ops/ch_arch_network_design.xml:526(title) ./doc/openstack-ops/ch_arch_provision.xml:368(title)
msgid "Conclusion"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:1189(para)
msgid "When operating an OpenStack cloud, you may discover that your users can be quite demanding. If OpenStack doesn't do what your users need, it may be up to you to fulfill those requirements. This chapter provided you with some options for customization and gave you the tools you need to get started."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:12(title)
msgid "Upstream OpenStack"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:14(para)
msgid "OpenStack is founded on a thriving community that is a source of help and welcomes your contributions. This chapter details some of the ways you can interact with the others involved."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:19(title)
msgid "Getting Help"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:21(para)
msgid "There are several avenues available for seeking assistance. The quickest way is to help the community help you. Search the Q&amp;A sites, mailing list archives, and bug lists for issues similar to yours. If you can't find anything, follow the directions for reporting bugs or use one of the channels for support, which are listed below.<indexterm class=\"singular\"><primary>mailing lists</primary></indexterm><indexterm class=\"singular\"><primary>OpenStack</primary><secondary>documentation</secondary></indexterm><indexterm class=\"singular\"><primary>help, resources for</primary></indexterm><indexterm class=\"singular\"><primary>troubleshooting</primary><secondary>getting help</secondary></indexterm><indexterm class=\"singular\"><primary>OpenStack community</primary><secondary>getting help from</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:44(para)
msgid "Your first port of call should be the official OpenStack documentation, found on <link href=\"http://docs.openstack.org\"/>. You can get questions answered on <link href=\"http://ask.openstack.org\"/>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:49(para)
msgid "<link href=\"https://wiki.openstack.org/wiki/Mailing_Lists\">Mailing lists</link> are also a great place to get help. The wiki page has more information about the various lists. As an operator, the main lists you should be aware of are:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:56(link)
msgid "General list"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:60(para)
msgid "<emphasis>openstack@lists.openstack.org</emphasis>. The scope of this list is the current state of OpenStack. This is a very high-traffic mailing list, with many, many emails per day."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:67(link)
msgid "Operators list"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:71(para)
msgid "<emphasis>openstack-operators@lists.openstack.org.</emphasis> This list is intended for discussion among existing OpenStack cloud operators, such as yourself. Currently, this list is relatively low traffic, on the order of one email a day."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:79(link)
msgid "Development list"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:83(para)
msgid "<emphasis>openstack-dev@lists.openstack.org</emphasis>. The scope of this list is the future state of OpenStack. This is a high-traffic mailing list, with multiple emails per day."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:90(para)
msgid "We recommend that you subscribe to the general list and the operator list, although you must set up filters to manage the volume for the general list. You'll also find links to the mailing list archives on the mailing list wiki page, where you can search through the discussions."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:96(para)
msgid "<link href=\"https://wiki.openstack.org/wiki/IRC\">Multiple IRC channels</link> are available for general questions and developer discussions. The general discussion channel is #openstack on <emphasis>irc.freenode.net</emphasis>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:103(title)
msgid "Reporting Bugs"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:105(para)
msgid "As an operator, you are in a very good position to report unexpected behavior with your cloud. Since OpenStack is flexible, you may be the only individual to report a particular issue. Every issue is important to fix, so it is essential to learn how to easily submit a bug report.<indexterm class=\"singular\"><primary>maintenance/debugging</primary><secondary>reporting bugs</secondary></indexterm><indexterm class=\"singular\"><primary>bugs, reporting</primary></indexterm><indexterm class=\"singular\"><primary>OpenStack community</primary><secondary>reporting bugs</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:121(para)
msgid "All OpenStack projects use <link href=\"https://launchpad.net/\">Launchpad</link> for bug tracking. You'll need to create an account on Launchpad before you can submit a bug report."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:126(para)
msgid "Once you have a Launchpad account, reporting a bug is as simple as identifying the project or projects that are causing the issue. Sometimes this is more difficult than expected, but those working on the bug triage are happy to help relocate issues if they are not in the right place initially:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:134(para)
msgid "Report a bug in <link href=\"https://bugs.launchpad.net/nova/+filebug/+login\">nova</link>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:139(para)
msgid "Report a bug in <link href=\"https://bugs.launchpad.net/python-novaclient/+filebug/+login\">python-novaclient</link>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:144(para)
msgid "Report a bug in <link href=\"https://bugs.launchpad.net/swift/+filebug/+login\">swift</link>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:149(para)
msgid "Report a bug in <link href=\"https://bugs.launchpad.net/python-swiftclient/+filebug/+login\">python-swiftclient</link>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:154(para)
msgid "Report a bug in <link href=\"https://bugs.launchpad.net/glance/+filebug/+login\">glance</link>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:159(para)
msgid "Report a bug in <link href=\"https://bugs.launchpad.net/python-glanceclient/+filebug/+login\">python-glanceclient</link>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:164(para)
msgid "Report a bug in <link href=\"https://bugs.launchpad.net/keystone/+filebug/+login\">keystone</link>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:169(para)
msgid "Report a bug in <link href=\"https://bugs.launchpad.net/python-keystoneclient/+filebug/+login\">python-keystoneclient</link>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:174(para)
msgid "Report a bug in <link href=\"https://bugs.launchpad.net/neutron/+filebug/+login\">neutron</link>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:179(para)
msgid "Report a bug in <link href=\"https://bugs.launchpad.net/python-neutronclient/+filebug/+login\">python-neutronclient</link>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:184(para)
msgid "Report a bug in <link href=\"https://bugs.launchpad.net/cinder/+filebug/+login\">cinder</link>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:189(para)
msgid "Report a bug in <link href=\"https://bugs.launchpad.net/python-cinderclient/+filebug/+login\">python-cinderclient</link>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:194(para)
msgid "Report a bug in <link href=\"https://bugs.launchpad.net/horizon/+filebug/+login\">horizon</link>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:199(para)
msgid "Report a bug with the <link href=\"https://bugs.launchpad.net/openstack-manuals/+filebug/+login\">documentation</link>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:204(para)
msgid "Report a bug with the <link href=\"https://bugs.launchpad.net/openstack-api-site/+filebug/+login\">API documentation</link>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:209(para)
msgid "To write a good bug report, the following process is essential. First, search for the bug to make sure there is no bug already filed for the same issue. If you find one, be sure to click on \"This bug affects X people. Does this bug affect you?\" If you can't find the issue, then enter the details of your report. It should at least include:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:217(para)
msgid "The release, or milestone, or commit ID corresponding to the software that you are running"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:222(para)
msgid "The operating system and version where you've identified the bug"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:227(para)
msgid "Steps to reproduce the bug, including what went wrong"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:231(para)
msgid "Description of the expected results instead of what you saw"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:236(para)
msgid "Portions of your log files so that you include only relevant excerpts"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:241(para)
msgid "When you do this, the bug is created with:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:245(para)
msgid "Status: <emphasis>New</emphasis>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:249(para)
msgid "In the bug comments, you can contribute instructions on how to fix a given bug, and set it to <emphasis>Triaged</emphasis>. Or you can directly fix it: assign the bug to yourself, set it to <emphasis>In progress</emphasis>, branch the code, implement the fix, and propose your change for merging. But let's not get ahead of ourselves; there are bug triaging tasks as well."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:257(title)
msgid "Confirming and Prioritizing"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:259(para)
msgid "This stage is about checking that a bug is real and assessing its impact. Some of these steps require bug supervisor rights (usually limited to core teams). If the bug lacks information to properly reproduce or assess the importance of the bug, the bug is set to:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:266(para)
msgid "Status: <emphasis>Incomplete</emphasis>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:270(para)
msgid "Once you have reproduced the issue (or are 100 percent confident that this is indeed a valid bug) and have permissions to do so, set:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:276(para)
msgid "Status: <emphasis>Confirmed</emphasis>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:280(para)
msgid "Core developers also prioritize the bug, based on its impact:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:285(para)
msgid "Importance: &lt;Bug impact&gt;"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:289(para)
msgid "The bug impacts are categorized as follows:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:295(para)
msgid "<emphasis>Critical</emphasis> if the bug prevents a key feature from working properly (regression) for all users (or without a simple workaround) or results in data loss"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:301(para)
msgid "<emphasis>High</emphasis> if the bug prevents a key feature from working properly for some users (or with a workaround)"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:306(para)
msgid "<emphasis>Medium</emphasis> if the bug prevents a secondary feature from working properly"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:311(para)
msgid "<emphasis>Low</emphasis> if the bug is mostly cosmetic"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:315(para)
msgid "<emphasis>Wishlist</emphasis> if the bug is not really a bug but rather a welcome change in behavior"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:320(para)
msgid "If the bug contains the solution, or a patch, set the bug status to <emphasis>Triaged</emphasis>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:325(title)
msgid "Bug Fixing"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:327(para)
msgid "At this stage, a developer works on a fix. During that time, to avoid duplicating the work, the developer should set:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:332(para)
msgid "Status: <emphasis>In Progress</emphasis>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:336(para)
msgid "Assignee: &lt;yourself&gt;"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:340(para)
msgid "When the fix is ready, the developer proposes a change and gets the change reviewed."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:345(title)
msgid "After the Change Is Accepted"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:347(para)
msgid "After the change is reviewed, accepted, and lands in master, it automatically moves to:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:352(para)
msgid "Status: <emphasis>Fix Committed</emphasis>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:356(para)
msgid "When the fix makes it into a milestone or release branch, it automatically moves to:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:361(para)
msgid "Milestone: Milestone the bug was fixed in"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:365(para)
msgid "Status: <emphasis>Fix Released</emphasis>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:372(title)
msgid "Join the OpenStack Community"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:374(para)
msgid "Since you've made it this far in the book, you should consider becoming an official individual member of the community and <link href=\"https://www.openstack.org/join/\">join the OpenStack Foundation</link>. The OpenStack Foundation is an independent body providing shared resources to help achieve the OpenStack mission by protecting, empowering, and promoting OpenStack software and the community around it, including users, developers, and the entire ecosystem. We all share the responsibility to make this community the best it can possibly be, and signing up to be a member is the first step to participating. Like the software, individual membership within the OpenStack Foundation is free and accessible to anyone.<indexterm class=\"singular\"><primary>OpenStack community</primary><secondary>joining</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:392(title)
msgid "How to Contribute to the Documentation"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:394(para)
msgid "OpenStack documentation efforts encompass operator and administrator docs, API docs, and user docs.<indexterm class=\"singular\"><primary>OpenStack community</primary><secondary>contributing to</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:401(para)
msgid "The genesis of this book was an in-person event, but now that the book is in your hands, we want you to contribute to it. OpenStack documentation follows the coding principles of iterative work, with bug logging, investigating, and fixing."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:406(para)
msgid "Just like the code, <link href=\"http://docs.openstack.org\"/> is updated constantly using the Gerrit review system, with source stored in GitHub in the <link href=\"https://github.com/openstack/openstack-manuals/\">openstack-manuals repository</link> and the <link href=\"https://github.com/openstack/api-site/\">api-site repository</link>, in DocBook format."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:413(para)
msgid "To review the documentation before it's published, go to the OpenStack Gerrit server at <link href=\"http://review.openstack.org\"/> and search for <link href=\"https://review.openstack.org/#/q/status:open+project:openstack/openstack-manuals,n,z\">project:openstack/openstack-manuals</link> or <link href=\"https://review.openstack.org/#/q/status:open+project:openstack/api-site,n,z\">project:openstack/api-site</link>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:420(para)
msgid "See the <link href=\"https://wiki.openstack.org/wiki/How_To_Contribute\">How To Contribute page on the wiki</link> for more information on the steps you need to take to submit your first documentation review or change."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:426(title)
msgid "Security Information"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:428(para)
msgid "As a community, we take security very seriously and follow a specific process for reporting potential issues. We vigilantly pursue fixes and regularly eliminate exposures. You can report security issues you discover through this specific process. The OpenStack Vulnerability Management Team is a very small group of experts in vulnerability management drawn from the OpenStack community. The team's job is facilitating the reporting of vulnerabilities, coordinating security fixes and handling progressive disclosure of the vulnerability information. Specifically, the team is responsible for the following functions:<indexterm class=\"singular\"><primary>vulnerability tracking/management</primary></indexterm><indexterm class=\"singular\"><primary>security issues</primary><secondary>reporting/fixing vulnerabilities</secondary></indexterm><indexterm class=\"singular\"><primary>OpenStack community</primary><secondary>security information</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:451(term)
msgid "Vulnerability management"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:454(para)
msgid "All vulnerabilities discovered by community members (or users) can be reported to the team."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:460(term)
msgid "Vulnerability tracking"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:463(para)
msgid "The team will curate a set of vulnerability related issues in the issue tracker. Some of these issues are private to the team and the affected product leads, but once remediation is in place, all vulnerabilities are public."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:471(term)
msgid "Responsible disclosure"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:474(para)
msgid "As part of our commitment to work with the security community, the team ensures that proper credit is given to security researchers who responsibly report issues in OpenStack."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:481(para)
msgid "We provide two ways to report issues to the OpenStack Vulnerability Management Team, depending on how sensitive the issue is:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:486(para)
msgid "Open a bug in Launchpad and mark it as a \"security bug.\" This makes the bug private and accessible to only the Vulnerability Management Team."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:492(para)
msgid "If the issue is extremely sensitive, send an encrypted email to one of the team's members. Find their GPG keys at <link href=\"http://www.openstack.org/projects/openstack-security/\">OpenStack Security</link>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:499(para)
msgid "You can find the full list of security-oriented teams you can join at <link href=\"https://wiki.openstack.org/wiki/SecurityTeams\">Security Teams</link>. The vulnerability management process is fully documented at <link href=\"https://wiki.openstack.org/wiki/VulnerabilityManagement\">Vulnerability Management</link>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:507(title)
msgid "Finding Additional Information"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:509(para)
msgid "In addition to this book, there are many other sources of information about OpenStack. The <link href=\"http://www.openstack.org/\">OpenStack website</link> is a good starting point, with <link href=\"http://docs.openstack.org/\">OpenStack Docs</link> and <link href=\"http://developer.openstack.org/\">OpenStack API Docs</link> providing technical documentation about OpenStack. The <link href=\"https://wiki.openstack.org/wiki/Main_Page\">OpenStack wiki</link> contains a lot of general information that cuts across the OpenStack projects, including a list of <link href=\"https://wiki.openstack.org/wiki/OperationsTools\">recommended tools</link>. Finally, there are a number of blogs aggregated at <link href=\"http://planet.openstack.org/\">Planet OpenStack</link>.<indexterm class=\"singular\"><primary>OpenStack community</primary><secondary>additional information</secondary></indexterm>"
msgstr ""
#. When image changes, this message will be marked fuzzy or untranslated for you.
#. It doesn't matter what you translate it to: it's not used at all.
#: ./doc/openstack-ops/part_architecture.xml:82(None)
msgid "@@image: 'http://git.openstack.org/cgit/openstack/operations-guide/plain/doc/openstack-ops/figures/osog_0001.png'; md5=THIS FILE DOESN'T EXIST"
msgstr ""
#: ./doc/openstack-ops/part_architecture.xml:10(title)
msgid "Architecture"
msgstr ""
#: ./doc/openstack-ops/part_architecture.xml:13(para)
msgid "Designing an OpenStack cloud is a great achievement. It requires a robust understanding of the requirements and needs of the cloud's users to determine the best possible configuration to meet them. OpenStack provides a great deal of flexibility to achieve your needs, and this part of the book aims to shine light on many of the decisions you need to make during the process."
msgstr ""
#: ./doc/openstack-ops/part_architecture.xml:20(para)
msgid "To design, deploy, and configure OpenStack, administrators must understand the logical architecture. A diagram can help you envision all the integrated services within OpenStack and how they interact with each other.<indexterm class=\"singular\"><primary>modules, types of</primary></indexterm><indexterm class=\"singular\"><primary>OpenStack</primary><secondary>module types in</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/part_architecture.xml:31(para)
msgid "OpenStack modules are one of the following types:"
msgstr ""
#: ./doc/openstack-ops/part_architecture.xml:35(term)
msgid "Daemon"
msgstr ""
#: ./doc/openstack-ops/part_architecture.xml:38(para)
msgid "Runs as a background process. On Linux platforms, a daemon is usually installed as a service.<indexterm class=\"singular\"><primary>daemons</primary><secondary>basics of</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/part_architecture.xml:48(term)
msgid "Script"
msgstr ""
#: ./doc/openstack-ops/part_architecture.xml:51(para)
msgid "Installs a virtual environment and runs tests.<indexterm class=\"singular\"><primary>script modules</primary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/part_architecture.xml:59(term)
msgid "Command-line interface (CLI)"
msgstr ""
#: ./doc/openstack-ops/part_architecture.xml:62(para)
msgid "Enables users to submit API calls to OpenStack services through commands.<indexterm class=\"singular\"><primary>Command-line interface (CLI)</primary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/part_architecture.xml:70(para)
msgid "As shown, end users can interact through the dashboard, CLIs, and APIs. All services authenticate through a common Identity Service, and individual services interact with each other through public APIs, except where privileged administrator commands are necessary. <xref linkend=\"openstack-diagram\"/> shows the most common, but not the only logical architecture for an OpenStack cloud."
msgstr ""
#: ./doc/openstack-ops/part_architecture.xml:77(title)
msgid "OpenStack Logical Architecture (<link href=\"http://docs.openstack.org/openstack-ops/content/figures/2/figures/osog_0001.png\"/>)"
msgstr ""
#: ./doc/openstack-ops/ch_ops_resources.xml:16(link) ./doc/openstack-ops/preface_ops.xml:171(link)
msgid "Installation Guide for openSUSE 13.2 and SUSE Linux Enterprise Server 12"
msgstr ""
#: ./doc/openstack-ops/ch_ops_resources.xml:21(link) ./doc/openstack-ops/preface_ops.xml:177(link)
msgid "Installation Guide for Red Hat Enterprise Linux 7, CentOS 7, and Fedora 21"
msgstr ""
#: ./doc/openstack-ops/ch_ops_resources.xml:27(link) ./doc/openstack-ops/preface_ops.xml:183(link)
msgid "Installation Guide for Ubuntu 14.04 (LTS) Server"
msgstr ""
#: ./doc/openstack-ops/ch_ops_resources.xml:31(link) ./doc/openstack-ops/preface_ops.xml:202(link)
msgid "OpenStack Cloud Administrator Guide"
msgstr ""
#: ./doc/openstack-ops/ch_ops_resources.xml:35(emphasis)
msgid "OpenStack Cloud Computing Cookbook"
msgstr ""
#: ./doc/openstack-ops/ch_ops_resources.xml:35(link)
msgid "<placeholder-1/> (Packt Publishing)"
msgstr ""
#: ./doc/openstack-ops/ch_ops_resources.xml:41(title)
msgid "Cloud (General)"
msgstr ""
#: ./doc/openstack-ops/ch_ops_resources.xml:44(link)
msgid "“The NIST Definition of Cloud Computing”"
msgstr ""
#: ./doc/openstack-ops/ch_ops_resources.xml:50(title)
msgid "Python"
msgstr ""
#: ./doc/openstack-ops/ch_ops_resources.xml:53(emphasis)
msgid "Dive Into Python"
msgstr ""
#: ./doc/openstack-ops/ch_ops_resources.xml:53(link) ./doc/openstack-ops/ch_ops_resources.xml:103(link)
msgid "<placeholder-1/> (Apress)"
msgstr ""
#: ./doc/openstack-ops/ch_ops_resources.xml:59(title) ./doc/openstack-ops/ch_arch_compute_nodes.xml:599(title)
msgid "Networking"
msgstr ""
#: ./doc/openstack-ops/ch_ops_resources.xml:62(emphasis)
msgid "TCP/IP Illustrated, Volume 1: The Protocols, 2/E"
msgstr ""
#: ./doc/openstack-ops/ch_ops_resources.xml:62(link)
msgid "<placeholder-1/> (Pearson)"
msgstr ""
#: ./doc/openstack-ops/ch_ops_resources.xml:67(emphasis)
msgid "The TCP/IP Guide"
msgstr ""
#: ./doc/openstack-ops/ch_ops_resources.xml:67(link) ./doc/openstack-ops/ch_ops_resources.xml:90(link)
msgid "<placeholder-1/> (No Starch Press)"
msgstr ""
#: ./doc/openstack-ops/ch_ops_resources.xml:71(link)
msgid "“A <placeholder-1/> Tutorial and Primer”"
msgstr ""
#: ./doc/openstack-ops/ch_ops_resources.xml:77(title)
msgid "Systems Administration"
msgstr ""
#: ./doc/openstack-ops/ch_ops_resources.xml:80(emphasis)
msgid "UNIX and Linux Systems Administration Handbook"
msgstr ""
#: ./doc/openstack-ops/ch_ops_resources.xml:80(link)
msgid "<placeholder-1/> (Prentice Hall)"
msgstr ""
#: ./doc/openstack-ops/ch_ops_resources.xml:87(title)
msgid "Virtualization"
msgstr ""
#: ./doc/openstack-ops/ch_ops_resources.xml:90(emphasis)
msgid "The Book of Xen"
msgstr ""
#: ./doc/openstack-ops/ch_ops_resources.xml:96(title) ./doc/openstack-ops/ch_ops_maintenance.xml:859(title)
msgid "Configuration Management"
msgstr ""
#: ./doc/openstack-ops/ch_ops_resources.xml:99(link)
msgid "Puppet Labs Documentation"
msgstr ""
#: ./doc/openstack-ops/ch_ops_resources.xml:103(emphasis)
msgid "Pro Puppet"
msgstr ""
#: ./doc/openstack-ops/ch_arch_examples.xml:12(title)
msgid "Example Architectures"
msgstr ""
#: ./doc/openstack-ops/ch_arch_examples.xml:14(para)
msgid "To understand the possibilities OpenStack offers, it's best to start with basic architectures that are tried-and-true and have been tested in production environments. We offer two such examples with basic pivots on the base operating system (Ubuntu and Red Hat Enterprise Linux) and the networking architectures. There are other differences between these two examples, but you should find the considerations made for the choices in each as well as a rationale for why it worked well in a given environment."
msgstr ""
#: ./doc/openstack-ops/ch_arch_examples.xml:23(para)
msgid "Because OpenStack is highly configurable, with many different backends and network configuration options, it is difficult to write documentation that covers all possible OpenStack deployments. Therefore, this guide defines example architectures to simplify the task of documenting, as well as to provide the scope for this guide. Both of the offered architecture examples are currently running in production and serving users."
msgstr ""
#: ./doc/openstack-ops/ch_arch_examples.xml:31(para)
msgid "As always, refer to the <xref linkend=\"openstack_glossary\"/> if you are unclear about any of the terminology mentioned in these architectures."
msgstr ""
#: ./doc/openstack-ops/ch_arch_examples.xml:41(title)
msgid "Parting Thoughts on Architectures"
msgstr ""
#: ./doc/openstack-ops/ch_arch_examples.xml:43(para)
msgid "With so many considerations and options available, our hope is to provide a few clearly-marked and tested paths for your OpenStack exploration. If you're looking for additional ideas, check out <xref linkend=\"use-cases\"/>, the <link href=\"http://docs.openstack.org/\">OpenStack Installation Guides</link>, or the <link href=\"http://www.openstack.org/user-stories/\">OpenStack User Stories page</link>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:10(title)
msgid "User-Facing Operations"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:12(para)
msgid "This guide is for OpenStack operators and does not seek to be an exhaustive reference for users, but as an operator, you should have a basic understanding of how to use the cloud facilities. This chapter looks at OpenStack from a basic user perspective, which helps you understand your users' needs and determine, when you get a trouble ticket, whether it is a user issue or a service issue. The main concepts covered are images, flavors, security groups, block storage, and instances."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:21(title) ./doc/openstack-ops/ch_ops_user_facing.xml:1165(para) ./doc/openstack-ops/ch_arch_cloud_controller.xml:585(title)
msgid "Images"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:25(para)
msgid "OpenStack images can often be thought of as \"virtual machine templates.\" Images can also be standard installation media such as ISO images. Essentially, they contain bootable file systems that are used to launch instances.<indexterm class=\"singular\"><primary>user training</primary><secondary>images</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:35(title)
msgid "Adding Images"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:37(para)
msgid "Several premade images exist and can easily be imported into the Image service. A common image to add is the CirrOS image, which is very small and used for testing purposes.<indexterm class=\"singular\"><primary>images</primary><secondary>adding</secondary></indexterm> To add this image, simply do:"
msgstr ""
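# An illustrative command for this step (the image file name and attribute
# values are assumptions; see glance help image-create for the exact flags):
#
#   $ glance image-create --name='cirros image' --is-public=true \
#       --container-format=bare --disk-format=qcow2 < cirros-0.3.4-x86_64-disk.img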
#: ./doc/openstack-ops/ch_ops_user_facing.xml:49(para)
msgid "The <code>glance image-create</code> command provides a large set of options for working with your image. For example, the <code>min-disk</code> option is useful for images that require root disks of a certain size (for example, large Windows images). To view these options, do:"
msgstr ""
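# Presumably:
#
#   $ glance help image-create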
#: ./doc/openstack-ops/ch_ops_user_facing.xml:57(para)
msgid "The <code>location</code> option is important to note. It does not copy the entire image into the Image service, but references an original location where the image can be found. Upon launching an instance of that image, the Image service accesses the image from the location specified."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:63(para)
msgid "The <code>copy-from</code> option copies the image from the location specified into the <code>/var/lib/glance/images</code> directory. The same thing is done when using the STDIN redirection with &lt;, as shown in the example."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:68(para)
msgid "Run the following command to view the properties of existing images:"
msgstr ""
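# A sketch of such a command (the UUID is a placeholder):
#
#   $ glance image-show <image-uuid>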
#: ./doc/openstack-ops/ch_ops_user_facing.xml:75(title)
msgid "Sharing Images Between Projects"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:77(para)
msgid "In a multitenant cloud environment, users sometimes want to share their personal images or snapshots with other projects.<indexterm class=\"singular\"><primary>projects</primary><secondary>sharing images between</secondary></indexterm><indexterm class=\"singular\"><primary>images</primary><secondary>sharing between projects</secondary></indexterm> This can be done on the command line with the <literal>glance</literal> tool by the owner of the image."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:90(para)
msgid "To share an image or snapshot with another project, do the following:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:95(para)
msgid "Obtain the UUID of the image:"
msgstr ""
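# For instance, by listing images and reading the ID column:
#
#   $ glance image-list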
#: ./doc/openstack-ops/ch_ops_user_facing.xml:101(para)
msgid "Obtain the UUID of the project with which you want to share your image. Unfortunately, nonadmin users are unable to use the <literal>keystone</literal> command to do this. The easiest solution is to obtain the UUID either from an administrator of the cloud or from a user located in the project."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:109(para)
msgid "Once you have both pieces of information, run the <literal>glance</literal> command:"
msgstr ""
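# A sketch using the glance v1 client of that era, which takes the image
# UUID followed by the project UUID:
#
#   $ glance member-create <image-uuid> <project-uuid>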
#: ./doc/openstack-ops/ch_ops_user_facing.xml:114(para) ./doc/openstack-ops/ch_ops_projects_users.xml:372(para) ./doc/openstack-ops/ch_ops_projects_users.xml:398(para) ./doc/openstack-ops/ch_ops_projects_users.xml:422(para) ./doc/openstack-ops/ch_ops_projects_users.xml:458(para) ./doc/openstack-ops/ch_ops_projects_users.xml:660(para) ./doc/openstack-ops/ch_ops_projects_users.xml:687(para)
msgid "For example:"
msgstr ""
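# Using the UUIDs quoted in the next string:
#
#   $ glance member-create 733d1c44-a2ea-414b-aca7-69decf20d810 \
#       771ed149ef7e4b2b88665cc1c98f77ca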
#: ./doc/openstack-ops/ch_ops_user_facing.xml:119(para)
msgid "Project 771ed149ef7e4b2b88665cc1c98f77ca will now have access to image 733d1c44-a2ea-414b-aca7-69decf20d810."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:126(title)
msgid "Deleting Images"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:128(para)
msgid "To delete an image,<indexterm class=\"singular\"><primary>images</primary><secondary>deleting</secondary></indexterm> just execute:"
msgstr ""
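# That is (the UUID is a placeholder):
#
#   $ glance image-delete <image-uuid>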
#: ./doc/openstack-ops/ch_ops_user_facing.xml:137(para)
msgid "Deleting an image does not affect instances or snapshots that were based on the image."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:143(title)
msgid "Other CLI Options"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:145(para)
msgid "A full set of options can be found using:<indexterm class=\"singular\"><primary>images</primary><secondary>CLI options for</secondary></indexterm>"
msgstr ""
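# Namely:
#
#   $ glance help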
#: ./doc/openstack-ops/ch_ops_user_facing.xml:154(para)
msgid "or the <link href=\"http://docs.openstack.org/cli-reference/content/glanceclient_commands.html\">Command-Line Interface Reference </link>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:159(title)
msgid "The Image service and the Database"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:161(para)
msgid "The only thing that the Image service does not store in a database is the image itself. The Image service database has two main tables:<indexterm class=\"singular\"><primary>databases</primary><secondary>Image service</secondary></indexterm><indexterm class=\"singular\"><primary>Image service</primary><secondary>database tables</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:175(literal)
msgid "images"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:179(literal)
msgid "image_properties"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:183(para)
msgid "Working directly with the database and SQL queries can provide you with custom lists and reports of images. Technically, you can update properties about images through the database, although this is not generally recommended."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:190(title)
msgid "Example Image service Database Queries"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:192(para)
msgid "One interesting example is modifying the table of images and the owner of that image. This can be easily done if you simply display the unique ID of the owner. <indexterm class=\"singular\"><primary>Image service</primary><secondary>database queries</secondary></indexterm>This example goes one step further and displays the readable name of the owner:"
msgstr ""
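# A sketch of such a query; the schema and table names are assumptions and
# vary by release (older Identity schemas name the table tenant, newer ones
# project):
#
#   mysql> select i.id, i.name, p.name from glance.images i
#       ->   inner join keystone.project p on i.owner = p.id;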
#: ./doc/openstack-ops/ch_ops_user_facing.xml:206(para)
msgid "Another example is displaying all properties for a certain image:"
msgstr ""
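# A sketch, run against the glance database (the image UUID is a
# placeholder):
#
#   mysql> select name, value from image_properties
#       ->   where image_id = '<image-uuid>';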
#: ./doc/openstack-ops/ch_ops_user_facing.xml:215(title)
msgid "Flavors"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:217(para)
msgid "Virtual hardware templates are called \"flavors\" in OpenStack, defining sizes for RAM, disk, number of cores, and so on. The default install provides five flavors."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:221(para)
msgid "These are configurable by admin users (the rights may also be delegated to other users by redefining the access controls for <code>compute_extension:flavormanage</code> in <code>/etc/nova/policy.json</code> on the <code>nova-api</code> server). To get the list of available flavors on your system, run:<indexterm class=\"singular\"><primary>DAC (discretionary access control)</primary></indexterm><indexterm class=\"singular\"><primary>flavor</primary></indexterm><indexterm class=\"singular\"><primary>user training</primary><secondary>flavors</secondary></indexterm>"
msgstr ""
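# That is:
#
#   $ nova flavor-list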
#: ./doc/openstack-ops/ch_ops_user_facing.xml:247(para)
msgid "The <code>nova flavor-create</code> command allows authorized users to create new flavors. Additional flavor manipulation commands can be shown with the command: <placeholder-1/>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:251(para)
msgid "Flavors define a number of parameters, resulting in the user having a choice of what type of virtual machine to run—just like they would have if they were purchasing a physical server. <xref linkend=\"table-flavor-params\"/> lists the elements that can be set. Note in particular <phrase role=\"keep-together\"><literal>extra_specs</literal>,</phrase> which can be used to define free-form characteristics, giving a lot of flexibility beyond just the size of RAM, CPU, and Disk.<indexterm class=\"singular\"><primary>base image</primary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:263(caption)
msgid "Flavor parameters"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:271(emphasis)
msgid "Column"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:279(para)
msgid "ID"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:281(para)
msgid "A unique numeric ID."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:285(para) ./doc/openstack-ops/ch_ops_user_facing.xml:1187(th) ./doc/openstack-ops/ch_arch_scaling.xml:78(th)
msgid "Name"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:287(para)
msgid "A descriptive name, such as xx.size_name, is conventional but not required, though some third-party tools may rely on it."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:293(para)
msgid "Memory_MB"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:295(para)
msgid "Virtual machine memory in megabytes."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:299(para) ./doc/openstack-ops/ch_arch_scaling.xml:84(th)
msgid "Disk"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:301(para)
msgid "Virtual root disk size in gigabytes. This is an ephemeral disk the base image is copied into. You don't use it when you boot from a persistent volume. The \"0\" size is a special case that uses the native base image size as the size of the ephemeral root volume."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:309(para) ./doc/openstack-ops/ch_arch_scaling.xml:86(th)
msgid "Ephemeral"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:311(para)
msgid "Specifies the size of a secondary ephemeral data disk. This is an empty, unformatted disk and exists only for the life of the instance."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:317(para)
msgid "Swap"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:319(para)
msgid "Optional swap space allocation for the instance."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:324(para) ./doc/openstack-ops/ch_ops_projects_users.xml:347(para)
msgid "VCPUs"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:326(para)
msgid "Number of virtual CPUs presented to the instance."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:331(para)
msgid "RXTX_Factor"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:333(para)
msgid "Optional property that allows created servers to have a different bandwidth<indexterm class=\"singular\"><primary>bandwidth</primary><secondary>capping</secondary></indexterm> cap from that defined in the network they are attached to. This factor is multiplied by the rxtx_base property of the network. Default value is 1.0 (that is, the same as the attached network)."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:345(para)
msgid "Is_Public"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:347(para)
msgid "Boolean value that indicates whether the flavor is available to all users or private. Private flavors do not get the current tenant assigned to them. Defaults to <literal>True</literal>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:354(para)
msgid "extra_specs"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:356(para)
msgid "Additional optional restrictions on which compute nodes the flavor can run on. This is implemented as key-value pairs that must match against the corresponding key-value pairs on compute nodes. Can be used to implement things like special resources (such as flavors that can run only on compute nodes with GPU hardware)."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:367(title)
msgid "Private Flavors"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:369(para)
msgid "A user might need a custom flavor that is uniquely tuned for a project she is working on. For example, the user might require 128 GB of memory. If you create a new flavor as described above, the user would have access to the custom flavor, but so would all other tenants in your cloud. Sometimes this sharing isn't desirable. In this scenario, allowing all users to have access to a flavor with 128 GB of memory might cause your cloud to reach full capacity very quickly. To prevent this, you can restrict access to the custom flavor using the <literal>nova</literal> command:"
msgstr ""
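# A sketch (flavor and project identifiers are placeholders):
#
#   $ nova flavor-access-add <flavor-id> <project-id>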
#: ./doc/openstack-ops/ch_ops_user_facing.xml:381(para)
msgid "To view a flavor's access list, do the following:"
msgstr ""
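# Likely:
#
#   $ nova flavor-access-list --flavor <flavor-id>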
#: ./doc/openstack-ops/ch_ops_user_facing.xml:386(title)
msgid "Best Practices"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:388(para)
msgid "Once access to a flavor has been restricted, no other projects besides the ones granted explicit access will be able to see the flavor. This includes the admin project. Make sure to add the admin project in addition to the original project."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:393(para)
msgid "It's also helpful to allocate a specific numeric range for custom and private flavors. On UNIX-based systems, nonsystem accounts usually have a UID starting at 500. A similar approach can be taken with custom flavors. This helps you easily identify which flavors are custom, private, and public for the entire cloud."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:402(title)
msgid "How Do I Modify an Existing Flavor?"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:404(para)
msgid "The OpenStack dashboard simulates the ability to modify a flavor by deleting an existing flavor and creating a new one with the same name."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:413(title)
msgid "Security Groups"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:415(para)
msgid "A common new-user issue with OpenStack is failing to set an appropriate security group when launching an instance. As a result, the user is unable to contact the instance on the network.<indexterm class=\"singular\"><primary>security groups</primary></indexterm><indexterm class=\"singular\"><primary>user training</primary><secondary>security groups</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:426(para)
msgid "Security groups are sets of IP filter rules that are applied to an instance's networking. They are project specific, and project members can edit the default rules for their group and add new rules sets. All projects have a \"default\" security group, which is applied to instances that have no other security group defined. Unless changed, this security group denies all incoming traffic."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:434(title)
msgid "General Security Groups Configuration"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:436(para)
msgid "The <code>nova.conf</code> option <code>allow_same_net_traffic</code> (which defaults to <literal>true</literal>) globally controls whether the rules apply to hosts that share a network. When set to <literal>true</literal>, hosts on the same subnet are not filtered and are allowed to pass all types of traffic between them. On a flat network, this allows all instances from all projects unfiltered communication. With VLAN networking, this allows access between instances within the same project. If <code>allow_same_net_traffic</code> is set to <literal>false</literal>, security groups are enforced for all connections. In this case, it is possible for projects to simulate <code>allow_same_net_traffic</code> by configuring their default security group to allow all traffic from their subnet."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:451(para)
msgid "As noted in the previous chapter, the number of rules per security group is controlled by the <code>quota_security_group_rules</code>, and the number of allowed security groups per project is controlled by the <code>quota_security_groups</code> quota."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:460(title)
msgid "End-User Configuration of Security Groups"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:462(para)
msgid "Security groups for the current project can be found on the OpenStack dashboard under <guilabel>Access &amp; Security</guilabel>. To see details of an existing group, select the <guilabel>edit</guilabel> action for that security group. Obviously, modifying existing groups can be done from this <guilabel>edit</guilabel> interface. There is a <guibutton>Create Security Group</guibutton> button on the main <guilabel>Access &amp; Security</guilabel> page for creating new groups. We discuss the terms used in these fields when we explain the command-line equivalents."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:472(para)
msgid "From the command line, you can get a list of security groups for the project you're acting in using the <literal>nova</literal> command:"
msgstr ""
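# That is:
#
#   $ nova secgroup-list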
#: ./doc/openstack-ops/ch_ops_user_facing.xml:486(para)
msgid "To view the details of the \"open\" security group:"
msgstr ""
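# Presumably:
#
#   $ nova secgroup-list-rules open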
#: ./doc/openstack-ops/ch_ops_user_facing.xml:497(para)
msgid "These rules are all \"allow\" type rules, as the default is deny. The first column is the IP protocol (one of icmp, tcp, or udp), and the second and third columns specify the affected port range. The fourth column specifies the IP range in CIDR format. This example shows the full port range for all protocols allowed from all IPs."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:503(para)
msgid "When adding a new security group, you should pick a descriptive but brief name. This name shows up in brief descriptions of the instances that use it where the longer description field often does not. Seeing that an instance is using security group <literal>http</literal> is much easier to understand than <literal>bobs_group</literal> or <literal>secgrp1</literal>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:510(para)
msgid "As an example, let's create a security group that allows web traffic anywhere on the Internet. We'll call this group <literal>global_http</literal>, which is clear and reasonably concise, encapsulating what is allowed and from where. From the command line, do:"
msgstr ""
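# A sketch (the description text is an assumption):
#
#   $ nova secgroup-create global_http "Allows web traffic anywhere on the Internet"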
#: ./doc/openstack-ops/ch_ops_user_facing.xml:524(para)
msgid "This creates the empty security group. To make it do what we want, we need to add some rules:"
msgstr ""
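# For web traffic from anywhere, the rule would be along these lines:
#
#   $ nova secgroup-add-rule global_http tcp 80 80 0.0.0.0/0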
#: ./doc/openstack-ops/ch_ops_user_facing.xml:535(para)
msgid "Note that the arguments are positional, and the <literal>from-port</literal> and <literal>to-port</literal> arguments specify the allowed local port range connections. These arguments are not indicating source and destination ports of the connection. More complex rule sets can be built up through multiple invocations of <literal>nova secgroup-add-rule</literal>. For example, if you want to pass both http and https traffic, do this:"
msgstr ""
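# That is, one invocation per port:
#
#   $ nova secgroup-add-rule global_http tcp 80 80 0.0.0.0/0
#   $ nova secgroup-add-rule global_http tcp 443 443 0.0.0.0/0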
#: ./doc/openstack-ops/ch_ops_user_facing.xml:550(para)
msgid "Despite only outputting the newly added rule, this operation is additive:"
msgstr ""
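# Listing the group's rules shows every rule added so far:
#
#   $ nova secgroup-list-rules global_http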
#: ./doc/openstack-ops/ch_ops_user_facing.xml:561(para)
msgid "The inverse operation is called <literal>secgroup-delete-rule</literal>, using the same format. Whole security groups can be removed with <literal>secgroup-delete</literal>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:566(para)
msgid "To create security group rules for a cluster of instances, you want to use <phrase role=\"keep-together\">SourceGroups</phrase>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:569(para)
msgid "SourceGroups are a special dynamic way of defining the CIDR of allowed sources. The user specifies a SourceGroup (security group name) and then all the users' other instances using the specified SourceGroup are selected dynamically. This dynamic selection alleviates the need for individual rules to allow each new member of the <phrase role=\"keep-together\">cluster</phrase>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:576(para)
msgid "The code is structured like this: <code>nova secgroup-add-group-rule &lt;secgroup&gt; &lt;source-group&gt; &lt;ip-proto&gt; &lt;from-port&gt; &lt;to-port&gt;</code>. An example usage is shown here:"
msgstr ""
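# A sketch matching the "cluster" rule described in the next string:
#
#   $ nova secgroup-add-group-rule cluster global_http tcp 22 22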
#: ./doc/openstack-ops/ch_ops_user_facing.xml:583(para)
msgid "The \"cluster\" rule allows SSH access from any other instance that uses the <literal>global-http</literal> group."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:591(title) ./doc/openstack-ops/ch_arch_storage.xml:178(title) ./doc/openstack-ops/ch_ops_backup_recovery.xml:196(title)
msgid "Block Storage"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:593(para)
msgid "OpenStack volumes are persistent block-storage devices that may be attached and detached from instances, but they can be attached to only one instance at a time. Similar to an external hard drive, they do not provide shared storage in the way a network file system or object store does. It is left to the operating system in the instance to put a file system on the block device and mount it, or not.<indexterm class=\"singular\"><primary>block storage</primary></indexterm><indexterm class=\"singular\"><primary>storage</primary><secondary>block storage</secondary></indexterm><indexterm class=\"singular\"><primary>user training</primary><secondary>block storage</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:610(para)
msgid "As with other removable disk technology, it is important that the operating system is not trying to make use of the disk before removing it. On Linux instances, this typically involves unmounting any file systems mounted from the volume. The OpenStack volume service cannot tell whether it is safe to remove volumes from an instance, so it does what it is told. If a user tells the volume service to detach a volume from an instance while it is being written to, you can expect some level of file system corruption as well as faults from whatever process within the instance was using the device."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:620(para)
msgid "There is nothing OpenStack-specific in being aware of the steps needed to access block devices from within the instance operating system, potentially formatting them for first use and being cautious when removing them. What is specific is how to create new volumes and attach and detach them from instances. These operations can all be done from the <guilabel>Volumes</guilabel> page of the dashboard or by using the <literal>cinder</literal> command-line client."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:628(para)
msgid "To add new volumes, you need only a name and a volume size in gigabytes. Either put these into the <guilabel>create volume</guilabel> web form or use the command line:"
msgstr ""
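# Matching the 10 GB test-volume described in the next string (cinder v1
# syntax):
#
#   $ cinder create --display-name test-volume 10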
#: ./doc/openstack-ops/ch_ops_user_facing.xml:634(para)
msgid "This creates a 10 GB volume named <literal>test-volume</literal>. To list existing volumes and the instances they are connected to, if any:"
msgstr ""
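# Namely:
#
#   $ cinder list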
#: ./doc/openstack-ops/ch_ops_user_facing.xml:645(para)
msgid "OpenStack Block Storage also allows for creating snapshots of volumes. Remember that this is a block-level snapshot that is crash consistent, so it is best if the volume is not connected to an instance when the snapshot is taken and second best if the volume is not in use on the instance it is attached to. If the volume is under heavy use, the snapshot may have an inconsistent file system. In fact, by default, the volume service does not take a snapshot of a volume that is attached to an image, though it can be forced to. To take a volume snapshot, either select <guilabel>Create Snapshot</guilabel> from the actions column next to the volume name on the dashboard volume page, or run this from the command line:"
msgstr ""
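# A sketch (the volume UUID is a placeholder; a --force flag exists for
# snapshotting attached volumes):
#
#   $ cinder snapshot-create <volume-uuid>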
#: ./doc/openstack-ops/ch_ops_user_facing.xml:671(para)
msgid "For more information about updating Block Storage volumes (for example, resizing or transferring), see the <link href=\"http://docs.openstack.org/user-guide/\">OpenStack End User Guide</link>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:675(title)
msgid "Block Storage Creation Failures"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:677(para)
msgid "If a user tries to create a volume and the volume immediately goes into an error state, the best way to troubleshoot is to grep the cinder log files for the volume's UUID. First try the log files on the cloud controller, and then try the storage node where the volume was attempted to be created:"
msgstr ""
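# For example (the UUID is a placeholder; packaged installs typically log
# under /var/log/cinder):
#
#   # grep <volume-uuid> /var/log/cinder/*.log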
#: ./doc/openstack-ops/ch_ops_user_facing.xml:690(title) ./doc/openstack-ops/ch_ops_projects_users.xml:296(para) ./doc/openstack-ops/ch_ops_maintenance.xml:267(title)
msgid "Instances"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:692(para)
msgid "Instances are the running virtual machines within an OpenStack cloud. This section deals with how to work with them and their underlying images, their network properties, and how they are represented in the database.<indexterm class=\"singular\"><primary>user training</primary><secondary>instances</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:702(title)
msgid "Starting Instances"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:704(para)
msgid "To launch an instance, you need to select an image, a flavor, and a name. The name needn't be unique, but your life will be simpler if it is because many tools will use the name in place of the UUID so long as the name is unique. You can start an instance from the dashboard from the <guibutton>Launch Instance</guibutton> button on the <guilabel>Instances</guilabel> page or by selecting the <guilabel>Launch</guilabel> action next to an image or snapshot on the <guilabel>Images &amp; Snapshots</guilabel> page.<indexterm class=\"singular\"><primary>instances</primary><secondary>starting</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:718(para)
msgid "On the command line, do this:"
msgstr ""
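# The base form (all three values are placeholders):
#
#   $ nova boot --flavor <flavor> --image <image> <name>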
#: ./doc/openstack-ops/ch_ops_user_facing.xml:722(para)
msgid "There are a number of optional items that can be specified. You should read the rest of this section before trying to start an instance, but this is the base command that later details are layered upon."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:726(para)
msgid "To delete instances from the dashboard, select the <guilabel>Terminate instance</guilabel> action next to the instance on the <guilabel>Instances</guilabel> page. From the command line, do this:"
msgstr ""
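# That is:
#
#   $ nova delete <instance-name-or-uuid>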
#: ./doc/openstack-ops/ch_ops_user_facing.xml:733(para)
msgid "It is important to note that powering off an instance does not terminate it in the OpenStack sense."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:738(title)
msgid "Instance Boot Failures"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:740(para)
msgid "If an instance fails to start and immediately moves to an error state, there are a few different ways to track down what has gone wrong. Some of these can be done with normal user access, while others require access to your log server or compute nodes.<indexterm class=\"singular\"><primary>instances</primary><secondary>boot failures</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:749(para)
msgid "The simplest reasons for nodes to fail to launch are quota violations or the scheduler being unable to find a suitable compute node on which to run the instance. In these cases, the error is apparent when you run a <code>nova show</code> on the faulted instance:<indexterm class=\"singular\"><primary>config drive</primary></indexterm>"
msgstr ""
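# A sketch (the instance name is a placeholder):
#
#   $ nova show <instance-name>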
#: ./doc/openstack-ops/ch_ops_user_facing.xml:787(para)
msgid "In this case, looking at the <literal>fault</literal> message shows <literal>NoValidHost</literal>, indicating that the scheduler was unable to match the instance requirements."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:791(para)
msgid "If <code>nova show</code> does not sufficiently explain the failure, searching for the instance UUID in the <code>nova-compute.log</code> on the compute node it was scheduled on or the <code>nova-scheduler.log</code> on your scheduler hosts is a good place to start looking for lower-level problems."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:797(para)
msgid "Using <code>nova show</code> as an admin user will show the compute node the instance was scheduled on as <code>hostId</code>. If the instance failed during scheduling, this field is blank."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:803(title)
msgid "Using Instance-Specific Data"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:805(para)
msgid "There are two main types of instance-specific data: metadata and user data.<indexterm class=\"singular\"><primary>metadata</primary><secondary>instance metadata</secondary></indexterm><indexterm class=\"singular\"><primary>instances</primary><secondary>instance-specific data</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:817(title)
msgid "Instance metadata"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:819(para)
msgid "For Compute, instance metadata is a collection of key-value pairs associated with an instance. Compute reads and writes to these key-value pairs any time during the instance lifetime, from inside and outside the instance, when the end user uses the Compute API to do so. However, you cannot query the instance-associated key-value pairs with the metadata service that is compatible with the Amazon EC2 metadata service."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:827(para)
msgid "For an example of instance metadata, users can generate and register SSH keys using the <literal>nova</literal> command:"
msgstr ""
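# Matching the mykey/mykey.pem names used in the next string:
#
#   $ nova keypair-add mykey > mykey.pem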
#: ./doc/openstack-ops/ch_ops_user_facing.xml:832(para)
msgid "This creates a key named <placeholder-1/>, which you can associate with instances. The file <filename>mykey.pem</filename> is the private key, which should be saved to a secure location because it allows root access to instances the <placeholder-2/> key is associated with."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:838(para)
msgid "Use this command to register an existing key with OpenStack:"
msgstr ""
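# A sketch (the public-key file name is an assumption):
#
#   $ nova keypair-add --pub-key mykey.pub mykey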
#: ./doc/openstack-ops/ch_ops_user_facing.xml:844(para)
msgid "You must have the matching private key to access instances associated with this key."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:848(para)
msgid "To associate a key with an instance on boot, add <code>--key_name mykey</code> to your command line. For example:"
msgstr ""
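# For instance (image, flavor, and name are placeholders):
#
#   $ nova boot --image <image> --flavor 1 --key_name mykey <name>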
#: ./doc/openstack-ops/ch_ops_user_facing.xml:854(para)
msgid "When booting a server, you can also add arbitrary metadata so that you can more easily identify it among other running instances. Use the <code>--meta</code> option with a key-value pair, where you can make up the string for both the key and the value. For example, you could add a description and also the creator of the server:"
msgstr ""
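# A sketch with a description and a creator, as the text suggests (all
# values are illustrative):
#
#   $ nova boot --image <image> --flavor 1 \
#       --meta description='Small test image' --meta creator=joecool <name>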
#: ./doc/openstack-ops/ch_ops_user_facing.xml:863(para)
msgid "When viewing the server information, you can see the metadata included on the <phrase role=\"keep-together\">metadata</phrase> line:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:899(title)
msgid "Instance user data"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:901(para)
msgid "The <code>user-data</code> key is a special key in the metadata service that holds a file that cloud-aware applications within the guest instance can access. For example, <link href=\"https://help.ubuntu.com/community/CloudInit\" title=\"OpenStack Image service\">cloudinit</link> is an open source package from Ubuntu, but available in most distributions, that handles early initialization of a cloud instance that makes use of this user data.<indexterm class=\"singular\"><primary>user data</primary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:912(para)
msgid "This user data can be put in a file on your local system and then passed in at instance creation with the flag <code>--user-data &lt;user-data-file&gt;</code>. For example:"
msgstr ""
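# That is (the file, image, flavor, and name are placeholders):
#
#   $ nova boot --user-data mydata.file --image <image> --flavor 1 <name>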
#: ./doc/openstack-ops/ch_ops_user_facing.xml:918(para)
msgid "To understand the difference between user data and metadata, realize that user data is created before an instance is started. User data is accessible from within the instance when it is running. User data can be used to store configuration, a script, or anything the tenant wants."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:926(title)
msgid "File injection"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:928(para)
msgid "Arbitrary local files can also be placed into the instance file system at creation time by using the <code>--file &lt;dst-path=src-path&gt;</code> option. You may store up to five files.<indexterm class=\"singular\"><primary>file injection</primary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:935(para)
msgid "For example, let's say you have a special <filename>authorized_keys</filename> file named special_authorized_keysfile that for some reason you want to put on the instance instead of using the regular SSH key injection. In this case, you can use the following command:"
msgstr ""
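# A sketch; the destination path is an assumption (root's authorized_keys):
#
#   $ nova boot --image <image> --flavor 1 \
#       --file /root/.ssh/authorized_keys=special_authorized_keysfile <name>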
#: ./doc/openstack-ops/ch_ops_user_facing.xml:948(title)
msgid "Associating Security Groups"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:950(para)
msgid "Security groups, as discussed earlier, are typically required to allow network traffic to an instance, unless the default security group for a project has been modified to be more permissive.<indexterm class=\"singular\"><primary>security groups</primary></indexterm><indexterm class=\"singular\"><primary>user training</primary><secondary>security groups</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:961(para)
msgid "Adding security groups is typically done on instance boot. When launching from the dashboard, you do this on the <guilabel>Access &amp; Security</guilabel> tab of the <guilabel>Launch Instance</guilabel> dialog. When launching from the command line, append <code>--security-groups</code> with a comma-separated list of security groups."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:968(para)
msgid "It is also possible to add and remove security groups when an instance is running. Currently this is only available through the command-line tools. Here is an example:"
msgstr ""
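# Likely along these lines (server and group names are placeholders):
#
#   $ nova add-secgroup <server> <securitygroup>
#   $ nova remove-secgroup <server> <securitygroup>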
#: ./doc/openstack-ops/ch_ops_user_facing.xml:978(title) ./doc/openstack-ops/ch_ops_projects_users.xml:259(para)
msgid "Floating IPs"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:980(para)
msgid "Where floating IPs are configured in a deployment, each project will have a limited number of floating IPs controlled by a quota. However, these need to be allocated to the project from the central pool prior to their use—usually by the administrator of the project. To allocate a floating IP to a project, use the <guibutton>Allocate IP to Project</guibutton> button on the <guilabel>Access &amp; Security</guilabel> page of the dashboard. The command line can also be used:<indexterm class=\"singular\"><primary>address pool</primary></indexterm><indexterm class=\"singular\"><primary>IP addresses</primary><secondary>floating</secondary></indexterm><indexterm class=\"singular\"><primary>user training</primary><secondary>floating IPs</secondary></indexterm>"
msgstr ""
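# Presumably:
#
#   $ nova floating-ip-create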
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1001(para)
msgid "Once allocated, a floating IP can be assigned to running instances from the dashboard either by selecting <guibutton>Associate Floating IP</guibutton> from the actions drop-down next to the IP on the <guilabel>Access &amp; Security</guilabel> page or by making this selection next to the instance you want to associate it with on the <guilabel>Instances</guilabel> page. The inverse action, <guibutton>Dissociate Floating IP</guibutton>, is available only from the <guilabel>Access &amp; Security</guilabel> page and not from the <guilabel>Instances</guilabel> page."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1011(para)
msgid "To associate or disassociate a floating IP with a server from the command line, use the following commands:"
msgstr ""
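# Namely (server and address are placeholders):
#
#   $ nova add-floating-ip <server> <address>
#   $ nova remove-floating-ip <server> <address>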
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1020(title)
msgid "Attaching Block Storage"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1022(para)
msgid "You can attach block storage to instances from the dashboard on the <guilabel>Volumes</guilabel> page. Click the <guibutton>Edit Attachments</guibutton> action next to the volume you want to attach.<indexterm class=\"singular\"><primary>storage</primary><secondary>block storage</secondary></indexterm><indexterm class=\"singular\"><primary>block storage</primary></indexterm><indexterm class=\"singular\"><primary>user training</primary><secondary>block storage</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1037(para)
msgid "To perform this action from command line, run the following command:"
msgstr ""
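# A sketch (the device can be, for example, /dev/vdb):
#
#   $ nova volume-attach <server> <volume-uuid> <device>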
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1042(para)
msgid "You can also specify block device<indexterm class=\"singular\"><primary>block device</primary></indexterm> mapping at instance boot time through the nova command-line client with this option set:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1050(code)
msgid "&lt;dev-name&gt;=&lt;id&gt;:&lt;type&gt;:&lt;size(GB)&gt;:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1049(phrase)
msgid "The block device mapping format is <placeholder-1/>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1051(code)
msgid "&lt;delete-on-terminate&gt;"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1051(phrase)
msgid "<placeholder-1/>, where:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1056(term)
msgid "dev-name"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1059(para)
msgid "A device name where the volume is attached in the system at <code>/dev/<replaceable>dev_name</replaceable></code>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1065(term)
msgid "id"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1068(para)
msgid "The ID of the volume to boot from, as shown in the output of <literal>nova volume-list</literal>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1074(term)
msgid "type"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1077(para)
msgid "Either <literal>snap</literal>, which means that the volume was created from a snapshot, or anything other than <literal>snap</literal> (a blank string is valid). In the preceding example, the volume was not created from a snapshot, so we leave this field blank in our following example."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1086(term)
msgid "size (GB)"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1089(para)
msgid "The size of the volume in gigabytes. It is safe to leave this blank and have the Compute Service infer the size."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1095(term)
msgid "delete-on-terminate"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1098(para)
msgid "A boolean to indicate whether the volume should be deleted when the instance is terminated. True can be specified as <literal>True</literal> or <literal>1</literal>. False can be specified as <literal>False</literal> or <literal>0</literal>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1107(para)
msgid "The following command will boot a new instance and attach a volume at the same time. The volume of ID 13 will be attached as <code>/dev/vdc</code>. It is not a snapshot, does not specify a size, and will not be deleted when the instance is terminated:"
msgstr ""
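# A sketch matching that description; the flavor, image, and instance name
# are placeholders, and vdc=13:::0 encodes "volume 13, not a snapshot, no
# size, do not delete on terminate":
#
#   $ nova boot --flavor 2 --image <image-uuid> \
#       --block-device-mapping vdc=13:::0 <name>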
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1116(para)
msgid "If you have previously prepared block storage with a bootable file system image, it is even possible to boot from persistent block storage. The following command boots an image from the specified volume. It is similar to the previous command, but the image is omitted and the volume is now attached as <code>/dev/vda</code>:"
msgstr ""
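# A sketch, reusing volume 13 from the previous example but mapping it to
# vda and omitting --image:
#
#   $ nova boot --flavor 2 --block-device-mapping vda=13:::0 <name>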
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1125(para)
msgid "Read more detailed instructions for launching an instance from a bootable volume in the <link href=\"http://docs.openstack.org/user-guide/cli_nova_launch_instance_from_volume.html\">OpenStack End User Guide</link>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1130(para)
msgid "To boot normally from an image and attach block storage, map to a device other than vda. You can find instructions for launching an instance and attaching a volume to the instance and for copying the image to the attached volume in the <link href=\"http://docs.openstack.org/user-guide/dashboard_launch_instances.html\">OpenStack End User Guide</link>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1141(title)
msgid "Taking Snapshots"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1143(para)
msgid "The OpenStack snapshot mechanism allows you to create new images from running instances. This is very convenient for upgrading base images or for taking a published image and customizing it for local use. To snapshot a running instance to an image using the CLI, do this:<indexterm class=\"singular\"><primary>base image</primary></indexterm><indexterm class=\"singular\"><primary>snapshot</primary></indexterm><indexterm class=\"singular\"><primary>user training</primary><secondary>snapshots</secondary></indexterm>"
msgstr ""
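# Namely (instance and snapshot names are placeholders):
#
#   $ nova image-create <instance-name-or-uuid> <snapshot-name>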
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1159(para)
msgid "The dashboard interface for snapshots can be confusing because the <guilabel>Images &amp; Snapshots</guilabel> page splits content up into several areas:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1169(para)
msgid "Instance snapshots"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1173(para)
msgid "Volume snapshots"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1177(para)
msgid "However, an instance snapshot <emphasis>is</emphasis> an image. The only difference between an image that you upload directly to the Image Service and an image that you create by snapshot is that an image created by snapshot has additional properties in the glance database. These properties are found in the <literal>image_properties</literal> table and include:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1189(th)
msgid "Value"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1195(literal)
msgid "image_type"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1197(para) ./doc/openstack-ops/ch_ops_user_facing.xml:1216(para)
msgid "snapshot"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1201(literal)
msgid "instance_uuid"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1203(para)
msgid "&lt;uuid of instance that was snapshotted&gt;"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1207(literal)
msgid "base_image_ref"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1209(para)
msgid "&lt;uuid of original image of instance that was snapshotted&gt;"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1214(literal)
msgid "image_location"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1222(title)
msgid "Live Snapshots"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1224(para)
msgid "Live snapshots is a feature that allows users to snapshot the running virtual machines without pausing them. These snapshots are simply disk-only snapshots. Snapshotting an instance can now be performed with no downtime (assuming QEMU 1.3+ and libvirt 1.0+ are used).<indexterm class=\"singular\"><primary>live snapshots</primary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1233(title)
msgid "Ensuring Snapshots of Linux Guests Are Consistent"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1235(para)
msgid "The following section is from Sébastien Han's <link href=\"http://www.sebastien-han.fr/blog/2012/12/10/openstack-perform-consistent-snapshots/\" title=\"OpenStack Image service\">“OpenStack: Perform Consistent Snapshots” blog entry</link>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1240(para)
msgid "A snapshot captures the state of the file system, but not the state of the memory. Therefore, to ensure your snapshot contains the data that you want, before your snapshot you need to ensure that:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1247(para)
msgid "Running programs have written their contents to disk"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1251(para)
msgid "The file system does not have any \"dirty\" buffers: where programs have issued the command to write to disk, but the operating system has not yet done the write"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1257(para)
msgid "To ensure that important services have written their contents to disk (such as databases), we recommend that you read the documentation for those applications to determine what commands to issue to have them sync their contents to disk. If you are unsure how to do this, the safest approach is to simply stop these running services normally."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1264(para)
msgid "To deal with the \"dirty\" buffer issue, we recommend using the sync command before snapshotting:"
msgstr ""
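# That is, inside the instance:
#
#   # sync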
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1269(para)
msgid "Running <code>sync</code> writes dirty buffers (buffered blocks that have been modified but not written yet to the disk block) to disk."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1273(para)
msgid "Just running <code>sync</code> is not enough to ensure that the file system is consistent. We recommend that you use the <code>fsfreeze</code> tool, which halts new access to the file system, and create a stable image on disk that is suitable for snapshotting. The <code>fsfreeze</code> tool supports several file systems, including ext3, ext4, and XFS. If your virtual machine instance is running on Ubuntu, install the util-linux package to get <literal>fsfreeze</literal>:"
msgstr ""
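# Namely:
#
#   # apt-get install util-linux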
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1283(para)
msgid "In the very common case where the underlying snapshot is done via LVM, the filesystem freeze is automatically handled by LVM."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1290(para)
msgid "If your operating system doesn't have a version of <literal>fsfreeze</literal> available, you can use <literal>xfs_freeze</literal> instead, which is available on Ubuntu in the xfsprogs package. Despite the \"xfs\" in the name, xfs_freeze also works on ext3 and ext4 if you are using a Linux kernel version 2.6.29 or greater, since it works at the virtual file system (VFS) level starting at 2.6.29. The xfs_freeze version supports the same command-line arguments as <literal>fsfreeze</literal>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1299(para)
msgid "Consider the example where you want to take a snapshot of a persistent block storage volume, detected by the guest operating system as <literal>/dev/vdb</literal> and mounted on <literal>/mnt</literal>. The fsfreeze command accepts two arguments:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1307(term)
msgid "-f"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1310(para)
msgid "Freeze the system"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1315(term)
msgid "-u"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1318(para)
msgid "Thaw (unfreeze) the system"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1323(para)
msgid "To freeze the volume in preparation for snapshotting, you would do the following, as root, inside the instance:"
msgstr ""
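# Using the /mnt mount point from the example above:
#
#   # fsfreeze -f /mnt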
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1328(para)
msgid "You <emphasis>must mount the file system</emphasis> before you run the <literal>fsfreeze</literal> command."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1331(para)
msgid "When the <literal>fsfreeze -f</literal> command is issued, all ongoing transactions in the file system are allowed to complete, new write system calls are halted, and other calls that modify the file system are halted. Most importantly, all dirty data, metadata, and log information are written to disk."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1337(para)
msgid "Once the volume has been frozen, do not attempt to read from or write to the volume, as these operations hang. The operating system stops every I/O operation and any I/O attempts are delayed until the file system has been unfrozen."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1342(para)
msgid "Once you have issued the <literal>fsfreeze</literal> command, it is safe to perform the snapshot. For example, if your instance was named <literal>mon-instance</literal> and you wanted to snapshot it to an image named <literal>mon-snapshot</literal>, you could now run the following:"
msgstr ""
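# Using the names given above:
#
#   $ nova image-create mon-instance mon-snapshot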
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1350(para)
msgid "When the snapshot is done, you can thaw the file system with the following command, as root, inside of the instance:"
msgstr ""
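# Again using the /mnt mount point:
#
#   # fsfreeze -u /mnt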
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1355(para)
msgid "If you want to back up the root file system, you can't simply run the preceding command because it will freeze the prompt. Instead, run the following one-liner, as root, inside the instance:"
msgstr ""
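# A sketch of such a one-liner; the exact form is an assumption (freeze the
# root file system, wait for Enter, then thaw):
#
#   # fsfreeze -f / && read x; fsfreeze -u /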
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1361(para)
msgid "After this command it is common practice to call <placeholder-1/> from your workstation, and once done press enter in your instance shell to unfreeze it. Obviously you could automate this, but at least it will let you properly synchronize."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1367(title)
msgid "Ensuring Snapshots of Windows Guests Are Consistent"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1369(para)
msgid "Obtaining consistent snapshots of Windows VMs is conceptually similar to obtaining consistent snapshots of Linux VMs, although it requires additional utilities to coordinate with a Windows-only subsystem designed to facilitate consistent backups."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1374(para)
msgid "Windows XP and later releases include a Volume Shadow Copy Service (VSS) which provides a framework so that compliant applications can be consistently backed up on a live filesystem. To use this framework, a VSS requestor is run that signals to the VSS service that a consistent backup is needed. The VSS service notifies compliant applications (called VSS writers) to quiesce their data activity. The VSS service then tells the copy provider to create a snapshot. Once the snapshot has been made, the VSS service unfreezes VSS writers and normal I/O activity resumes."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1384(para)
msgid "QEMU provides a guest agent that can be run in guests running on KVM hypervisors. This guest agent, on Windows VMs, coordinates with the Windows VSS service to facilitate a workflow which ensures consistent snapshots. This feature requires at least QEMU 1.7. The relevant guest agent commands are:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1392(term)
msgid "guest-file-flush"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1394(para)
msgid "Write out \"dirty\" buffers to disk, similar to the Linux <literal>sync</literal> operation."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1400(term)
msgid "guest-fsfreeze"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1402(para)
msgid "Suspend I/O to the disks, similar to the Linux <literal>fsfreeze -f</literal> operation."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1408(term)
msgid "guest-fsfreeze-thaw"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1410(para)
msgid "Resume I/O to the disks, similar to the Linux <literal>fsfreeze -u</literal> operation."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1416(para)
msgid "To obtain snapshots of a Windows VM these commands can be scripted in sequence: flush the filesystems, freeze the filesystems, snapshot the filesystems, then unfreeze the filesystems. As with scripting similar workflows against Linux VMs, care must be used when writing such a script to ensure error handling is thorough and filesystems will not be left in a frozen state."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1429(title)
msgid "Instances in the Database"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1431(para)
msgid "While instance information is stored in a number of database tables, the table you most likely need to look at in relation to user instances is the instances table.<indexterm class=\"singular\"><primary>instances</primary><secondary>database information</secondary></indexterm><indexterm class=\"singular\"><primary>databases</primary><secondary>instance information in</secondary></indexterm><indexterm class=\"singular\"><primary>user training</primary><secondary>instances</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1447(para)
msgid "The instances table carries most of the information related to both running and deleted instances. It has a bewildering array of fields; for an exhaustive list, look at the database. These are the most useful fields for operators looking to form queries:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1454(para)
msgid "The <literal>deleted</literal> field is set to <literal>1</literal> if the instance has been deleted and <literal>NULL</literal> if it has not been deleted. This field is important for excluding deleted instances from your queries."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1461(para)
msgid "The <literal>uuid</literal> field is the UUID of the instance and is used throughout other tables in the database as a foreign key. This ID is also reported in logs, the dashboard, and command-line tools to uniquely identify an instance."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1468(para)
msgid "A collection of foreign keys are available to find relations to the instance. The most useful of these—<literal>user_id</literal> and <literal>project_id</literal>—are the UUIDs of the user who launched the instance and the project it was launched in."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1475(para)
msgid "The <literal>host</literal> field tells which compute node is hosting the instance."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1480(para)
msgid "The <literal>hostname</literal> field holds the name of the instance when it is launched. The display-name is initially the same as hostname but can be reset using the nova rename command."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1486(para)
msgid "A number of time-related fields are useful for tracking when state changes happened on an instance:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1491(literal)
msgid "created_at"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1495(literal)
msgid "updated_at"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1499(literal)
msgid "deleted_at"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1503(literal)
msgid "scheduled_at"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1507(literal)
msgid "launched_at"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1511(literal)
msgid "terminated_at"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1517(title)
msgid "Good Luck!"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1519(para)
msgid "This section was intended as a brief introduction to some of the most useful of many OpenStack commands. For an exhaustive list, please refer to the <link href=\"http://docs.openstack.org/user-guide-admin/\">Admin User Guide</link>, and for additional hints and tips, see the <link href=\"http://docs.openstack.org/admin-guide-cloud/content/\">Cloud Admin Guide</link>. We hope your users remain happy and recognize your hard work! (For more hard work, turn the page to the next chapter, where we discuss the system-facing operations: maintenance, failures and debugging.)"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:10(title) ./doc/openstack-ops/ch_ops_upgrades.xml:166(title)
msgid "Upgrades"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:12(para)
msgid "With the exception of Object Storage, upgrading from one version of OpenStack to another can take a great deal of effort. Until the situation improves, this chapter provides some guidance on the operational aspects that you should consider for performing an upgrade based on detailed steps for a basic architecture."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:20(title)
msgid "Pre-Upgrade Testing Environment"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:22(para)
msgid "The most important step is the pre-upgrade testing. If you are upgrading immediately after release of a new version, undiscovered bugs might hinder your progress. Some deployers prefer to wait until the first point release is announced. However, if you have a significant deployment, you might follow the development and testing of the release to ensure that bugs for your use cases are fixed.<indexterm class=\"singular\"><primary>upgrading</primary><secondary>pre-upgrade testing</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:33(para)
msgid "Even if you have what seems to be a near-identical architecture as the one described in this guide, each OpenStack cloud is different. As a result, you must still test upgrades between versions in your environment. For this, you need an approximate clone of your environment."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:39(para)
msgid "However, that is not to say that it needs to be the same size or use identical hardware as the production environment—few of us have that luxury. It is important to consider the hardware and scale of the cloud that you are upgrading, but these tips can help you avoid that incredible cost:<indexterm class=\"singular\"><primary>upgrading</primary><secondary>controlling cost of</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:51(term)
msgid "Use your own cloud"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:54(para)
msgid "The simplest place to start testing the next version of OpenStack is by setting up a new environment inside your own cloud. This might seem odd—especially the double virtualization used in running compute nodes—but it's a sure way to very quickly test your configuration."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:63(term)
msgid "Use a public cloud"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:66(para)
msgid "Especially because your own cloud is unlikely to have sufficient space to scale test to the level of the entire cloud, consider using a public cloud to test the scalability limits of your cloud controller configuration. Most public clouds bill by the hour, which means it can be inexpensive to perform even a test with many nodes.<indexterm class=\"singular\"><primary>cloud controllers</primary><secondary>scalability and</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:80(term)
msgid "Make another storage endpoint on the same system"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:83(para)
msgid "If you use an external storage plug-in or shared file system with your cloud, in many cases, you can test whether it works by creating a second share or endpoint. This action enables you to test the system before entrusting the new version onto your storage."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:92(term)
msgid "Watch the network"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:95(para)
msgid "Even at smaller-scale testing, look for excess network packets to determine whether something is going horribly wrong in inter-component communication."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:102(para)
msgid "To set up the test environment, you can use one of several methods:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:109(para)
msgid "Do a full manual install by using the <link href=\"http://docs.openstack.org/\"><citetitle>OpenStack Installation Guide</citetitle></link> for your platform. Review the final configuration files and installed packages."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:116(para)
msgid "Create a clone of your automated configuration infrastructure with changed package repository URLs."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:118(para)
msgid "Alter the configuration until it works."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:121(para)
msgid "Either approach is valid. Use the approach that matches your experience."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:123(para)
msgid "An upgrade pre-testing system is excellent for getting the configuration to work; however, it is important to note that the historical use of the system and differences in user interaction can affect the success of upgrades, too. We've seen experiences where database migrations encountered a bug (later fixed!) because of slight table differences between fresh installs and those that migrated from one version to another."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:130(para)
msgid "If possible, we highly recommended that you dump your production database tables and test the upgrade in your development environment using this data. As stated above, several MySQL bugs have been uncovered during database migrations that will only be hit on large real datasets. You do not want to find this out in the middle of a production outage."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:137(para)
msgid "Artificial scale testing can go only so far. After your cloud is upgraded, you must pay careful attention to the performance aspects of your cloud."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:143(title)
msgid "Preparing for a Rollback"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:145(para)
msgid "Like all major system upgrades, your upgrade could fail for one or more difficult-to-determine reasons. You should prepare for this situation by leaving the ability to roll back your environment to the previous release, including databases, configuration files, and packages. We provide an example process for rolling back your environment in <xref linkend=\"ops_upgrades-roll-back\"/>.<indexterm class=\"singular\"><primary>upgrading</primary><secondary>process overview</secondary></indexterm><indexterm class=\"singular\"><primary>rollbacks</primary><secondary>preparing for</secondary></indexterm><indexterm class=\"singular\"><primary>upgrading</primary><secondary>preparation for</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:168(para)
msgid "The upgrade process generally follows these steps:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:172(para)
msgid "Perform some \"cleaning\" of the environment prior to starting the upgrade process to ensure a consistent state. For example, instances not fully purged from the system after deletion might cause indeterminate behavior."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:179(para)
msgid "Read the release notes and documentation."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:183(para)
msgid "Find incompatibilities between your versions."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:187(para)
msgid "Develop an upgrade procedure and assess it thoroughly by using a test environment similar to your production environment."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:193(para)
msgid "Make a full database backup of your production data. As of Kilo, database downgrades are not supported, and the only method available to get back to a prior database version will be to restore from backup."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:200(para)
msgid "Run the upgrade procedure on the production environment."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:205(para)
msgid "You can perform an upgrade with operational instances, but this strategy can be dangerous. You might consider using live migration to temporarily relocate instances to other compute nodes while performing upgrades. However, you must ensure database consistency throughout the process; otherwise your environment might become unstable. Also, don't forget to provide sufficient notice to your users, including giving them plenty of time to perform their own backups."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:214(para)
msgid "The following order for service upgrades seems the most successful:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:219(para)
msgid "Upgrade the OpenStack Identity Service (keystone)."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:224(para)
msgid "Upgrade the OpenStack Image service (glance)."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:228(para)
msgid "Upgrade OpenStack Compute (nova), including networking components."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:233(para)
msgid "Upgrade OpenStack Block Storage (cinder)."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:237(para)
msgid "Upgrade the OpenStack dashboard."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:241(para)
msgid "The general upgrade process includes the following steps:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:246(para)
msgid "Create a backup of configuration files and databases."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:251(para)
msgid "Update the configuration files according to the release notes."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:256(para)
msgid "Upgrade the packages by using your distribution's package manager."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:261(para)
msgid "Stop services, update database schemas, and restart services."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:266(para)
msgid "Verify proper operation of your environment."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:272(title)
msgid "Upgrade Levels"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:273(para)
msgid "Upgrade levels are a feature added to OpenStack Compute in the Grizzly release to provide version locking on the RPC (Message Queue) communications between the various Compute services."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:277(para)
msgid "This functionality is an important piece of the puzzle when it comes to live upgrades and is conceptually similar to the existing API versioning that allows OpenStack services of different versions to communicate without issue, for example Grizzly Compute can still make Grizzly Identity API calls even if Identity is running Icehouse."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:283(para)
msgid "Without upgrade levels, an X+1 version Compute service can receive and understand X version RPC messages, but it can only send out X+1 version RPC messages. For example, if a <systemitem class=\"service\">nova-conductor</systemitem> process has been upgraded to Icehouse, then the conductor service will be able to understand messages from Havana <systemitem class=\"service\">nova-compute</systemitem> processes, but those compute services will not be able to understand messages sent by the conductor service."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:292(para)
msgid "During an upgrade, operators can add configuration options to <filename>nova.conf</filename> which lock the version of RPC messages and allow live upgrading of the services without interruption caused by version mismatch. The configuration options allow the specification of RPC version numbers if desired, but release name alias are also supported. For example:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:302(para)
msgid "will keep the RPC version locked across the specified services to the RPC version used in Havana. As all instances of a particular service are upgraded to the newer version, the corresponding line can be removed from <filename>nova.conf</filename>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:306(para)
msgid "Using this functionality, ideally one would lock the RPC version to the OpenStack version being upgraded from on <systemitem class=\"service\">nova-compute</systemitem> nodes, to ensure that, for example Havana <systemitem class=\"service\">nova-compute</systemitem> processes will continue to work with Grizzly <systemitem class=\"service\">nova-conductor</systemitem> processes while the upgrade completes. Once the upgrade of <systemitem class=\"service\">nova-compute</systemitem> processes is complete, the operator can move onto upgrading <systemitem class=\"service\">nova-conductor</systemitem> and remove the version locking for <systemitem class=\"service\">nova-compute</systemitem> in <filename>nova.conf</filename>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:324(title)
msgid "How to Perform an Upgrade from Grizzly to Havana—Ubuntu"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:329(para)
msgid "For this section, we assume that you are starting with the architecture provided in the OpenStack <link href=\"http://docs.openstack.org/havana/install-guide/install/apt/content/\"><citetitle>OpenStack Installation Guide</citetitle></link> and upgrading to the same architecture for Havana. All nodes should run Ubuntu 12.04 LTS. This section primarily addresses upgrading core OpenStack services, such as the Identity Service (keystone), Image service (glance), Compute (nova) including networking, Block Storage (cinder), and the dashboard.<indexterm class=\"startofrange\" xml:id=\"UPubuntu\"><primary>upgrading</primary><secondary>Grizzly to Havana (Ubuntu)</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:344(title) ./doc/openstack-ops/ch_ops_upgrades.xml:715(title) ./doc/openstack-ops/ch_ops_upgrades.xml:1095(title) ./doc/openstack-ops/ch_ops_upgrades.xml:1580(title)
msgid "Impact on Users"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:346(para) ./doc/openstack-ops/ch_ops_upgrades.xml:717(para)
msgid "The upgrade process interrupts management of your environment, including the dashboard. If you properly prepare for this upgrade, tenant instances continue to operate normally."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:353(title) ./doc/openstack-ops/ch_ops_upgrades.xml:724(title) ./doc/openstack-ops/ch_ops_upgrades.xml:1104(title) ./doc/openstack-ops/ch_ops_upgrades.xml:1589(title)
msgid "Upgrade Considerations"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:355(para) ./doc/openstack-ops/ch_ops_upgrades.xml:726(para)
msgid "Always review the <link href=\"https://wiki.openstack.org/wiki/ReleaseNotes/Havana\">release notes</link> before performing an upgrade to learn about newly available features that you might want to enable and deprecated features that you should disable."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:363(title) ./doc/openstack-ops/ch_ops_upgrades.xml:734(title) ./doc/openstack-ops/ch_ops_upgrades.xml:1149(title) ./doc/openstack-ops/ch_ops_upgrades.xml:1637(title)
msgid "Perform a Backup"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:365(para)
msgid "Save the configuration files on all nodes, as shown here:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:374(para) ./doc/openstack-ops/ch_ops_upgrades.xml:744(para) ./doc/openstack-ops/ch_ops_upgrades.xml:1160(para) ./doc/openstack-ops/ch_ops_upgrades.xml:1648(para) ./doc/openstack-ops/ch_ops_upgrades.xml:2132(para)
msgid "You can modify this example script on each node to handle different services."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:378(para) ./doc/openstack-ops/ch_ops_upgrades.xml:1165(para) ./doc/openstack-ops/ch_ops_upgrades.xml:1653(para) ./doc/openstack-ops/ch_ops_upgrades.xml:2137(para)
msgid "Back up all databases on the controller:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:385(title) ./doc/openstack-ops/ch_ops_upgrades.xml:755(title) ./doc/openstack-ops/ch_ops_upgrades.xml:1180(title) ./doc/openstack-ops/ch_ops_upgrades.xml:1668(title)
msgid "Manage Repositories"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:387(para) ./doc/openstack-ops/ch_ops_upgrades.xml:757(para)
msgid "On all nodes, remove the repository for Grizzly packages and add the repository for Havana packages:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:394(para) ./doc/openstack-ops/ch_ops_upgrades.xml:766(para)
msgid "Make sure any automatic updates are disabled."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:399(title) ./doc/openstack-ops/ch_ops_upgrades.xml:777(title)
msgid "Update Configuration Files"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:401(para) ./doc/openstack-ops/ch_ops_upgrades.xml:779(para)
msgid "Update the glance configuration on the controller node for compatibility with <phrase role=\"keep-together\">Havana</phrase>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:405(para) ./doc/openstack-ops/ch_ops_upgrades.xml:783(para)
msgid "Add or modify the following keys in the <filename>/etc/glance/glance-api.conf</filename> and <filename>/etc/glance/glance-registry.conf</filename> files:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:420(para)
msgid "If currently present, remove the following key from the <literal>[filter:authtoken]</literal> section in the <filename>/etc/glance/glance-api-paste.ini</filename> and <filename>/etc/glance/glance-registry-paste.ini</filename> files:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:429(para) ./doc/openstack-ops/ch_ops_upgrades.xml:823(para)
msgid "Update the nova configuration on all nodes for compatibility with Havana."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:432(para) ./doc/openstack-ops/ch_ops_upgrades.xml:826(para)
msgid "Add the <literal>[database]</literal> section and associated key to the <filename>/etc/nova/nova.conf</filename> file:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:439(para)
msgid "Remove defunct configuration from the <literal>[DEFAULT]</literal> section in the <filename>/etc/nova/nova.conf</filename> file:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:446(para) ./doc/openstack-ops/ch_ops_upgrades.xml:838(para)
msgid "Add or modify the following keys in the <filename>/etc/nova/nova.conf</filename> file:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:458(para)
msgid "On all compute nodes, increase the DHCP lease time (measured in seconds) in the <filename>/etc/nova/nova.conf</filename> file to enable currently active instances to continue leasing their IP addresses during the upgrade process:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:469(para) ./doc/openstack-ops/ch_ops_upgrades.xml:862(para)
msgid "Setting this value too high might cause more dynamic environments to run out of available IP addresses. Use an appropriate value for your environment."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:474(para)
msgid "You must restart dnsmasq and the networking component of Compute to enable the new DHCP lease time:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:480(para)
msgid "Update the Cinder configuration on the controller and storage nodes for compatibility with Havana."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:483(para) ./doc/openstack-ops/ch_ops_upgrades.xml:890(para)
msgid "Add or modify the following key in the <filename>/etc/cinder/cinder.conf</filename> file:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:489(para) ./doc/openstack-ops/ch_ops_upgrades.xml:896(para)
msgid "Update the dashboard configuration on the controller node for compatibility with Havana."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:492(para)
msgid "The dashboard installation procedure and configuration file changed substantially between Grizzly and Havana. Particularly, if you are running Django 1.5 or later, you must ensure that <filename>/etc/openstack-dashboard/local_settings</filename> contains a correctly configured <placeholder-1/> key that contains a list of host names recognized by the dashboard."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:501(para) ./doc/openstack-ops/ch_ops_upgrades.xml:908(para)
msgid "If users access your dashboard by using <emphasis>http://dashboard.example.com</emphasis>, define <placeholder-1/>, as follows:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:507(para) ./doc/openstack-ops/ch_ops_upgrades.xml:914(para)
msgid "If users access your dashboard on the local system, define <placeholder-1/>, as follows:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:512(para) ./doc/openstack-ops/ch_ops_upgrades.xml:919(para)
msgid "If users access your dashboard by using an IP address in addition to a host name, define <placeholder-1/>, as follows:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:520(title) ./doc/openstack-ops/ch_ops_upgrades.xml:927(title)
msgid "Upgrade Packages on the Controller Node"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:522(para)
msgid "Upgrade packages on the controller node to Havana, as follows:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:529(para) ./doc/openstack-ops/ch_ops_upgrades.xml:1206(para)
msgid "Depending on your specific configuration, performing a <code>dist-upgrade</code> might restart services supplemental to your OpenStack environment. For example, if you use Open-iSCSI for Block Storage volumes and the upgrade includes a new <code>open-scsi</code> package, the package manager restarts Open-iSCSI services, which might cause the volumes for your users to be disconnected."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:538(para)
msgid "The package manager prompts you to update various configuration files. Reject these changes. The package manager appends <filename>.dpkg-dist</filename> to the newer versions of existing configuration files. You should consider adopting conventions associated with the newer configuration files and merging them with your existing configuration files after completing the upgrade process."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:548(title) ./doc/openstack-ops/ch_ops_upgrades.xml:956(title)
msgid "Stop Services, Update Database Schemas, and Restart Services on the Controller Node"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:551(para) ./doc/openstack-ops/ch_ops_upgrades.xml:959(para)
msgid "Stop each service, run the database synchronization command if necessary to update the associated database schema, and restart each service to apply the new configuration. Some services require additional commands:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:558(term) ./doc/openstack-ops/ch_ops_upgrades.xml:966(term)
msgid "OpenStack Identity"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:569(term) ./doc/openstack-ops/ch_ops_upgrades.xml:977(term)
msgid "OpenStack Image service"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:581(term) ./doc/openstack-ops/ch_ops_upgrades.xml:989(term)
msgid "OpenStack Compute"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:601(term) ./doc/openstack-ops/ch_ops_upgrades.xml:1009(term)
msgid "OpenStack Block Storage"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:613(para) ./doc/openstack-ops/ch_ops_upgrades.xml:1021(para)
msgid "The controller node update is complete. Now you can upgrade the compute nodes."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:618(title) ./doc/openstack-ops/ch_ops_upgrades.xml:1026(title)
msgid "Upgrade Packages and Restart Services on the Compute Nodes"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:621(para) ./doc/openstack-ops/ch_ops_upgrades.xml:1029(para)
msgid "Upgrade packages on the compute nodes to Havana:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:627(para) ./doc/openstack-ops/ch_ops_upgrades.xml:673(para) ./doc/openstack-ops/ch_ops_upgrades.xml:1034(para)
msgid "Make sure you have removed the repository for Grizzly packages and added the repository for Havana packages."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:633(para)
msgid "Due to a packaging issue, this command might fail with the following error:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:645(para)
msgid "Fix this issue by running this command:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:650(para)
msgid "The packaging system prompts you to update the <filename>/etc/nova/api-paste.ini</filename> file. As with the controller upgrade, we recommend that you reject these changes and review the <filename>.dpkg-dist</filename> file after the upgrade process completes."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:656(para)
msgid "To restart compute services:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:664(title) ./doc/openstack-ops/ch_ops_upgrades.xml:1052(title)
msgid "Upgrade Packages and Restart Services on the Block Storage Nodes"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:667(para) ./doc/openstack-ops/ch_ops_upgrades.xml:1055(para)
msgid "Upgrade packages on the storage nodes to Havana:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:678(para)
msgid "The packaging system prompts you to update the <filename>/etc/cinder/api-paste.ini</filename> file. Like the controller upgrade, reject these changes and review the <filename>.dpkg-dist</filename> file after the the upgrade process completes."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:686(para)
msgid "To restart Block Storage services:<indexterm class=\"endofrange\" startref=\"UPubuntu\"/>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:694(title)
msgid "How to Perform an Upgrade from Grizzly to Havana—Red Hat Enterprise Linux and Derivatives"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:699(para)
msgid "For this section, we assume that you are starting with the architecture provided in the OpenStack <link href=\"http://docs.openstack.org/havana/install-guide/install/yum/content/\"><citetitle>OpenStack Installation Guide</citetitle></link> and upgrading to the same architecture for Havana. All nodes should run Red Hat Enterprise Linux 6.4 or compatible derivatives. Newer minor releases should also work. This section primarily addresses upgrading core OpenStack services, such as the Identity Service (keystone), Image service (glance), Compute (nova) including networking, Block Storage (cinder), and the dashboard.<indexterm class=\"startofrange\" xml:id=\"UPredhat\"><primary>upgrading</primary><secondary>Grizzly to Havana (Red Hat)</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:736(para)
msgid "First, save the configuration files on all nodes:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:748(para)
msgid "Next, back up all databases on the controller:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:770(para)
msgid "Consider checking for newer versions of the <link href=\"https://repos.fedorapeople.org/repos/openstack/EOL/openstack-havana/\">Havana repository</link>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:814(para)
msgid "If currently present, remove the following key from the [filter:authtoken] section in the <filename>/etc/glance/glance-api-paste.ini</filename> and <filename>/etc/glance/glance-registry-paste.ini</filename> files:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:833(para)
msgid "Remove defunct database configuration from the <filename>/etc/nova/nova.conf</filename> file:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:852(para)
msgid "On all compute nodes, increase the DHCP lease time (measured in seconds) in the <filename>/etc/nova/nova.conf</filename> file to enable currently active instances to continue leasing their IP addresses during the upgrade process, as follows:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:869(para)
msgid "You must restart dnsmasq and the nova networking service to enable the new DHCP lease time:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:875(para)
msgid "Update the cinder configuration on the controller and storage nodes for compatibility with Havana."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:878(para)
msgid "Add the <literal>[database]</literal> section and associated key to the <filename>/etc/cinder/cinder.conf</filename> file:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:885(para)
msgid "Remove defunct database configuration from the <filename>/etc/cinder/cinder.conf</filename> file:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:899(para)
msgid "The dashboard installation procedure and configuration file changed substantially between Grizzly and Havana. Particularly, if you are running Django 1.5 or later, you must ensure that the <filename>/etc/openstack-dashboard/local_settings</filename> file contains a correctly configured <placeholder-1/> key that contains a list of host names recognized by the dashboard."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:929(para)
msgid "Upgrade packages on the controller node to Havana:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:934(para)
msgid "Some services might terminate with an error during the package upgrade process. If this error might cause a problem with your environment, consider stopping all services before upgrading them to Havana."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:940(para)
msgid "Install the OpenStack SELinux package on the controller node:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:946(para)
msgid "The package manager appends <filename>.rpmnew</filename> to the end of newer versions of existing configuration files. You should consider adopting conventions associated with the newer configuration files and merging them with your existing configuration files after completing the upgrade process."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1039(para)
msgid "Install the OpenStack SELinux package on the compute nodes:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1044(para)
msgid "Restart compute services:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1060(para)
msgid "Make sure you have removed the repository for Grizzly packages and added the repository for Havana packages.<indexterm class=\"endofrange\" startref=\"UPredhat\"/>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1066(para)
msgid "Install the OpenStack SELinux package on the storage nodes:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1071(para)
msgid "Restart Block Storage services:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1077(title)
msgid "How to Perform an Upgrade from Havana to Icehouse—Ubuntu"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1080(para)
msgid "For this section, we assume that you are starting with the architecture provided in the <link href=\"http://docs.openstack.org/havana/install-guide/install/apt/content/\"><citetitle>OpenStack Installation Guide</citetitle></link> and upgrading to the same architecture for Icehouse. All nodes should run Ubuntu 12.04 LTS with Linux kernel 3.11 and the latest Havana packages installed and operational. This section primarily addresses upgrading core OpenStack services such as Identity (keystone), Image service (glance), Compute (nova), Networking (neutron), Block Storage (cinder), and the dashboard. The Networking upgrade includes conversion from the Open vSwitch (OVS) plug-in to the Modular Layer 2 (M2) plug-in. This section does not cover the upgrade process from Ubuntu 12.04 LTS to Ubuntu 14.04 LTS."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1096(para)
msgid "The upgrade process interrupts management of your environment, including the dashboard. If you properly prepare for this upgrade, tenant instances should continue to operate normally. However, instances might experience intermittent network interruptions while the Networking service rebuilds virtual networking infrastructure."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1107(para) ./doc/openstack-ops/ch_ops_upgrades.xml:1592(para)
msgid "Review the <link href=\"https://wiki.openstack.org/wiki/ReleaseNotes/Icehouse\">Icehouse Release Notes</link> before you upgrade to learn about new features that you might want to enable and deprecated features that you should disable."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1114(para)
msgid "Consider adopting conventions associated with newer configuration files and merging them with your existing configuration files after completing the upgrade process."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1119(para) ./doc/openstack-ops/ch_ops_upgrades.xml:1607(para)
msgid "Icehouse disables file injection by default per the <link href=\"https://wiki.openstack.org/wiki/ReleaseNotes/Icehouse\">Icehouse Release Notes</link>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1123(para) ./doc/openstack-ops/ch_ops_upgrades.xml:1611(para)
msgid "If you plan to deploy Icehouse in stages, you must disable file injection on all compute nodes that remain on Havana. This is done by editing the <filename>/etc/nova/nova-compute.conf</filename> file:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1132(para) ./doc/openstack-ops/ch_ops_upgrades.xml:1620(para)
msgid "You must convert the configuration for your environment contained in the <filename>/etc/neutron/neutron.conf</filename> and <filename>/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini</filename> files from OVS to ML2. For example, the <link href=\"http://docs.openstack.org/icehouse/install-guide/install/apt/content/\"><citetitle>OpenStack Installation Guide</citetitle></link> covers <link href=\"http://docs.openstack.org/icehouse/install-guide/install/apt/content/section_neutron-networking-ml2.html\">ML2 plug-in configuration</link> using GRE tunnels."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1143(para) ./doc/openstack-ops/ch_ops_upgrades.xml:1631(para)
msgid "Keep the OVS plug-in packages and configuration files until you verify the upgrade."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1152(para) ./doc/openstack-ops/ch_ops_upgrades.xml:1640(para) ./doc/openstack-ops/ch_ops_upgrades.xml:2124(para)
msgid "Save the configuration files on all nodes:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1168(para)
msgid "Although not necessary, you should consider updating your MySQL server configuration as described in the <link href=\"http://docs.openstack.org/icehouse/install-guide/install/apt/content/basics-database-controller.html\">MySQL controller setup</link> section of the <link href=\"http://docs.openstack.org/icehouse/install-guide/install/apt/content/\"><citetitle>OpenStack Installation Guide</citetitle></link>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1182(para) ./doc/openstack-ops/ch_ops_upgrades.xml:1670(para)
msgid "Complete the following actions on all nodes."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1184(para) ./doc/openstack-ops/ch_ops_upgrades.xml:1672(para)
msgid "Remove the repository for Havana packages:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1188(para) ./doc/openstack-ops/ch_ops_upgrades.xml:1676(para)
msgid "Add the repository for Icehouse packages:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1192(para)
msgid "Disable any <link href=\"https://help.ubuntu.com/12.04/serverguide/automatic-updates.html\">automatic package updates</link>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1199(title) ./doc/openstack-ops/ch_ops_upgrades.xml:1691(title)
msgid "Upgrade the Controller Node"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1202(para) ./doc/openstack-ops/ch_ops_upgrades.xml:1694(para)
msgid "Upgrade packages on the controller node to Icehouse:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1216(para)
msgid "When the package manager prompts you to update various configuration files, reject the changes. The package manager appends <filename>.dpkg-dist</filename> to newer versions of the configuration files. To find newer versions of configuration files, enter the following command:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1226(title) ./doc/openstack-ops/ch_ops_upgrades.xml:1705(title)
msgid "Upgrade Each Service"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1227(para)
msgid "The upgrade procedure for each service generally requires that you stop the service, run the database synchronization command to update the associated database, and start the service to apply the new configuration. You will need administrator privileges to perform these procedures. Some services will require additional steps."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1234(title) ./doc/openstack-ops/ch_ops_upgrades.xml:1713(title)
msgid "Upgrade OpenStack Identity"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1236(para) ./doc/openstack-ops/ch_ops_upgrades.xml:1715(para)
msgid "Edit the <filename>/etc/keystone/keystone.conf</filename> file for compatibility for Icehouse:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1239(para) ./doc/openstack-ops/ch_ops_upgrades.xml:1283(para) ./doc/openstack-ops/ch_ops_upgrades.xml:1718(para) ./doc/openstack-ops/ch_ops_upgrades.xml:1762(para)
msgid "Add the <literal>[database]</literal> section."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1240(para) ./doc/openstack-ops/ch_ops_upgrades.xml:1719(para)
msgid "Move the <placeholder-1/> key from the<literal>[sql]</literal> section to the <literal>[database]</literal> section."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1245(para) ./doc/openstack-ops/ch_ops_upgrades.xml:1303(para) ./doc/openstack-ops/ch_ops_upgrades.xml:1327(para) ./doc/openstack-ops/ch_ops_upgrades.xml:1724(para) ./doc/openstack-ops/ch_ops_upgrades.xml:1779(para) ./doc/openstack-ops/ch_ops_upgrades.xml:1805(para)
msgid "Stop the services:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1248(para) ./doc/openstack-ops/ch_ops_upgrades.xml:1307(para) ./doc/openstack-ops/ch_ops_upgrades.xml:1335(para) ./doc/openstack-ops/ch_ops_upgrades.xml:1409(para) ./doc/openstack-ops/ch_ops_upgrades.xml:1456(para) ./doc/openstack-ops/ch_ops_upgrades.xml:1728(para) ./doc/openstack-ops/ch_ops_upgrades.xml:1785(para) ./doc/openstack-ops/ch_ops_upgrades.xml:1813(para) ./doc/openstack-ops/ch_ops_upgrades.xml:1888(para) ./doc/openstack-ops/ch_ops_upgrades.xml:1909(para) ./doc/openstack-ops/ch_ops_upgrades.xml:1953(para)
msgid "Upgrade the database:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1252(para) ./doc/openstack-ops/ch_ops_upgrades.xml:2299(para) ./doc/openstack-ops/ch_ops_upgrades.xml:2371(para) ./doc/openstack-ops/ch_ops_upgrades.xml:2421(para) ./doc/openstack-ops/ch_ops_upgrades.xml:2453(para) ./doc/openstack-ops/ch_ops_upgrades.xml:2484(para)
msgid "Start the services."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1257(title)
msgid "Upgrade OpenStack Image service"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1258(para) ./doc/openstack-ops/ch_ops_upgrades.xml:1737(para)
msgid "Before upgrading the Image service database, you must convert the character set for each table to UTF-8."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1260(para) ./doc/openstack-ops/ch_ops_upgrades.xml:1351(para) ./doc/openstack-ops/ch_ops_upgrades.xml:1829(para)
msgid "Use the MySQL client to execute the following commands:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1273(para)
msgid "Your environment might contain different or additional tables that you must also convert to UTF-8 by using similar commands."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1279(para) ./doc/openstack-ops/ch_ops_upgrades.xml:1758(para)
msgid "Edit the <filename>/etc/glance/glance-api.conf</filename> and <filename>/etc/glance/glance-registry.conf</filename> files for compatibility with Icehouse:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1285(para) ./doc/openstack-ops/ch_ops_upgrades.xml:1764(para)
msgid "Rename the <placeholder-1/> key to <placeholder-2/> and move it to the <literal>[database]</literal> section."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1291(para)
msgid "In the <filename>/etc/glance/glance-api.conf</filename> file, add RabbitMQ message broker keys to the <literal>[DEFAULT]</literal> section."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1297(replaceable) ./doc/openstack-ops/ch_ops_upgrades.xml:1433(replaceable) ./doc/openstack-ops/ch_ops_upgrades.xml:1437(replaceable) ./doc/openstack-ops/ch_ops_upgrades.xml:1777(replaceable) ./doc/openstack-ops/ch_ops_upgrades.xml:1930(replaceable) ./doc/openstack-ops/ch_ops_upgrades.xml:1934(replaceable) ./doc/openstack-ops/ch_ops_upgrades.xml:2224(replaceable) ./doc/openstack-ops/ch_ops_upgrades.xml:2233(replaceable) ./doc/openstack-ops/ch_ops_upgrades.xml:2441(replaceable) ./doc/openstack-ops/ch_ops_upgrades.xml:2532(replaceable) ./doc/openstack-ops/ch_ops_upgrades.xml:2541(replaceable) ./doc/openstack-ops/ch_ops_upgrades.xml:2629(replaceable) ./doc/openstack-ops/ch_ops_upgrades.xml:2638(replaceable) ./doc/openstack-ops/ch_ops_upgrades.xml:2794(replaceable) ./doc/openstack-ops/ch_ops_upgrades.xml:2803(replaceable)
msgid "controller"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1298(replaceable)
msgid "RABBIT_PASS"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1299(para)
msgid "Replace <replaceable>RABBIT_PASS</replaceable> with the password you chose for the <literal>guest</literal> account in <application>RabbitMQ</application>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1310(para) ./doc/openstack-ops/ch_ops_upgrades.xml:1338(para) ./doc/openstack-ops/ch_ops_upgrades.xml:1731(para) ./doc/openstack-ops/ch_ops_upgrades.xml:1788(para) ./doc/openstack-ops/ch_ops_upgrades.xml:1816(para)
msgid "Start the services:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1316(title)
msgid "Upgrading OpenStack Compute"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1317(para)
msgid "Edit the <filename>/etc/nova/nova.conf</filename> file and change the <placeholder-1/> key from <literal>nova.rpc.impl_kombu</literal> to <literal>rabbit</literal>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1321(para) ./doc/openstack-ops/ch_ops_upgrades.xml:1799(para)
msgid "Edit the <filename>/etc/nova/api-paste.ini</filename> file and comment out or remove any keys in the <literal>[filter:authtoken]</literal> section beneath the <literal>paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory</literal> statement."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1348(title) ./doc/openstack-ops/ch_ops_upgrades.xml:1826(title)
msgid "Upgrade OpenStack Networking"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1349(para) ./doc/openstack-ops/ch_ops_upgrades.xml:1827(para)
msgid "Before upgrading the Networking database, you must convert the character set for each table to UTF-8."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1388(para) ./doc/openstack-ops/ch_ops_upgrades.xml:1866(para)
msgid "Your environment might use a different database name. Also, it might contain different or additional tables that you must also convert to UTF-8 by using similar commands."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1394(para) ./doc/openstack-ops/ch_ops_upgrades.xml:1495(para) ./doc/openstack-ops/ch_ops_upgrades.xml:1527(para) ./doc/openstack-ops/ch_ops_upgrades.xml:1875(para) ./doc/openstack-ops/ch_ops_upgrades.xml:1995(para) ./doc/openstack-ops/ch_ops_upgrades.xml:2034(para)
msgid "Populate the <filename>/etc/neutron/plugins/ml2/ml2_conf.ini</filename> file with the equivalent configuration for your environment."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1397(para) ./doc/openstack-ops/ch_ops_upgrades.xml:1879(para)
msgid "Do not edit the <filename>/etc/neutron/neutron.conf</filename> file until after the conversion steps."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1402(para) ./doc/openstack-ops/ch_ops_upgrades.xml:1902(para)
msgid "Because the conversion script cannot roll back, you must perform a database backup prior to executing the following commands."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1406(para) ./doc/openstack-ops/ch_ops_upgrades.xml:1906(para)
msgid "Stop the service:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1415(para) ./doc/openstack-ops/ch_ops_upgrades.xml:1893(para) ./doc/openstack-ops/ch_ops_upgrades.xml:1915(para)
msgid "Perform the conversion from OVS to ML2:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1418(para) ./doc/openstack-ops/ch_ops_upgrades.xml:1896(para)
msgid "Replace <replaceable>NEUTRON_DBPASS</replaceable> with the password you chose for the database."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1422(para) ./doc/openstack-ops/ch_ops_upgrades.xml:1919(para)
msgid "Edit the <filename>/etc/neutron/neutron.conf</filename> file to use the ML2 plug-in and enable network change notifications:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1435(replaceable) ./doc/openstack-ops/ch_ops_upgrades.xml:1932(replaceable)
msgid "SERVICE_TENANT_ID"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1436(replaceable) ./doc/openstack-ops/ch_ops_upgrades.xml:1933(replaceable)
msgid "NOVA_PASS"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1439(para) ./doc/openstack-ops/ch_ops_upgrades.xml:1936(para)
msgid "Replace <replaceable>SERVICE_TENANT_ID</replaceable> with the service tenant identifier (id) in the Identity service and <replaceable>NOVA_PASS</replaceable> with the password you chose for the <literal>nova</literal> user in the Identity service."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1445(para)
msgid "Start Networking services:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1450(title) ./doc/openstack-ops/ch_ops_upgrades.xml:1947(title)
msgid "Upgrade OpenStack Block Storage"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1451(para) ./doc/openstack-ops/ch_ops_upgrades.xml:1885(para) ./doc/openstack-ops/ch_ops_upgrades.xml:1948(para)
msgid "Stop services:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1459(para) ./doc/openstack-ops/ch_ops_upgrades.xml:1956(para)
msgid "Start services:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1466(title) ./doc/openstack-ops/ch_ops_upgrades.xml:1963(title)
msgid "Update Dashboard"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1467(para)
msgid "Edit the <filename>/etc/openstack-dashboard/local_settings.py</filename> file, and change the <placeholder-1/> key from <literal>\"Member\"</literal> to <literal>\"_member_\"</literal>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1472(para) ./doc/openstack-ops/ch_ops_upgrades.xml:1968(para)
msgid "Restart Dashboard services:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1478(title) ./doc/openstack-ops/ch_ops_upgrades.xml:1976(title)
msgid "Upgrade the Network Node"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1480(para) ./doc/openstack-ops/ch_ops_upgrades.xml:1978(para)
msgid "Upgrade packages on the network node to Icehouse:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1482(para) ./doc/openstack-ops/ch_ops_upgrades.xml:1514(para) ./doc/openstack-ops/ch_ops_upgrades.xml:1547(para) ./doc/openstack-ops/ch_ops_upgrades.xml:1980(para) ./doc/openstack-ops/ch_ops_upgrades.xml:2019(para) ./doc/openstack-ops/ch_ops_upgrades.xml:2059(para)
msgid "Make sure you have removed the repository for Havana packages and added the repository for Icehouse packages."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1489(para) ./doc/openstack-ops/ch_ops_upgrades.xml:1521(para) ./doc/openstack-ops/ch_ops_upgrades.xml:1989(para) ./doc/openstack-ops/ch_ops_upgrades.xml:2028(para)
msgid "Edit the <filename>/etc/neutron/neutron.conf</filename> file to use the ML2 plug-in:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1498(para) ./doc/openstack-ops/ch_ops_upgrades.xml:1531(para) ./doc/openstack-ops/ch_ops_upgrades.xml:2003(para) ./doc/openstack-ops/ch_ops_upgrades.xml:2043(para)
msgid "Clean the active OVS configuration:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1501(para) ./doc/openstack-ops/ch_ops_upgrades.xml:1534(para) ./doc/openstack-ops/ch_ops_upgrades.xml:2006(para) ./doc/openstack-ops/ch_ops_upgrades.xml:2046(para)
msgid "Restart Networking services:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1510(title) ./doc/openstack-ops/ch_ops_upgrades.xml:2015(title)
msgid "Upgrade the Compute Nodes"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1512(para) ./doc/openstack-ops/ch_ops_upgrades.xml:2017(para)
msgid "Upgrade packages on the compute nodes to Icehouse:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1537(para) ./doc/openstack-ops/ch_ops_upgrades.xml:2049(para)
msgid "Restart Compute services:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1543(title) ./doc/openstack-ops/ch_ops_upgrades.xml:2055(title)
msgid "Upgrade the Storage Nodes"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1545(para) ./doc/openstack-ops/ch_ops_upgrades.xml:2057(para)
msgid "Upgrade packages on the storage nodes to Icehouse:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1554(para)
msgid "Restart Block Storage services."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1562(title)
msgid "How to Perform an Upgrade from Havana to Icehouse—Red Hat Enterprise Linux and Derivatives"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1565(para)
msgid "For this section, we assume that you are starting with the architecture provided in the OpenStack <link href=\"http://docs.openstack.org/havana/install-guide/install/yum/content/\"><citetitle>OpenStack Installation Guide</citetitle></link> and upgrading to the same architecture for Icehouse. All nodes should run Red Hat Enterprise Linux 6.5 or compatible derivatives such as CentOS and Scientific Linux with the latest Havana packages installed and operational. This section primarily addresses upgrading core OpenStack services such as Identity (keystone), Image service (glance), Compute (nova), Networking (neutron), Block Storage (cinder), and the dashboard. The Networking upgrade procedure includes conversion from the Open vSwitch (OVS) plug-in to the Modular Layer 2 (ML2) plug-in."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1581(para)
msgid "The upgrade process interrupts management of your environment, including the dashboard. If you properly prepare for this upgrade, tenant instances continue to operate normally. However, instances might experience intermittent network interruptions while the Networking service rebuilds virtual networking infrastructure."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1599(para)
msgid "Consider adopting conventions associated with newer configuration files and merging them with your existing configuration files after completing the upgrade process. You can find newer versions of existing configuration files with the following command:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1656(para)
msgid "You must update your MySQL server configuration and restart the service as described in the <link href=\"http://docs.openstack.org/icehouse/install-guide/install/yum/content/basics-database-controller.html\">MySQL controller setup</link> section of the <link href=\"http://docs.openstack.org/icehouse/install-guide/install/yum/content/\"><citetitle>OpenStack Installation Guide</citetitle></link>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1681(para)
msgid "Disable any automatic package updates."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1683(para)
msgid "You should check for newer versions of the <link href=\"http://repos.fedorapeople.org/repos/openstack/openstack-icehouse/\">Icehouse repository</link>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1697(para)
msgid "The package manager appends <filename>.rpmnew</filename> to the end of newer versions of existing configuration files."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1706(para)
msgid "The upgrade procedure for each service typically requires that you stop the service, run the database synchronization command to update the associated database, and start the service to apply the new configuration. You will need administrator privileges for these procedures. Some services will require additional steps."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1736(title)
msgid "OpenStack Image service:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1739(para)
msgid "Use the MySQL client to run the following commands:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1752(para)
msgid "Your environment might contain different or additional tables that you must convert to UTF-8 by using similar commands."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1770(para)
msgid "Edit the <filename>/etc/glance/glance-api.conf</filename> file, and add the Qpid message broker keys to the <literal>[DEFAULT]</literal> section:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1780(para)
msgid "Stop services, upgrade the database, and start services:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1794(title)
msgid "Upgrading OpenStack Compute:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1795(para)
msgid "Edit the <filename>/etc/nova/nova.conf</filename> file and change the <placeholder-1/> key from <literal>nova.openstack.common.rpc.impl_qpid</literal> to <literal>qpid</literal>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1872(para) ./doc/openstack-ops/ch_ops_upgrades.xml:1986(para) ./doc/openstack-ops/ch_ops_upgrades.xml:2025(para)
msgid "Install the ML2 plug-in package:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1882(para) ./doc/openstack-ops/ch_ops_upgrades.xml:1999(para) ./doc/openstack-ops/ch_ops_upgrades.xml:2039(para)
msgid "Change the <filename>/etc/neutron/plugin.ini</filename> symbolic link to reference <filename>/etc/neutron/plugins/ml2/ml2_conf.ini</filename>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1942(para)
msgid "Start Networking services."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1964(para)
msgid "Edit the <filename>/etc/openstack-dashboard/local_settings</filename> file and change the <placeholder-1/> key from <literal>\"Member\"</literal> to <literal>\"_member_\"</literal> ."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:1972(para)
msgid "The controller node update is complete. Now you can upgrade the remaining nodes."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2065(para)
msgid "Restart Block Storage service:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2073(title)
msgid "How to Perform an Upgrade from Icehouse to Juno"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2075(para)
msgid "Use this procedure to upgrade a basic operational deployment of the following services: Identity (keystone), Image service (glance), Compute (nova), Networking (neutron), dashboard (horizon), Block Storage (cinder), Orchestration (heat), and Telemetry (ceilometer). This procedure references the basic three-node architecture in the <link href=\"http://docs.openstack.org/icehouse/install-guide/install/apt/content/\"><citetitle>OpenStack Installation Guide</citetitle></link>. All nodes must run a supported distribution of Linux with a recent kernel and latest Icehouse packages."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2085(title)
msgid "Before you begin"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2088(para)
msgid "The upgrade process interrupts management of your environment including the dashboard. If you properly prepare for the upgrade, existing instances, networking, and storage should continue to operate. However, instances might experience intermittent network interruptions."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2095(para)
msgid "Review the <link href=\"http://wiki.openstack.org/wiki/ReleaseNotes/Juno\">release notes</link> before upgrading to learn about new, updated, and deprecated features."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2101(para)
msgid "Consider adopting structure and options from Juno service configuration files and merging them with existing configuration files. The <link href=\"http://docs.openstack.org/juno/config-reference/content/\"><citetitle>OpenStack Configuration Reference</citetitle></link> contains new, updated, and deprecated options for most services."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2110(para)
msgid "For environments using the OpenStack Networking (neutron) service, verify the Icehouse version of the database:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2121(title)
msgid "Perform a backup"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2140(para)
msgid "Consider updating your SQL server configuration as described in the <link href=\"http://docs.openstack.org/juno/install-guide/install/apt/content/\">OpenStack Installation Guide</link>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2149(title)
msgid "Manage repositories"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2151(para)
msgid "Complete the following steps on all nodes."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2153(para)
msgid "Remove the repository for Icehouse packages."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2156(para)
msgid "On Ubuntu, follow these steps:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2159(para) ./doc/openstack-ops/ch_ops_upgrades.xml:2182(para)
msgid "Add the repository for Juno packages:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2163(para)
msgid "Remove any Ubuntu Cloud archive repositories for Icehouse packages. You might also need to install or update the <literal>ubuntu-cloud-keyring</literal> package."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2169(para) ./doc/openstack-ops/ch_ops_upgrades.xml:2186(para)
msgid "Update the repository database."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2174(para)
msgid "On Red Hat Enterprise Linux (RHEL), CentOS, and Fedora, follow these steps:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2178(para)
msgid "Remove the repository for Icehouse packages:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2193(title)
msgid "Controller nodes"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2195(title) ./doc/openstack-ops/ch_ops_upgrades.xml:2500(title) ./doc/openstack-ops/ch_ops_upgrades.xml:2597(title) ./doc/openstack-ops/ch_ops_upgrades.xml:2765(title)
msgid "Upgrade packages to Juno"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2196(para)
msgid "Depending on your specific configuration, upgrading all packages might restart or break services supplemental to your OpenStack environment. For example, if you use the TGT iSCSI framework for Block Storage volumes and the upgrade includes new packages for it, the package manager might restart the TGT iSCSI services and impact connectivity to volumes."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2202(para) ./doc/openstack-ops/ch_ops_upgrades.xml:2510(para) ./doc/openstack-ops/ch_ops_upgrades.xml:2607(para) ./doc/openstack-ops/ch_ops_upgrades.xml:2772(para)
msgid "If the package manager prompts you to update configuration files, reject the changes. The package manager appends a suffix to newer versions of configuration files. Consider reviewing and adopting content from these files."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2208(title) ./doc/openstack-ops/ch_ops_upgrades.xml:2516(title) ./doc/openstack-ops/ch_ops_upgrades.xml:2613(title) ./doc/openstack-ops/ch_ops_upgrades.xml:2778(title)
msgid "Update services"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2209(para) ./doc/openstack-ops/ch_ops_upgrades.xml:2517(para) ./doc/openstack-ops/ch_ops_upgrades.xml:2614(para) ./doc/openstack-ops/ch_ops_upgrades.xml:2779(para)
msgid "To update a service, you generally modify one or more configuration files, stop the service, synchronize the database schema, and start the service. Some services require different steps. We recommend verifying operation of each service before proceeding to the next service."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2215(title) ./doc/openstack-ops/ch_ops_upgrades.xml:2523(title) ./doc/openstack-ops/ch_ops_upgrades.xml:2620(title) ./doc/openstack-ops/ch_ops_upgrades.xml:2785(title)
msgid "All services"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2216(para) ./doc/openstack-ops/ch_ops_upgrades.xml:2524(para) ./doc/openstack-ops/ch_ops_upgrades.xml:2621(para) ./doc/openstack-ops/ch_ops_upgrades.xml:2786(para)
msgid "These configuration changes apply to all services."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2218(para) ./doc/openstack-ops/ch_ops_upgrades.xml:2526(para) ./doc/openstack-ops/ch_ops_upgrades.xml:2623(para) ./doc/openstack-ops/ch_ops_upgrades.xml:2788(para)
msgid "In any file containing the <literal>[keystone_authtoken]</literal> section, modify Identity service access to use the <placeholder-1/> option:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2225(para) ./doc/openstack-ops/ch_ops_upgrades.xml:2533(para) ./doc/openstack-ops/ch_ops_upgrades.xml:2630(para) ./doc/openstack-ops/ch_ops_upgrades.xml:2795(para)
msgid "Comment out any <literal>auth_host</literal>, <literal>auth_port</literal>, and <literal>auth_protocol</literal> options because the <literal>identity_uri</literal> option replaces them."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2231(para) ./doc/openstack-ops/ch_ops_upgrades.xml:2539(para) ./doc/openstack-ops/ch_ops_upgrades.xml:2636(para) ./doc/openstack-ops/ch_ops_upgrades.xml:2801(para)
msgid "In any file containing the <placeholder-1/> option, modify it to explicitly use version 2.0:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2237(title)
msgid "Identity service"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2239(para)
msgid "Edit the <filename>/etc/keystone/keystone.conf</filename> file:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2243(para)
msgid "In the <literal>[token]</literal> section, configure the UUID token provider and SQL driver:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2253(para)
msgid "Stop the service."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2256(para)
msgid "Clear expired tokens:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2260(para) ./doc/openstack-ops/ch_ops_upgrades.xml:2295(para) ./doc/openstack-ops/ch_ops_upgrades.xml:2367(para) ./doc/openstack-ops/ch_ops_upgrades.xml:2416(para) ./doc/openstack-ops/ch_ops_upgrades.xml:2449(para) ./doc/openstack-ops/ch_ops_upgrades.xml:2480(para)
msgid "Synchronize the database schema:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2264(para)
msgid "Start the service."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2268(title)
msgid "Image service"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2270(para)
msgid "Edit the <filename>/etc/glance/glance-api.conf</filename> file:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2274(para)
msgid "Move the following options from the <literal>[DEFAULT]</literal> section to the <literal>[glance_store]</literal> section:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2286(para)
msgid "These options must contain values."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2292(para) ./doc/openstack-ops/ch_ops_upgrades.xml:2364(para) ./doc/openstack-ops/ch_ops_upgrades.xml:2413(para) ./doc/openstack-ops/ch_ops_upgrades.xml:2446(para) ./doc/openstack-ops/ch_ops_upgrades.xml:2477(para)
msgid "Stop the services."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2303(title) ./doc/openstack-ops/ch_ops_upgrades.xml:2642(title)
msgid "Compute service"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2305(para) ./doc/openstack-ops/ch_ops_upgrades.xml:2644(para)
msgid "Edit the <filename>/etc/nova/nova.conf</filename> file:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2309(para) ./doc/openstack-ops/ch_ops_upgrades.xml:2648(para)
msgid "In the <literal>[DEFAULT]</literal> section, rename the <placeholder-1/> option to <placeholder-2/> and move it to the <literal>[glance]</literal> section."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2315(para) ./doc/openstack-ops/ch_ops_upgrades.xml:2654(para)
msgid "In the <literal>[DEFAULT]</literal> section, rename the following options and move them to the <literal>[neutron]</literal> section:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2321(th) ./doc/openstack-ops/ch_ops_upgrades.xml:2660(th)
msgid "Old options"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2322(th) ./doc/openstack-ops/ch_ops_upgrades.xml:2661(th)
msgid "New options"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2377(para) ./doc/openstack-ops/ch_ops_upgrades.xml:2547(para) ./doc/openstack-ops/ch_ops_upgrades.xml:2715(para)
msgid "Edit the <filename>/etc/neutron/neutron.conf</filename> file:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2381(para) ./doc/openstack-ops/ch_ops_upgrades.xml:2388(para) ./doc/openstack-ops/ch_ops_upgrades.xml:2468(para) ./doc/openstack-ops/ch_ops_upgrades.xml:2551(para) ./doc/openstack-ops/ch_ops_upgrades.xml:2558(para) ./doc/openstack-ops/ch_ops_upgrades.xml:2719(para) ./doc/openstack-ops/ch_ops_upgrades.xml:2726(para)
msgid "In the <literal>[DEFAULT]</literal> section, change the value of the <placeholder-1/> option:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2383(para) ./doc/openstack-ops/ch_ops_upgrades.xml:2553(para) ./doc/openstack-ops/ch_ops_upgrades.xml:2721(para)
msgid "<literal>neutron.openstack.common.rpc.impl_kombu</literal> becomes <literal>rabbit</literal>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2390(para) ./doc/openstack-ops/ch_ops_upgrades.xml:2560(para) ./doc/openstack-ops/ch_ops_upgrades.xml:2728(para)
msgid "<literal>neutron.plugins.ml2.plugin.Ml2Plugin</literal> becomes <literal>ml2</literal>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2395(para) ./doc/openstack-ops/ch_ops_upgrades.xml:2565(para) ./doc/openstack-ops/ch_ops_upgrades.xml:2733(para)
msgid "In the <literal>[DEFAULT]</literal> section, change the value or values of the <placeholder-1/> option to use short names. For example:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2398(para) ./doc/openstack-ops/ch_ops_upgrades.xml:2568(para) ./doc/openstack-ops/ch_ops_upgrades.xml:2736(para)
msgid "<literal>neutron.services.l3_router.l3_router_plugin.L3RouterPlugin</literal> becomes <literal>router</literal>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2403(para) ./doc/openstack-ops/ch_ops_upgrades.xml:2573(para) ./doc/openstack-ops/ch_ops_upgrades.xml:2741(para)
msgid "In the <literal>[DEFAULT]</literal> section, explicitly define a value for the <placeholder-1/> option. For example:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2425(title) ./doc/openstack-ops/ch_arch_cloud_controller.xml:660(title)
msgid "Dashboard"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2426(para)
msgid "In typical environments, updating the dashboard only requires restarting the services."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2429(para) ./doc/openstack-ops/ch_ops_upgrades.xml:2492(para) ./doc/openstack-ops/ch_ops_upgrades.xml:2589(para) ./doc/openstack-ops/ch_ops_upgrades.xml:2709(para) ./doc/openstack-ops/ch_ops_upgrades.xml:2757(para) ./doc/openstack-ops/ch_ops_upgrades.xml:2811(para)
msgid "Restart the services."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2433(title) ./doc/openstack-ops/ch_ops_upgrades.xml:2807(title)
msgid "Block Storage service"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2435(para)
msgid "Edit the <filename>/etc/cinder/cinder.conf</filename> file:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2439(para)
msgid "In the <literal>[DEFAULT]</literal> section, add the following option:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2457(title)
msgid "Orchestration service"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2459(para)
msgid "Create the <literal>heat_stack_owner</literal> role if it does not exist:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2464(para)
msgid "Edit the <filename>/etc/heat/heat.conf</filename> file:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2470(para)
msgid "<literal>heat.openstack.common.rpc.impl_kombu</literal> becomes <literal>rabbit</literal>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2488(title)
msgid "Telemetry service"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2489(para)
msgid "In typical environments, updating the Telemetry service only requires restarting the services."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2498(title)
msgid "Network nodes"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2501(para) ./doc/openstack-ops/ch_ops_upgrades.xml:2598(para)
msgid "Explicitly install the <literal>ipset</literal> package if your distribution does not install it as a dependency."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2504(para) ./doc/openstack-ops/ch_ops_upgrades.xml:2601(para) ./doc/openstack-ops/ch_ops_upgrades.xml:2766(para)
msgid "Depending on your specific configuration, upgrading all packages might restart or break services supplemental to your OpenStack environment. For example, if you use the TGT iSCSI framework for Block Storage volumes and the upgrade includes new packages for it, the package manager might restart the TGT iSCSI services and impact access to volumes."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2581(para) ./doc/openstack-ops/ch_ops_upgrades.xml:2749(para)
msgid "In the <literal>[database]</literal> section, remove any <placeholder-1/> options because the Networking service uses the message queue instead of direct access to the database."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2595(title) ./doc/openstack-ops/ch_ops_log_monitor.xml:109(para) ./doc/openstack-ops/ch_ops_log_monitor.xml:117(para)
msgid "Compute nodes"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2701(para)
msgid "In the <literal>[database]</literal> section, remove any <placeholder-1/> options because the Compute service uses the message queue instead of direct access to the database."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2763(title)
msgid "Storage nodes"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2808(para)
msgid "In typical environments, updating the Block Storage service only requires restarting the services."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2819(title)
msgid "Cleaning Up and Final Configuration File Updates"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2820(para)
msgid "On all distributions, you must perform some final tasks to complete the upgrade process.<indexterm class=\"singular\"><primary>upgrading</primary><secondary>final steps</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2826(para)
msgid "Decrease DHCP timeouts by modifying <filename>/etc/nova/nova.conf</filename> on the compute nodes back to the original value for your environment."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2829(para)
msgid "Update all <filename>.ini</filename> files to match passwords and pipelines as required for Havana in your environment."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2832(para)
msgid "After migration, users see different results from <placeholder-1/> and <placeholder-2/>. To ensure users see the same images in the list commands, edit the <filename>/etc/glance/policy.json</filename> and <filename>/etc/nova/policy.json</filename> files to contain <code>\"context_is_admin\": \"role:admin\"</code>, which limits access to private images for projects."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2839(para)
msgid "Thoroughly test the environment. Then, let your users know that their cloud is running normally again."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2845(title)
msgid "Rolling Back a Failed Upgrade"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2847(para)
msgid "Upgrades involve complex operations and can fail. Before attempting any upgrade, you should make a full database backup of your production data. As of Kilo, database downgrades are not supported, and the only method available to get back to a prior database version will be to restore from backup."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2853(para)
msgid "This section provides guidance for rolling back to a previous release of OpenStack. Although only tested on Ubuntu, other distributions follow a similar <phrase role=\"keep-together\">procedure</phrase>.<indexterm class=\"singular\"><primary>rollbacks</primary><secondary>process for</secondary></indexterm><indexterm class=\"singular\"><primary>upgrading</primary><secondary>rolling back failures</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2864(para)
msgid "In this section, we consider only the most immediate case: you have taken down production management services in preparation for an upgrade, completed part of the upgrade process, discovered one or more problems not encountered during testing, and you must roll back your environment to the original \"known good\" state. Make sure that you did not make any state changes after attempting the upgrade process: no new instances, networks, storage volumes, and so on. Any of these new resources will be in a zombie state after the databases are restored from backup."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2875(para)
msgid "Within this scope, you must complete these steps to successfully roll back your environment:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2880(para)
msgid "Roll back configuration files."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2884(para)
msgid "Restore databases from backup."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2888(para)
msgid "Roll back packages."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2892(para)
msgid "The upgrade instructions provided in earlier sections ensure that you have proper backups of your databases and configuration files. Read through this section carefully and verify that you have the requisite backups to restore. Rolling back upgrades is a tricky process because distributions tend to put much more effort into testing upgrades than downgrades. Broken downgrades often take significantly more effort to troubleshoot and, hopefully, resolve than broken upgrades. Only you can weigh the risks of trying to push a failed upgrade forward versus rolling it back. Generally, consider rolling back as the very last option."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2904(para)
msgid "The following steps described for Ubuntu have worked on at least one production environment, but they might not work for all environments."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2909(title)
msgid "To perform the rollback from Havana to Grizzly"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2912(para)
msgid "Stop all OpenStack services."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2916(para)
msgid "Copy contents of configuration backup directories <filename>/etc/&lt;service&gt;.grizzly</filename> that you created during the upgrade process back to <filename>/etc/&lt;service&gt;</filename>:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2923(para)
msgid "Restore databases from the <filename>grizzly-db-backup.sql</filename> backup file that you created with the <placeholder-1/> command during the upgrade process:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2930(para)
msgid "If you created this backup by using the <placeholder-1/> flag as instructed, you can proceed to the next step. If you omitted this flag, MySQL reverts all tables that existed in Grizzly, but does not drop any tables created during the database migration for Havana. In this case, you must manually determine which tables to drop, and drop them to prevent issues with your next upgrade <phrase role=\"keep-together\">attempt</phrase>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2942(para)
msgid "Downgrade OpenStack packages."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2945(para)
msgid "Downgrading packages is by far the most complicated step; it is highly dependent on the distribution and the overall administration of the system."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:2952(para)
msgid "Determine which OpenStack packages are installed on your system. Use the <placeholder-1/> command. Filter for OpenStack packages, filter again to omit packages explicitly marked in the <code>deinstall</code> state, and save the final output to a file. For example, the following command covers a controller node with keystone, glance, nova, neutron, and cinder:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:3000(para)
msgid "Depending on the type of server, the contents and order of your package list might vary from this example."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:3007(para)
msgid "You can determine the package versions available for reversion by using the <placeholder-1/> command. If you removed the Grizzly repositories, you must first reinstall them and run <placeholder-2/>:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:3038(para)
msgid "This tells us the currently installed version of the package, newest candidate version, and all versions along with the repository that contains each version. Look for the appropriate Grizzly version—<code>1:2013.1.4-0ubuntu1~cloud0</code> in this case. The process of manually picking through this list of packages is rather tedious and prone to errors. You should consider using the following script to help with this process:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:3090(para)
msgid "If you decide to continue this step manually, don't forget to change <code>neutron</code> to <code>quantum</code> where applicable."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:3097(para)
msgid "Use the <placeholder-1/> command to install specific versions of each package by specifying <code>&lt;package-name&gt;=&lt;version&gt;</code>. The script in the previous step conveniently created a list of <code>package=version</code> pairs for you:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:3106(para)
msgid "This step completes the rollback procedure. You should remove the Havana repository and run <placeholder-1/> to prevent accidental upgrades until you solve whatever issue caused you to roll back your environment."
msgstr ""
#. When image changes, this message will be marked fuzzy or untranslated for you.
#. It doesn't matter what you translate it to: it's not used at all.
#: ./doc/openstack-ops/preface_ops.xml:584(None)
msgid "@@image: 'http://git.openstack.org/cgit/openstack/operations-guide/plain/doc/openstack-ops/figures/osog_00in01.png'; md5=THIS FILE DOESN'T EXIST"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:12(title)
msgid "Preface"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:14(para)
msgid "OpenStack is an open source platform that lets you build an Infrastructure as a Service (IaaS) cloud that runs on commodity hardware."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:19(title)
msgid "Introduction to OpenStack"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:21(para)
msgid "OpenStack believes in open source, open design, open development, all in an open community that encourages participation by anyone. The long-term vision for OpenStack is to produce a ubiquitous open source cloud computing platform that meets the needs of public and private cloud providers regardless of size. OpenStack services control large pools of compute, storage, and networking resources throughout a data center."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:30(para)
msgid "The technology behind OpenStack consists of a series of interrelated projects delivering various components for a cloud infrastructure solution. Each service provides an open API so that all of these resources can be managed through a dashboard that gives administrators control while empowering users to provision resources through a web interface, a command-line client, or software development kits that support the API. Many OpenStack APIs are extensible, meaning you can keep compatibility with a core set of calls while providing access to more resources and innovating through API extensions. The OpenStack project is a global collaboration of developers and cloud computing technologists. The project produces an open standard cloud computing platform for both public and private clouds. By focusing on ease of implementation, massive scalability, a variety of rich features, and tremendous extensibility, the project aims to deliver a practical and reliable cloud solution for all types of organizations."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:50(title)
msgid "Getting Started with OpenStack"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:52(para)
msgid "As an open source project, one of the unique aspects of OpenStack is that it has many different levels at which you can begin to engage with it—you don't have to do everything yourself."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:58(title)
msgid "Using OpenStack"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:60(para)
msgid "You could ask, \"Do I even need to build a cloud?\" If you want to start using a compute or storage service by just swiping your credit card, you can go to eNovance, HP, Rackspace, or other organizations to start using their public OpenStack clouds. Using their OpenStack cloud resources is similar to accessing the publicly available Amazon Web Services Elastic Compute Cloud (EC2) or Simple Storage Solution (S3)."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:71(title)
msgid "Plug and Play OpenStack"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:73(para)
msgid "However, the enticing part of OpenStack might be to build your own private cloud, and there are several ways to accomplish this goal. Perhaps the simplest of all is an appliance-style solution. You purchase an appliance, unpack it, plug in the power and the network, and watch it transform into an OpenStack cloud with minimal additional configuration. Few, if any, other open source cloud products have such turnkey options. If a turnkey solution is interesting to you, take a look at Nebula One."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:82(para)
msgid "However, hardware choice is important for many applications, so if that applies to you, consider that there are several software distributions available that you can run on servers, storage, and network products of your choosing. Canonical (where OpenStack replaced Eucalyptus as the default cloud option in 2011), Red Hat, and SUSE offer enterprise OpenStack solutions and support. You may also want to take a look at some of the specialized distributions, such as those from Rackspace, Piston, SwiftStack, or Cloudscaling."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:91(para)
msgid "Alternatively, if you want someone to help guide you through the decisions about the underlying hardware or your applications, perhaps adding in a few features or integrating components along the way, consider contacting one of the system integrators with OpenStack experience, such as Mirantis or Metacloud."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:97(para)
msgid "If your preference is to build your own OpenStack expertise internally, a good way to kick-start that might be to attend or arrange a training session. The OpenStack Foundation recently launched a <link href=\"http://www.openstack.org/marketplace/training\">Training Marketplace</link> where you can look for nearby events. Also, the OpenStack community is <link href=\"https://wiki.openstack.org/wiki/Training-manuals\">working to produce</link> open source training materials."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:107(title)
msgid "Roll Your Own OpenStack"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:109(para)
msgid "However, this guide has a different audience—those seeking flexibility from the OpenStack framework by conducting do-it-yourself solutions."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:113(para)
msgid "OpenStack is designed for horizontal scalability, so you can easily add new compute, network, and storage resources to grow your cloud over time. In addition to the pervasiveness of massive OpenStack public clouds, many organizations, such as PayPal, Intel, and Comcast, build large-scale private clouds. OpenStack offers much more than a typical software package because it lets you integrate a number of different technologies to construct a cloud. This approach provides great flexibility, but the number of options might be daunting at first."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:126(title)
msgid "Who This Book Is For"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:127(para)
msgid "This book is for those of you starting to run OpenStack clouds as well as those of you who were handed an operational one and want to keep it running well. Perhaps you're on a DevOps team, perhaps you are a system administrator starting to dabble in the cloud, or maybe you want to get on the OpenStack cloud team at your company. This book is for all of you."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:133(para)
msgid "This guide assumes that you are familiar with a Linux distribution that supports OpenStack, SQL databases, and virtualization. You must be comfortable administering and configuring multiple Linux machines for networking. You must install and maintain an SQL database and occasionally run queries against it."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:139(para)
msgid "One of the most complex aspects of an OpenStack cloud is the networking configuration. You should be familiar with concepts such as DHCP, Linux bridges, VLANs, and iptables. You must also have access to a network hardware expert who can configure the switches and routers required in your OpenStack cloud."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:145(para)
msgid "Cloud computing is a quite advanced topic, and this book requires a lot of background knowledge. However, if you are fairly new to cloud computing, we recommend that you make use of the <xref linkend=\"openstack_glossary\"/> at the back of the book, as well as the online documentation for OpenStack and additional resources mentioned in this book in <xref linkend=\"recommended-reading\"/>."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:154(title)
msgid "Further Reading"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:155(para)
msgid "There are other books on the <link href=\"http://docs.openstack.org\">OpenStack documentation website</link> that can help you get the job done."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:160(title)
msgid "OpenStack Guides"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:162(term)
msgid "OpenStack Installation Guides"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:164(para)
msgid "Describes a manual installation process, as in, by hand, without automation, for multiple distributions based on a packaging system:"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:191(link)
msgid "OpenStack Configuration Reference"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:195(para)
msgid "Contains a reference listing of all configuration options for core and integrated OpenStack services by release version"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:206(para)
msgid "Contains how-to information for managing an OpenStack cloud as needed for your use cases, such as storage, computing, or software-defined-networking"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:214(link)
msgid "OpenStack High Availability Guide"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:218(para)
msgid "Describes potential strategies for making your OpenStack services and related controllers and data stores highly available"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:225(link)
msgid "OpenStack Security Guide"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:229(para)
msgid "Provides best practices and conceptual information about securing an OpenStack cloud"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:235(link)
msgid "Virtual Machine Image Guide"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:239(para)
msgid "Shows you how to obtain, create, and modify virtual machine images that are compatible with OpenStack"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:245(link)
msgid "OpenStack End User Guide"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:249(para)
msgid "Shows OpenStack end users how to create and manage resources in an OpenStack cloud with the OpenStack dashboard and OpenStack client commands"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:256(link)
msgid "OpenStack Admin User Guide"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:260(para)
msgid "Shows OpenStack administrators how to create and manage resources in an OpenStack cloud with the OpenStack dashboard and OpenStack client <phrase role=\"keep-together\">commands</phrase>"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:268(link)
msgid "OpenStack API Quick Start"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:272(para)
msgid "A brief overview of how to send REST API requests to endpoints for OpenStack services"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:281(title)
msgid "How This Book Is Organized"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:283(para)
msgid "This book is organized in two parts: the architecture decisions for designing OpenStack clouds and the repeated operations for running OpenStack clouds."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:287(emphasis)
msgid "Part I:"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:294(para)
msgid "Because of all the decisions the other chapters discuss, this chapter describes the decisions made for this particular book and much of the justification for the example architecture."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:305(para)
msgid "While this book doesn't describe installation, we do recommend automation for deployment and configuration, discussed in this chapter."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:315(para)
msgid "The cloud controller is an invention for the sake of consolidating and describing which services run on which nodes. This chapter discusses hardware and network considerations as well as how to design the cloud controller for performance and separation of services."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:328(para)
msgid "This chapter describes the compute nodes, which are dedicated to running virtual machines. Some hardware choices come into play here, as well as logging and networking descriptions."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:339(para)
msgid "This chapter discusses the growth of your cloud resources through scaling and segregation considerations."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:349(para)
msgid "As with other architecture decisions, storage concepts within OpenStack take a lot of consideration, and this chapter lays out the choices for you."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:359(para)
msgid "Your OpenStack cloud networking needs to fit into your existing networks while also enabling the best design for your users and administrators, and this chapter gives you in-depth information about networking decisions."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:367(emphasis)
msgid "Part II:"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:374(para)
msgid "This chapter is written to let you get your hands wrapped around your OpenStack cloud through command-line tools and understanding what is already set up in your cloud."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:385(para)
msgid "This chapter walks through user-enabling processes that all admins must face to manage users, give them quotas to parcel out resources, and so on."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:395(para)
msgid "This chapter shows you how to use OpenStack cloud resources and train your users as well."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:404(para)
msgid "This chapter goes into the common failures that the authors have seen while running clouds in production, including troubleshooting."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:414(para)
msgid "Because network troubleshooting is especially difficult with virtual resources, this chapter is chock-full of helpful tips and tricks for tracing network traffic, finding the root cause of networking failures, and debugging related services, such as DHCP and DNS."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:427(para)
msgid "This chapter shows you where OpenStack places logs and how to best read and manage logs for monitoring purposes."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:437(para)
msgid "This chapter describes what you need to back up within OpenStack as well as best practices for recovering backups."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:447(para)
msgid "For readers who need to get a specialized feature into OpenStack, this chapter describes how to use DevStack to write custom middleware or a custom scheduler to rebalance your resources."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:458(para)
msgid "Because OpenStack is so, well, open, this chapter is dedicated to helping you navigate the community and find out where you can help and where you can get help."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:468(para)
msgid "Much of OpenStack is driver-oriented, so you can plug in different solutions to the base set of services. This chapter describes some advanced configuration <phrase role=\"keep-together\">topics</phrase>."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:479(para)
msgid "This chapter provides upgrade information based on the architectures used in this book."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:487(emphasis)
msgid "Back matter:"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:494(para)
msgid "You can read a small selection of use cases from the OpenStack community with some technical details and further resources."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:504(para)
msgid "These are shared legendary tales of image disappearances, VM massacres, and crazy troubleshooting techniques to share those hard-learned lessons and <phrase role=\"keep-together\">wisdom</phrase>."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:515(para)
msgid "Read about how to track the OpenStack roadmap through the open and transparent development processes."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:524(para)
msgid "So many OpenStack resources are available online because of the fast-moving nature of the project, but there are also resources listed here that the authors found helpful while learning themselves."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:535(para)
msgid "A list of terms used in this book is included, which is a subset of the larger OpenStack glossary available online."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:544(title)
msgid "Why and How We Wrote This Book"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:546(para)
msgid "We wrote this book because we have deployed and maintained OpenStack clouds for at least a year, and wanted to be able to distribute this knowledge to others. After months of being the point people for an OpenStack cloud, we also wanted to have a document to hand to our system administrators so that they'd know how to operate the cloud on a daily basis—both reactively and pro-actively. We wanted to provide more detailed technical information about the decisions that deployers make along the way."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:558(para)
msgid "Design and create an architecture for your first nontrivial OpenStack cloud. After you read this guide, you'll know which questions to ask and how to organize your compute, networking, and storage resources and the associated software packages."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:566(para)
msgid "Perform the day-to-day tasks required to administer a cloud."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:556(para)
msgid "We wrote this book to help you:<placeholder-1/>"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:571(para)
msgid "We wrote this book in a book sprint, which is a facilitated, rapid development production method for books. For more information, see the <link href=\"http://www.booksprints.net/\">BookSprints site</link>. Your authors cobbled this book together in five days during February 2013, fueled by caffeine and the best takeout food that Austin, Texas, could <phrase role=\"keep-together\">offer</phrase>."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:579(para)
msgid "On the first day, we filled white boards with colorful sticky notes to start to shape this nebulous book about how to architect and operate clouds:<placeholder-1/>"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:589(para)
msgid "We wrote furiously from our own experiences and bounced ideas between each other. At regular intervals we reviewed the shape and organization of the book and further molded it, leading to what you see today."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:594(para)
msgid "The team includes:"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:598(term)
msgid "Tom Fifield"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:601(para)
msgid "After learning about scalability in computing from particle physics experiments, such as ATLAS at the Large Hadron Collider (LHC) at CERN, Tom worked on OpenStack clouds in production to support the Australian public research sector. Tom currently serves as an OpenStack community manager and works on OpenStack documentation in his spare time."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:612(term)
msgid "Diane Fleming"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:615(para)
msgid "Diane works on the OpenStack API documentation tirelessly. She helped out wherever she could on this project."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:622(term)
msgid "Anne Gentle"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:625(para)
msgid "Anne is the documentation coordinator for OpenStack and also served as an individual contributor to the Google Documentation Summit in 2011, working with the Open Street Maps team. She has worked on book sprints in the past, with FLOSS Manuals Adam Hyde facilitating. Anne lives in Austin, Texas."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:635(term)
msgid "Lorin Hochstein"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:638(para)
msgid "An academic turned software-developer-slash-operator, Lorin worked as the lead architect for Cloud Services at Nimbis Services, where he deploys OpenStack for technical computing applications. He has been working with OpenStack since the Cactus release. Previously, he worked on high-performance computing extensions for OpenStack at University of Southern California's Information Sciences Institute (USC-ISI)."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:650(term)
msgid "Adam Hyde"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:653(para)
msgid "Adam facilitated this book sprint. He also founded the books sprint methodology and is the most experienced book-sprint facilitator around. See <link href=\"http://www.booksprints.net\"/> for more information. Adam founded FLOSS Manuals—a community of some 3,000 individuals developing Free Manuals about Free Software. He is also the founder and project manager for Booktype, an open source project for writing, editing, and publishing books online and in print."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:666(term)
msgid "Jonathan Proulx"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:669(para)
msgid "Jon has been piloting an OpenStack cloud as a senior technical architect at the MIT Computer Science and Artificial Intelligence Lab for his researchers to have as much computing power as they need. He started contributing to OpenStack documentation and reviewing the documentation so that he could accelerate his learning."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:679(term)
msgid "Everett Toews"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:682(para)
msgid "Everett is a developer advocate at Rackspace making OpenStack and the Rackspace Cloud easy to use. Sometimes developer, sometimes advocate, and sometimes operator, he's built web applications, taught workshops, given presentations around the world, and deployed OpenStack for production use by academia and business."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:692(term)
msgid "Joe Topjian"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:695(para)
msgid "Joe has designed and deployed several clouds at Cybera, a nonprofit where they are building e-infrastructure to support entrepreneurs and local researchers in Alberta, Canada. He also actively maintains and operates these clouds as a systems architect, and his experiences have generated a wealth of troubleshooting skills for cloud environments."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:706(term)
msgid "OpenStack community members"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:709(para)
msgid "Many individual efforts keep a community book alive. Our community members updated content for this book year-round. Also, a year after the first sprint, Jon Proulx hosted a second two-day mini-sprint at MIT with the goal of updating the book for the latest release. Since the book's inception, more than 30 contributors have supported this book. We have a tool chain for reviews, continuous builds, and translations. Writers and developers continuously review patches, enter doc bugs, edit content, and fix doc bugs. We want to recognize their efforts!"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:721(para)
msgid "The following people have contributed to this book: Akihiro Motoki, Alejandro Avella, Alexandra Settle, Andreas Jaeger, Andy McCallum, Benjamin Stassart, Chandan Kumar, Chris Ricker, David Cramer, David Wittman, Denny Zhang, Emilien Macchi, Gauvain Pocentek, Ignacio Barrio, James E. Blair, Jay Clark, Jeff White, Jeremy Stanley, K Jonathan Harker, KATO Tomoyuki, Lana Brindley, Laura Alves, Lee Li, Lukasz Jernas, Mario B. Codeniera, Matthew Kassawara, Michael Still, Monty Taylor, Nermina Miller, Nigel Williams, Phil Hopkins, Russell Bryant, Sahid Orentino Ferdjaoui, Sandy Walsh, Sascha Peilicke, Sean M. Collins, Sergey Lukjanov, Shilla Saebi, Stephen Gordon, Summer Long, Uwe Stuehler, Vaibhav Bhatkar, Veronica Musso, Ying Chun \"Daisy\" Guo, Zhengguang Ou, and ZhiQiang Fan."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:742(title)
msgid "How to Contribute to This Book"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:744(para)
msgid "The genesis of this book was an in-person event, but now that the book is in your hands, we want you to contribute to it. OpenStack documentation follows the coding principles of iterative work, with bug logging, investigating, and fixing. We also store the source content on GitHub and invite collaborators through the OpenStack Gerrit installation, which offers reviews. For the O'Reilly edition of this book, we are using the company's Atlas system, which also stores source content on GitHub and enables collaboration among contributors."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:754(para)
msgid "Learn more about how to contribute to the OpenStack docs at <link href=\"https://wiki.openstack.org/wiki/Documentation/HowTo\">Documentation How To</link>."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:758(para)
msgid "If you find a bug and can't fix it or aren't sure it's really a doc bug, log a bug at <link href=\"https://bugs.launchpad.net/openstack-manuals\">OpenStack Manuals</link>. Tag the bug under <guilabel>Extra</guilabel> options with the <literal>ops-guide</literal> tag to indicate that the bug is in this guide. You can assign the bug to yourself if you know how to fix it. Also, a member of the OpenStack doc-core team can triage the doc bug."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:771(title)
msgid "Conventions Used in This Book"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:773(para)
msgid "The following typographical conventions are used in this book:"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:778(emphasis)
msgid "Italic"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:781(para)
msgid "Indicates new terms, URLs, email addresses, filenames, and file extensions."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:787(literal)
msgid "Constant width"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:790(para)
msgid "Used for program listings, as well as within paragraphs to refer to program elements such as variable or function names, databases, data types, environment variables, statements, and keywords."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:801(para)
msgid "Shows commands or other text that should be typed literally by the user."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:807(replaceable)
msgid "Constant width italic"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:810(para)
msgid "Shows text that should be replaced with user-supplied values or by values determined by context."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:816(term)
msgid "Command prompts"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:819(para)
msgid "Commands prefixed with the <literal>#</literal> prompt should be executed by the <literal>root</literal> user. These examples can also be executed using the <literal>sudo</literal> command, if available."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:824(para)
msgid "Commands prefixed with the <literal>$</literal> prompt can be executed by any user, including <literal>root</literal>."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:832(para)
msgid "This element signifies a tip or suggestion."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:836(para)
msgid "This element signifies a general note."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:840(para)
msgid "This element indicates a warning or caution."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:15(title)
msgid "Scaling"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:17(para)
msgid "Whereas traditional applications required larger hardware to scale (\"vertical scaling\"), cloud-based applications typically request more, discrete hardware (\"horizontal scaling\"). If your cloud is successful, eventually you must add resources to meet the increasing demand.<indexterm class=\"singular\"><primary>scaling</primary><secondary>vertical vs. horizontal</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:27(para)
msgid "To suit the cloud paradigm, OpenStack itself is designed to be horizontally scalable. Rather than switching to larger servers, you procure more servers and simply install identically configured services. Ideally, you scale out and load balance among groups of functionally identical services (for example, compute nodes or <literal>nova-api</literal> nodes), that communicate on a message bus."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:35(title)
msgid "The Starting Point"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:37(para)
msgid "Determining the scalability of your cloud and how to improve it is an exercise with many variables to balance. No one solution meets everyone's scalability goals. However, it is helpful to track a number of metrics. Since you can define virtual hardware templates, called \"flavors\" in OpenStack, you can start to make scaling decisions based on the flavors you'll provide. These templates define sizes for memory in RAM, root disk size, amount of ephemeral data disk space available, and number of cores for starters.<indexterm class=\"singular\"><primary>virtual machine (VM)</primary></indexterm><indexterm class=\"singular\"><primary>hardware</primary><secondary>virtual hardware</secondary></indexterm><indexterm class=\"singular\"><primary>flavor</primary></indexterm><indexterm class=\"singular\"><primary>scaling</primary><secondary>metrics for</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:58(para)
msgid "The default OpenStack flavors are shown in <xref linkend=\"os-flavors-table\"/>."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:64(caption)
msgid "OpenStack default flavors"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:80(th)
msgid "Virtual cores"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:82(th)
msgid "Memory"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:92(para)
msgid "m1.tiny"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:94(para) ./doc/openstack-ops/ch_arch_scaling.xml:106(para) ./doc/openstack-ops/ch_ops_maintenance.xml:766(para)
msgid "1"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:96(para)
msgid "512 MB"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:98(para)
msgid "1 GB"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:100(para)
msgid "0 GB"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:104(para)
msgid "m1.small"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:108(para)
msgid "2 GB"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:110(para) ./doc/openstack-ops/ch_arch_scaling.xml:122(para) ./doc/openstack-ops/ch_arch_scaling.xml:134(para) ./doc/openstack-ops/ch_arch_scaling.xml:146(para)
msgid "10 GB"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:112(para)
msgid "20 GB"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:116(para)
msgid "m1.medium"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:118(para) ./doc/openstack-ops/ch_ops_maintenance.xml:772(para)
msgid "2"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:120(para)
msgid "4 GB"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:124(para)
msgid "40 GB"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:128(para)
msgid "m1.large"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:130(para) ./doc/openstack-ops/ch_ops_maintenance.xml:785(para)
msgid "4"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:132(para)
msgid "8 GB"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:136(para)
msgid "80 GB"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:140(para)
msgid "m1.xlarge"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:142(para)
msgid "8"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:144(para)
msgid "16 GB"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:148(para)
msgid "160 GB"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:156(para)
msgid "The number of virtual machines (VMs) you expect to run, <code>((overcommit fraction </code>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:162(para)
msgid "How much storage is required <code>(flavor disk size </code>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:153(para)
msgid "The starting point for most is the core count of your cloud. By applying some ratios, you can gather information about: <placeholder-1/> You can use these ratios to determine how much additional infrastructure you need to support your cloud."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:168(para)
msgid "Here is an example using the ratios for gathering scalability information for the number of VMs expected as well as the storage needed. The following numbers support (200 / 2) 16 = 1600 VM instances and require 80 TB of storage for /var/lib/nova/instances:"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:175(para)
msgid "200 physical cores."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:179(para)
msgid "Most instances are size m1.medium (two virtual cores, 50 GB of storage)."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:184(para)
msgid "Default CPU overcommit ratio (<code>cpu_allocation_ratio</code> in nova.conf) of 16:1."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:189(para)
msgid "However, you need more than the core count alone to estimate the load that the API services, database servers, and queue servers are likely to encounter. You must also consider the usage patterns of your cloud."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:194(para)
msgid "As a specific example, compare a cloud that supports a managed web-hosting platform with one running integration tests for a development project that creates one VM per code commit. In the former, the heavy work of creating a VM happens only every few months, whereas the latter puts constant heavy load on the cloud controller. You must consider your average VM lifetime, as a larger number generally means less load on the cloud controller.<indexterm class=\"singular\"><primary>cloud controllers</primary><secondary>scalability and</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:206(para)
msgid "Aside from the creation and termination of VMs, you must consider the impact of users accessing the service—particularly on <literal>nova-api</literal> and its associated database. Listing instances garners a great deal of information and, given the frequency with which users run this operation, a cloud with a large number of users can increase the load significantly. This can occur even without their knowledge—leaving the OpenStack dashboard instances tab open in the browser refreshes the list of VMs every 30 seconds."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:215(para)
msgid "After you consider these factors, you can determine how many cloud controller cores you require. A typical eight core, 8 GB of RAM server is sufficient for up to a rack of compute nodes — given the above caveats."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:220(para)
msgid "You must also consider key hardware specifications for the performance of user VMs, as well as budget and performance needs, including storage performance (spindles/core), memory availability (RAM/core), network bandwidth<indexterm class=\"singular\"><primary>bandwidth</primary><secondary>hardware specifications and</secondary></indexterm> (Gbps/core), and overall CPU performance (CPU/core)."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:230(para)
msgid "For a discussion of metric tracking, including how to extract metrics from your cloud, see <xref linkend=\"logging_monitoring\"/>."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:237(title)
msgid "Adding Cloud Controller Nodes"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:239(para)
msgid "You can facilitate the horizontal expansion of your cloud by adding nodes. Adding compute nodes is straightforward—they are easily picked up by the existing installation. However, you must consider some important points when you design your cluster to be highly available.<indexterm class=\"singular\"><primary>compute nodes</primary><secondary>adding</secondary></indexterm><indexterm class=\"singular\"><primary>high availability</primary></indexterm><indexterm class=\"singular\"><primary>configuration options</primary><secondary>high availability</secondary></indexterm><indexterm class=\"singular\"><primary>cloud controller nodes</primary><secondary>adding</secondary></indexterm><indexterm class=\"singular\"><primary>scaling</primary><secondary>adding cloud controller nodes</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:263(para)
msgid "Recall that a cloud controller node runs several different services. You can install services that communicate only using the message queue internally—<code>nova-scheduler</code> and <code>nova-console</code>—on a new server for expansion. However, other integral parts require more care."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:269(para)
msgid "You should load balance user-facing services such as dashboard, <code>nova-api</code>, or the Object Storage proxy. Use any standard HTTP load-balancing method (DNS round robin, hardware load balancer, or software such as Pound or HAProxy). One caveat with dashboard is the VNC proxy, which uses the WebSocket protocol—something that an L7 load balancer might struggle with. See also <link href=\"http://docs.openstack.org/developer/horizon/topics/deployment.html#session-storage\" title=\"Horizon session storage\">Horizon session storage</link>."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:279(para)
msgid "You can configure some services, such as <code>nova-api</code> and <code>glance-api</code>, to use multiple processes by changing a flag in their configuration file—allowing them to share work between multiple cores on the one machine."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:285(para)
msgid "Several options are available for MySQL load balancing, and the supported AMQP brokers have built-in clustering support. Information on how to configure these and many of the other services can be found in <xref linkend=\"operations\" xrefstyle=\"part-num-title\"/>.<indexterm class=\"singular\"><primary>Advanced Message Queuing Protocol (AMQP)</primary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:296(title)
msgid "Segregating Your Cloud"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:298(para)
msgid "When you want to offer users different regions to provide legal considerations for data storage, redundancy across earthquake fault lines, or for low-latency API calls, you segregate your cloud. Use one of the following OpenStack methods to segregate your cloud: <emphasis>cells</emphasis>, <emphasis>regions</emphasis>, <emphasis>availability zones</emphasis>, or <emphasis>host aggregates</emphasis>.<indexterm class=\"singular\"><primary>segregation methods</primary></indexterm><indexterm class=\"singular\"><primary>scaling</primary><secondary>cloud segregation</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:312(para)
msgid "Each method provides different functionality and can be best divided into two groups:"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:317(para)
msgid "Cells and regions, which segregate an entire cloud and result in running separate Compute deployments."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:322(para)
msgid "<glossterm baseform=\"Availability zone\">Availability zones</glossterm> and host aggregates, which merely divide a single Compute deployment."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:328(para)
msgid "<xref linkend=\"segragation_methods\"/> provides a comparison view of each segregation method currently provided by OpenStack Compute.<indexterm class=\"singular\"><primary>endpoints</primary><secondary>API endpoint</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:337(caption)
msgid "OpenStack segregation methods"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:343(th)
msgid "Cells"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:345(th)
msgid "Regions"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:347(th)
msgid "Availability zones"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:349(th)
msgid "Host aggregates"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:355(emphasis)
msgid "Use when you need"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:358(para)
msgid "A single <glossterm>API endpoint</glossterm> for compute, or you require a second level of scheduling."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:361(para)
msgid "Discrete regions with separate API endpoints and no coordination between regions."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:364(para)
msgid "Logical separation within your nova deployment for physical isolation or redundancy."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:367(para)
msgid "To schedule a group of hosts with common features."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:372(emphasis)
msgid "Example"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:374(para)
msgid "A cloud with multiple sites where you can schedule VMs \"anywhere\" or on a particular site."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:377(para)
msgid "A cloud with multiple sites, where you schedule VMs to a particular site and you want a shared infrastructure."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:380(para)
msgid "A single-site cloud with equipment fed by separate power supplies."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:383(para)
msgid "Scheduling to hosts with trusted hardware support."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:388(emphasis)
msgid "Overhead"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:390(para)
msgid "Considered experimental."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:390(para)
msgid "A new service, nova-cells."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:391(para)
msgid "Each cell has a full nova installation except nova-api."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:394(para)
msgid "A different API endpoint for every region."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:395(para)
msgid "Each region has a full nova installation."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:398(para) ./doc/openstack-ops/ch_arch_scaling.xml:400(para)
msgid "Configuration changes to <filename>nova.conf</filename>."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:404(emphasis)
msgid "Shared services"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:407(para) ./doc/openstack-ops/ch_arch_scaling.xml:409(para) ./doc/openstack-ops/ch_arch_scaling.xml:411(para) ./doc/openstack-ops/ch_arch_scaling.xml:413(para)
msgid "Keystone"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:407(code)
msgid "nova-api"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:411(para) ./doc/openstack-ops/ch_arch_scaling.xml:413(para)
msgid "All nova services"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:419(title)
msgid "Cells and Regions"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:421(para)
msgid "OpenStack Compute cells are designed to allow running the cloud in a distributed fashion without having to use more complicated technologies, or be invasive to existing nova installations. Hosts in a cloud are partitioned into groups called <emphasis>cells</emphasis>. Cells are configured in a tree. The top-level cell (\"API cell\") has a host that runs the <code>nova-api</code> service, but no <code>nova-compute</code> services. Each child cell runs all of the other typical <code>nova-*</code> services found in a regular installation, except for the <code>nova-api</code> service. Each cell has its own message queue and database service and also runs <code>nova-cells</code>, which manages the communication between the API cell and child cells.<indexterm class=\"singular\"><primary>scaling</primary><secondary>cells and regions</secondary></indexterm><indexterm class=\"singular\"><primary>cells</primary><secondary>cloud segregation</secondary></indexterm><indexterm class=\"singular\"><primary>region</primary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:444(para)
msgid "This allows for a single API server being used to control access to multiple cloud installations. Introducing a second level of scheduling (the cell selection), in addition to the regular <code>nova-scheduler</code> selection of hosts, provides greater flexibility to control where virtual machines are run."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:450(para)
msgid "Unlike having a single API endpoint, regions have a separate API endpoint per installation, allowing for a more discrete separation. Users wanting to run instances across sites have to explicitly select a region. However, the additional complexity of a running a new service is not required."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:456(para)
msgid "The OpenStack dashboard (horizon) can be configured to use multiple regions. This can be configured through the <placeholder-1/> parameter."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:461(title)
msgid "Availability Zones and Host Aggregates"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:463(para)
msgid "You can use availability zones, host aggregates, or both to partition a nova deployment.<indexterm class=\"singular\"><primary>scaling</primary><secondary>availability zones</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:470(para)
msgid "Availability zones are implemented through and configured in a similar way to host aggregates."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:473(para)
msgid "However, you use them for different reasons."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:476(title)
msgid "Availability zone"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:478(para)
msgid "This enables you to arrange OpenStack compute hosts into logical groups and provides a form of physical isolation and redundancy from other availability zones, such as by using a separate power supply or network equipment.<indexterm class=\"singular\"><primary>availability zone</primary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:485(para)
msgid "You define the availability zone in which a specified compute host resides locally on each server. An availability zone is commonly used to identify a set of servers that have a common attribute. For instance, if some of the racks in your data center are on a separate power source, you can put servers in those racks in their own availability zone. Availability zones can also help separate different classes of hardware."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:493(para)
msgid "When users provision resources, they can specify from which availability zone they want their instance to be built. This allows cloud consumers to ensure that their application resources are spread across disparate machines to achieve high availability in the event of hardware failure."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:501(title)
msgid "Host aggregates zone"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:503(para)
msgid "This enables you to partition OpenStack Compute deployments into logical groups for load balancing and instance distribution. You can use host aggregates to further partition an availability zone. For example, you might use host aggregates to partition an availability zone into groups of hosts that either share common resources, such as storage and network, or have a special property, such as trusted computing hardware.<indexterm class=\"singular\"><primary>scaling</primary><secondary>host aggregate</secondary></indexterm><indexterm class=\"singular\"><primary>host aggregate</primary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:517(para)
msgid "A common use of host aggregates is to provide information for use with the <literal>nova-scheduler</literal>. For example, you might use a host aggregate to group a set of hosts that share specific flavors or images."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:522(para)
msgid "The general case for this is setting key-value pairs in the aggregate metadata and matching key-value pairs in flavor's <parameter>extra_specs</parameter> metadata. The <parameter>AggregateInstanceExtraSpecsFilter</parameter> in the filter scheduler will enforce that instances be scheduled only on hosts in aggregates that define the same key to the same value."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:528(para)
msgid "An advanced use of this general concept allows different flavor types to run with different CPU and RAM allocation ratios so that high-intensity computing loads and low-intensity development and testing systems can share the same cloud without either starving the high-use systems or wasting resources on low-utilization systems. This works by setting <parameter>metadata</parameter> in your host aggregates and matching <parameter>extra_specs</parameter> in your flavor types."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:537(para)
msgid "The first step is setting the aggregate metadata keys <parameter>cpu_allocation_ratio</parameter> and <parameter>ram_allocation_ratio</parameter> to a floating-point value. The filter schedulers <parameter>AggregateCoreFilter</parameter> and <parameter>AggregateRamFilter</parameter> will use those values rather than the global defaults in <filename>nova.conf</filename> when scheduling to hosts in the aggregate. It is important to be cautious when using this feature, since each host can be in multiple aggregates but should have only one allocation ratio for each resources. It is up to you to avoid putting a host in multiple aggregates that define different values for the same <phrase role=\"keep-together\">resource</phrase>."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:551(para)
msgid "This is the first half of the equation. To get flavor types that are guaranteed a particular ratio, you must set the <parameter>extra_specs</parameter> in the flavor type to the key-value pair you want to match in the aggregate. For example, if you define <parameter>extra_specs</parameter><parameter>cpu_allocation_ratio</parameter> to \"1.0\", then instances of that type will run in aggregates only where the metadata key <parameter>cpu_allocation_ratio</parameter> is also defined as \"1.0.\" In practice, it is better to define an additional key-value pair in the aggregate metadata to match on rather than match directly on <parameter>cpu_allocation_ratio</parameter> or <parameter>core_allocation_ratio</parameter>. This allows better abstraction. For example, by defining a key <parameter>overcommit</parameter> and setting a value of \"high,\" \"medium,\" or \"low,\" you could then tune the numeric allocation ratios in the aggregates without also needing to change all flavor types relating to them."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:570(para)
msgid "Previously, all services had an availability zone. Currently, only the <literal>nova-compute</literal> service has its own availability zone. Services such as <literal>nova-scheduler</literal>, <literal>nova-network</literal>, and <literal>nova-conductor</literal> have always spanned all availability zones."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:581(para)
msgid "nova host-list (os-hosts)"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:585(para)
msgid "euca-describe-availability-zones verbose"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:589(para)
msgid "<literal>nova-manage</literal> service list"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:577(para)
msgid "When you run any of the following operations, the services appear in their own internal availability zone (CONF.internal_service_availability_zone): <placeholder-1/>The internal availability zone is hidden in euca-describe-availability_zones (nonverbose)."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:594(para)
msgid "CONF.node_availability_zone has been renamed to CONF.default_availability_zone and is used only by the <literal>nova-api</literal> and <literal>nova-scheduler</literal> services."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:599(para)
msgid "CONF.node_availability_zone still works but is deprecated."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:607(title)
msgid "Scalable Hardware"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:609(para)
msgid "While several resources already exist to help with deploying and installing OpenStack, it's very important to make sure that you have your deployment planned out ahead of time. This guide presumes that you have at least set aside a rack for the OpenStack cloud but also offers suggestions for when and what to scale."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:616(title)
msgid "Hardware Procurement"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:618(para)
msgid "“The Cloud” has been described as a volatile environment where servers can be created and terminated at will. While this may be true, it does not mean that your servers must be volatile. Ensuring that your clouds hardware is stable and configured correctly means that your cloud environment remains up and running. Basically, put effort into creating a stable hardware environment so that you can host a cloud that users may treat as unstable and volatile.<indexterm class=\"singular\"><primary>servers</primary><secondary>avoiding volatility in</secondary></indexterm><indexterm class=\"singular\"><primary>hardware</primary><secondary>scalability planning</secondary></indexterm><indexterm class=\"singular\"><primary>scaling</primary><secondary>hardware procurement</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:638(para)
msgid "OpenStack can be deployed on any hardware supported by an OpenStack-compatible Linux distribution."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:641(para)
msgid "Hardware does not have to be consistent, but it should at least have the same type of CPU to support instance migration."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:644(para)
msgid "The typical hardware recommended for use with OpenStack is the standard value-for-money offerings that most hardware vendors stock. It should be straightforward to divide your procurement into building blocks such as \"compute,\" \"object storage,\" and \"cloud controller,\" and request as many of these as you need. Alternatively, should you be unable to spend more, if you have existing servers—provided they meet your performance requirements and virtualization technology—they are quite likely to be able to support OpenStack."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:655(title)
msgid "Capacity Planning"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:657(para)
msgid "OpenStack is designed to increase in size in a straightforward manner. Taking into account the considerations that we've mentioned in this chapter—particularly on the sizing of the cloud controller—it should be possible to procure additional compute or object storage nodes as needed. New nodes do not need to be the same specification, or even vendor, as existing nodes.<indexterm class=\"singular\"><primary>capability</primary><secondary>scaling and</secondary></indexterm><indexterm class=\"singular\"><primary>weight</primary></indexterm><indexterm class=\"singular\"><primary>capacity planning</primary></indexterm><indexterm class=\"singular\"><primary>scaling</primary><secondary>capacity planning</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:676(para)
msgid "For compute nodes, <code>nova-scheduler</code> will take care of differences in sizing having to do with core count and RAM amounts; however, you should consider that the user experience changes with differing CPU speeds. When adding object storage nodes, a <glossterm>weight</glossterm> should be specified that reflects the <glossterm>capability</glossterm> of the node."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:683(para)
msgid "Monitoring the resource usage and user growth will enable you to know when to procure. <xref linkend=\"logging_monitoring\"/> details some useful metrics."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:689(title)
msgid "Burn-in Testing"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:691(para)
msgid "The chances of failure for the servers hardware are high at the start and the end of its life. As a result, dealing with hardware failures while in production can be avoided by appropriate burn-in testing to attempt to trigger the early-stage failures. The general principle is to stress the hardware to its limits. Examples of burn-in tests include running a CPU or disk benchmark for several days.<indexterm class=\"singular\"><primary>testing</primary><secondary>burn-in testing</secondary></indexterm><indexterm class=\"singular\"><primary>troubleshooting</primary><secondary>burn-in testing</secondary></indexterm><indexterm class=\"singular\"><primary>burn-in testing</primary></indexterm><indexterm class=\"singular\"><primary>scaling</primary><secondary>burn-in testing</secondary></indexterm>"
msgstr ""
#. When image changes, this message will be marked fuzzy or untranslated for you.
#. It doesn't matter what you translate it to: it's not used at all.
#: ./doc/openstack-ops/ch_arch_storage.xml:436(None) ./doc/openstack-ops/ch_arch_storage.xml:453(None) ./doc/openstack-ops/ch_arch_storage.xml:466(None) ./doc/openstack-ops/ch_arch_storage.xml:473(None) ./doc/openstack-ops/ch_arch_storage.xml:486(None) ./doc/openstack-ops/ch_arch_storage.xml:493(None) ./doc/openstack-ops/ch_arch_storage.xml:500(None) ./doc/openstack-ops/ch_arch_storage.xml:513(None) ./doc/openstack-ops/ch_arch_storage.xml:520(None) ./doc/openstack-ops/ch_arch_storage.xml:533(None) ./doc/openstack-ops/ch_arch_storage.xml:544(None) ./doc/openstack-ops/ch_arch_storage.xml:550(None)
msgid "@@image: 'http://git.openstack.org/cgit/openstack/operations-guide/plain/doc/openstack-ops/figures/Check_mark_23x20_02.png'; md5=THIS FILE DOESN'T EXIST"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:12(title)
msgid "Storage Decisions"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:14(para)
msgid "Storage is found in many parts of the OpenStack stack, and the differing types can cause confusion to even experienced cloud engineers. This section focuses on persistent storage options you can configure with your cloud. It's important to understand the distinction between <glossterm baseform=\"ephemeral volume\"> ephemeral</glossterm> storage and <glossterm baseform=\"persistent volume\"> persistent</glossterm> storage."
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:22(title)
msgid "Ephemeral Storage"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:24(para)
msgid "If you deploy only the OpenStack Compute Service (nova), your users do not have access to any form of persistent storage by default. The disks associated with VMs are \"ephemeral,\" meaning that (from the user's point of view) they effectively disappear when a virtual machine is terminated.<indexterm class=\"singular\"><primary>storage</primary><secondary>ephemeral</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:36(title)
msgid "Persistent Storage"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:38(para)
msgid "Persistent storage means that the storage resource outlives any other resource and is always available, regardless of the state of a running instance."
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:42(para)
msgid "Today, OpenStack clouds explicitly support two types of persistent storage: <emphasis>object storage</emphasis> and <emphasis>block storage</emphasis>.<indexterm class=\"singular\"><primary>swift</primary><secondary>Object Storage API</secondary></indexterm><indexterm class=\"singular\"><primary>persistent storage</primary></indexterm><indexterm class=\"singular\"><primary>objects</primary><secondary>persistent storage of</secondary></indexterm><indexterm class=\"singular\"><primary>Object Storage</primary><secondary>Object Storage API</secondary></indexterm><indexterm class=\"singular\"><primary>storage</primary><secondary>object storage</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:65(title) ./doc/openstack-ops/ch_ops_backup_recovery.xml:211(title)
msgid "Object Storage"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:67(para)
msgid "With object storage, users access binary objects through a REST API. You may be familiar with Amazon S3, which is a well-known example of an object storage system. Object storage is implemented in OpenStack by the OpenStack Object Storage (swift) project. If your intended users need to archive or manage large datasets, you want to provide them with object storage. In addition, OpenStack can store your virtual <phrase role=\"keep-together\">machine</phrase> (VM) images inside of an object storage system, as an alternative to storing the images on a file system.<indexterm class=\"singular\"><primary>binary</primary><secondary>binary objects</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:81(para)
msgid "OpenStack Object Storage provides a highly scalable, highly available storage solution by relaxing some of the constraints of traditional file systems. In designing and procuring for such a cluster, it is important to understand some key concepts about its operation. Essentially, this type of storage is built on the idea that all storage hardware fails, at every level, at some point. Infrequently encountered failures that would hamstring other storage systems, such as issues taking down RAID cards or entire servers, are handled gracefully with OpenStack Object Storage.<indexterm class=\"singular\"><primary>scaling</primary><secondary>Object Storage and</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:95(para)
msgid "A good document describing the Object Storage architecture is found within <link href=\"http://docs.openstack.org/developer/swift/overview_architecture.html\" title=\"OpenStack wiki\">the developer documentation</link>—read this first. Once you understand the architecture, you should know what a proxy server does and how zones work. However, some important points are often missed at first glance."
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:102(para)
msgid "When designing your cluster, you must consider durability and availability. Understand that the predominant source of these is the spread and placement of your data, rather than the reliability of the hardware. Consider the default value of the number of replicas, which is three. This means that before an object is marked as having been written, at least two copies exist—in case a single server fails to write, the third copy may or may not yet exist when the write operation initially returns. Altering this number increases the robustness of your data, but reduces the amount of storage you have available. Next, look at the placement of your servers. Consider spreading them widely throughout your data center's network and power-failure zones. Is a zone a rack, a server, or a disk?"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:128(para)
msgid "Among <glossterm>object</glossterm>, <glossterm>container</glossterm>, and <glossterm>account server</glossterm>s"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:134(para)
msgid "Between those servers and the proxies"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:138(para)
msgid "Between the proxies and your users"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:115(para)
msgid "Object Storage's network patterns might seem unfamiliar at first. Consider these main traffic flows: <indexterm class=\"singular\"><primary>objects</primary><secondary>storage decisions and</secondary></indexterm><indexterm class=\"singular\"><primary>containers</primary><secondary>storage decisions and</secondary></indexterm><indexterm class=\"singular\"><primary>account server</primary></indexterm><placeholder-1/>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:142(para)
msgid "Object Storage is very \"chatty\" among servers hosting data—even a small cluster does megabytes/second of traffic, which is predominantly, “Do you have the object?”/“Yes I have the object!” Of course, if the answer to the aforementioned question is negative or the request times out, replication of the object begins."
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:148(para)
msgid "Consider the scenario where an entire server fails and 24 TB of data needs to be transferred \"immediately\" to remain at three copies—this can put significant load on the network."
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:154(para)
msgid "Another fact that's often forgotten is that when a new file is being uploaded, the proxy server must write out as many streams as there are replicas—giving a multiple of network traffic. For a three-replica cluster, 10 Gbps in means 30 Gbps out. Combining this with the previous high bandwidth<indexterm class=\"singular\"><primary>bandwidth</primary><secondary>private vs. public network recommendations</secondary></indexterm> demands of replication is what results in the recommendation that your private network be of significantly higher bandwidth than your public need be. Oh, and OpenStack Object Storage communicates internally with unencrypted, unauthenticated rsync for performance—you do want the private network to be private."
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:168(para)
msgid "The remaining point on bandwidth is the public-facing portion. The <literal>swift-proxy</literal> service is stateless, which means that you can easily add more and use HTTP load-balancing methods to share bandwidth and availability between them."
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:173(para)
msgid "More proxies means more bandwidth, if your storage can keep up."
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:180(para)
msgid "Block storage (sometimes referred to as volume storage) provides users with access to block-storage devices. Users interact with block storage by attaching volumes to their running VM instances.<indexterm class=\"singular\"><primary>volume storage</primary></indexterm><indexterm class=\"singular\"><primary>block storage</primary></indexterm><indexterm class=\"singular\"><primary>storage</primary><secondary>block storage</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:193(para)
msgid "These volumes are persistent: they can be detached from one instance and re-attached to another, and the data remains intact. Block storage is implemented in OpenStack by the OpenStack Block Storage (cinder) project, which supports multiple backends in the form of drivers. Your choice of a storage backend must be supported by a Block Storage driver."
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:200(para)
msgid "Most block storage drivers allow the instance to have direct access to the underlying storage hardware's block device. This helps increase the overall read/write IO. However, support for utilizing files as volumes is also well established, with full support for NFS, GlusterFS and others."
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:206(para)
msgid "These drivers work a little differently than a traditional \"block\" storage driver. On an NFS or GlusterFS file system, a single file is created and then mapped as a \"virtual\" volume into the instance. This mapping/translation is similar to how OpenStack utilizes QEMU's file-based virtual machines stored in <code>/var/lib/nova/instances</code>."
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:216(title)
msgid "OpenStack Storage Concepts"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:218(para)
msgid "<xref linkend=\"openstack_storage\"/> explains the different storage concepts provided by OpenStack.<indexterm class=\"singular\"><primary>block device</primary></indexterm><indexterm class=\"singular\"><primary>storage</primary><secondary>overview of concepts</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:228(caption)
msgid "OpenStack storage"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:234(th)
msgid "Ephemeral storage"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:236(th)
msgid "Block storage"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:244(para)
msgid "Used to…"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:246(para)
msgid "Run operating system and scratch space"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:248(para)
msgid "Add additional persistent storage to a virtual machine (VM)"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:251(para)
msgid "Store data, including VM images"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:255(para)
msgid "Accessed through…"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:257(para)
msgid "A file system"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:259(para)
msgid "A <glossterm>block device</glossterm> that can be partitioned, formatted, and mounted (such as, /dev/vdc)"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:262(para)
msgid "The REST API"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:266(para)
msgid "Accessible from…"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:268(para) ./doc/openstack-ops/ch_arch_storage.xml:270(para)
msgid "Within a VM"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:272(para)
msgid "Anywhere"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:276(para)
msgid "Managed by…"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:278(para)
msgid "OpenStack Compute (nova)"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:280(para)
msgid "OpenStack Block Storage (cinder)"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:286(para)
msgid "Persists until…"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:288(para)
msgid "VM is terminated"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:290(para) ./doc/openstack-ops/ch_arch_storage.xml:292(para)
msgid "Deleted by user"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:296(para)
msgid "Sizing determined by…"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:298(para)
msgid "Administrator configuration of size settings, known as <emphasis>flavors</emphasis>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:301(para)
msgid "User specification in initial request"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:303(para)
msgid "Amount of available physical storage"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:307(para)
msgid "Example of typical usage…"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:309(para)
msgid "10 GB first disk, 30 GB second disk"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:311(para)
msgid "1 TB disk"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:313(para)
msgid "10s of TBs of dataset storage"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:319(title)
msgid "File-level Storage (for Live Migration)"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:321(para)
msgid "With file-level storage, users access stored data using the operating system's file system interface. Most users, if they have used a network storage solution before, have encountered this form of networked storage. In the Unix world, the most common form of this is NFS. In the Windows world, the most common form is called CIFS (previously, SMB).<indexterm class=\"singular\"><primary>migration</primary></indexterm><indexterm class=\"singular\"><primary>live migration</primary></indexterm><indexterm class=\"singular\"><primary>storage</primary><secondary>file-level</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:336(para)
msgid "OpenStack clouds do not present file-level storage to end users. However, it is important to consider file-level storage for storing instances under <code>/var/lib/nova/instances</code> when designing your cloud, since you must have a shared file system if you want to support live migration."
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:345(title)
msgid "Choosing Storage Backends"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:347(para)
msgid "Users will indicate different needs for their cloud use cases. Some may need fast access to many objects that do not change often, or want to set a time-to-live (TTL) value on a file. Others may access only storage that is mounted with the file system itself, but want it to be replicated instantly when starting a new instance. For other systems, ephemeral storage—storage that is released when a VM attached to it is shut down— is the preferred way. When you select <glossterm>storage backend</glossterm>s, <indexterm class=\"singular\"><primary>storage</primary><secondary>choosing backends</secondary></indexterm><indexterm class=\"singular\"><primary>storage backend</primary></indexterm><indexterm class=\"singular\"><primary>backend interactions</primary><secondary>store</secondary></indexterm>ask the following questions on behalf of your users:"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:368(para)
msgid "Do my users need block storage?"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:372(para)
msgid "Do my users need object storage?"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:376(para)
msgid "Do I need to support live migration?"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:380(para)
msgid "Should my persistent storage drives be contained in my compute nodes, or should I use external storage?"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:385(para)
msgid "What is the platter count I can achieve? Do more spindles result in better I/O despite network access?"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:390(para)
msgid "Which one results in the best cost-performance scenario I'm aiming for?"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:395(para)
msgid "How do I manage the storage operationally?"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:399(para)
msgid "How redundant and distributed is the storage? What happens if a storage node fails? To what extent can it mitigate my data-loss disaster scenarios?"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:405(para)
msgid "To deploy your storage by using only commodity hardware, you can use a number of open-source packages, as shown in <xref linkend=\"storage_solutions\"/>."
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:410(caption)
msgid "Persistent file-based storage support"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:414(th) ./doc/openstack-ops/ch_arch_storage.xml:442(para) ./doc/openstack-ops/ch_arch_storage.xml:448(para) ./doc/openstack-ops/ch_arch_storage.xml:457(para) ./doc/openstack-ops/ch_arch_storage.xml:528(para) ./doc/openstack-ops/ch_arch_storage.xml:537(para)
msgid " "
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:416(th)
msgid "Object"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:418(th)
msgid "Block"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:421(para)
msgid "This list of open source file-level shared storage solutions is not exhaustive; other open source solutions exist (MooseFS). Your organization may already have deployed a file-level shared storage solution that you can use."
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:420(th)
msgid "File-level<placeholder-1/>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:431(para)
msgid "Swift"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:446(para)
msgid "LVM"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:461(para)
msgid "Ceph"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:477(para)
msgid "Experimental"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:481(para)
msgid "Gluster"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:506(para)
msgid "NFS"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:526(para)
msgid "ZFS"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:540(para)
msgid "Sheepdog"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:559(title)
msgid "Storage Driver Support"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:561(para)
msgid "In addition to the open source technologies, there are a number of proprietary solutions that are officially supported by OpenStack Block Storage.<indexterm class=\"singular\"><primary>storage</primary><secondary>storage driver support</secondary></indexterm> They are offered by the following vendors:"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:571(para)
msgid "IBM (Storwize family/SVC, XIV)"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:575(para)
msgid "NetApp"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:579(para)
msgid "Nexenta"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:583(para)
msgid "SolidFire"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:587(para)
msgid "You can find a matrix of the functionality provided by all of the supported Block Storage drivers on the <link href=\"https://wiki.openstack.org/wiki/CinderSupportMatrix\" title=\"OpenStack wiki\">OpenStack wiki</link>."
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:593(para)
msgid "Also, you need to decide whether you want to support object storage in your cloud. The two common use cases for providing object storage in a compute cloud are:"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:599(para)
msgid "To provide users with a persistent storage mechanism"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:603(para)
msgid "As a scalable, reliable data store for virtual machine images"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:609(title)
msgid "Commodity Storage Backend Technologies"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:611(para)
msgid "This section provides a high-level overview of the differences among the different commodity storage backend technologies. Depending on your cloud user's needs, you can implement one or many of these technologies in different combinations:<indexterm class=\"singular\"><primary>storage</primary><secondary>commodity storage</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:625(para)
msgid "The official OpenStack Object Store implementation. It is a mature technology that has been used for several years in production by Rackspace as the technology behind Rackspace Cloud Files. As it is highly scalable, it is well-suited to managing petabytes of storage. OpenStack Object Storage's advantages are better <phrase role=\"keep-together\">integration</phrase> with OpenStack (integrates with OpenStack Identity, works with the OpenStack dashboard interface) and better support for multiple data center deployment through support of asynchronous eventual consistency replication."
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:636(para)
msgid "Therefore, if you eventually plan on distributing your storage cluster across multiple data centers, if you need unified accounts for your users for both compute and object storage, or if you want to control your object storage with the OpenStack dashboard, you should consider OpenStack Object Storage. More detail can be found about OpenStack Object Storage in the section below.<indexterm class=\"singular\"><primary>accounts</primary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:649(term)
msgid "Ceph<indexterm class=\"singular\"><primary>Ceph</primary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:654(para)
msgid "A scalable storage solution that replicates data across commodity storage nodes. Ceph was originally developed by one of the founders of DreamHost and is currently used in production there."
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:659(para)
msgid "Ceph was designed to expose different types of storage interfaces to the end user: it supports object storage, block storage, and file-system interfaces, although the file-system interface is not yet considered production-ready. Ceph supports the same API as swift for object storage and can be used as a backend for cinder block storage as well as backend storage for glance images. Ceph supports \"thin provisioning,\" implemented using copy-on-write."
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:668(para)
msgid "This can be useful when booting from volume because a new volume can be provisioned very quickly. Ceph also supports keystone-based authentication (as of version 0.56), so it can be a seamless swap in for the default OpenStack swift implementation."
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:674(para)
msgid "Ceph's advantages are that it gives the administrator more fine-grained control over data distribution and replication strategies, enables you to consolidate your object and block storage, enables very fast provisioning of boot-from-volume instances using thin provisioning, and supports a distributed file-system interface, though this interface is <link href=\"http://ceph.com/docs/master/cephfs/\" title=\"OpenStack wiki\">not yet recommended</link> for use in production deployment by the Ceph project."
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:684(para)
msgid "If you want to manage your object and block storage within a single system, or if you want to support fast boot-from-volume, you should consider Ceph."
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:691(term)
msgid "Gluster<indexterm class=\"singular\"><primary>GlusterFS</primary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:696(para)
msgid "A distributed, shared file system. As of Gluster version 3.3, you can use Gluster to consolidate your object storage and file storage into one unified file and object storage solution, which is called Gluster For OpenStack (GFO). GFO uses a customized version of swift that enables Gluster to be used as the backend storage."
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:703(para)
msgid "The main reason to use GFO rather than regular swift is if you also want to support a distributed file system, either to support shared storage live migration or to provide it as a separate service to your end users. If you want to manage your object and file storage within a single system, you should consider GFO."
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:713(term)
msgid "LVM<indexterm class=\"singular\"><primary>LVM (Logical Volume Manager)</primary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:718(para)
msgid "The Logical Volume Manager is a Linux-based system that provides an abstraction layer on top of physical disks to expose logical volumes to the operating system. The LVM backend implements block storage as LVM logical partitions."
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:723(para)
msgid "On each host that will house block storage, an administrator must initially create a volume group dedicated to Block Storage volumes. Blocks are created from LVM logical volumes."
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:728(para)
msgid "LVM does <emphasis>not</emphasis> provide any replication. Typically, administrators configure RAID on nodes that use LVM as block storage to protect against failures of individual hard drives. However, RAID does not protect against a failure of the entire host."
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:738(term)
msgid "ZFS<indexterm class=\"singular\"><primary>ZFS</primary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:743(para)
msgid "The Solaris iSCSI driver for OpenStack Block Storage implements blocks as ZFS entities. ZFS is a file system that also has the functionality of a volume manager. This is unlike on a Linux system, where there is a separation of volume manager (LVM) and file system (such as, ext3, ext4, xfs, and btrfs). ZFS has a number of advantages over ext4, including improved data-integrity checking."
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:751(para)
msgid "The ZFS backend for OpenStack Block Storage supports only Solaris-based systems, such as Illumos. While there is a Linux port of ZFS, it is not included in any of the standard Linux distributions, and it has not been tested with OpenStack Block Storage. As with LVM, ZFS does not provide replication across hosts on its own; you need to add a replication solution on top of ZFS if your cloud needs to be able to handle storage-node failures."
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:760(para)
msgid "We don't recommend ZFS unless you have previous experience with deploying it, since the ZFS backend for Block Storage requires a Solaris-based operating system, and we assume that your experience is primarily with Linux-based systems."
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:768(term)
msgid "Sheepdog<indexterm class=\"singular\"><primary>Sheepdog</primary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:773(para)
msgid "Sheepdog is a userspace distributed storage system. Sheepdog scales to several hundred nodes, and has powerful virtual disk management features like snapshot, cloning, rollback, thin provisioning."
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:777(para)
msgid "It is essentially an object storage system that manages disks and aggregates the space and performance of disks linearly in hyper scale on commodity hardware in a smart way. On top of its object store, Sheepdog provides elastic volume service and http service. Sheepdog does not assume anything about kernel version and can work nicely with xattr-supported file systems."
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:791(para)
msgid "We hope that you now have some considerations in mind and questions to ask your future cloud users about their storage use cases. As you can see, your storage decisions will also influence your network design for performance and security needs. Continue with us to make more informed decisions about your OpenStack cloud <phrase role=\"keep-together\">design</phrase>."
msgstr ""
#. When image changes, this message will be marked fuzzy or untranslated for you.
#. It doesn't matter what you translate it to: it's not used at all.
#: ./doc/openstack-ops/ch_ops_projects_users.xml:110(None)
msgid "@@image: 'http://git.openstack.org/cgit/openstack/operations-guide/plain/doc/openstack-ops/figures/osog_0901.png'; md5=THIS FILE DOESN'T EXIST"
msgstr ""
#. When image changes, this message will be marked fuzzy or untranslated for you.
#. It doesn't matter what you translate it to: it's not used at all.
#: ./doc/openstack-ops/ch_ops_projects_users.xml:863(None)
msgid "@@image: 'http://git.openstack.org/cgit/openstack/operations-guide/plain/doc/openstack-ops/figures/osog_0902.png'; md5=THIS FILE DOESN'T EXIST"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:12(title)
msgid "Managing Projects and Users"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:14(para)
msgid "An OpenStack cloud does not have much value without users. This chapter covers topics that relate to managing users, projects, and quotas. This chapter describes users and projects as described by version 2 of the OpenStack Identity API."
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:20(para)
msgid "While version 3 of the Identity API is available, the client tools do not yet implement those calls, and most OpenStack clouds are still implementing Identity API v2.0.<indexterm class=\"singular\"><primary>Identity Service</primary><secondary>Identity Service API</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:30(title)
msgid "Projects or Tenants?"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:32(para)
msgid "In OpenStack user interfaces and documentation, a group of users is referred to as a <glossterm>project</glossterm> or <glossterm>tenant</glossterm>. These terms are interchangeable.<indexterm class=\"singular\"><primary>user management</primary><secondary>terminology for</secondary></indexterm><indexterm class=\"singular\"><primary>tenant</primary><secondary>definition of</secondary></indexterm><indexterm class=\"singular\"><primary>projects</primary><secondary>definition of</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:49(para)
msgid "The initial implementation of the OpenStack Compute Service (nova) had its own authentication system and used the term <literal>project</literal>. When authentication moved into the OpenStack Identity Service (keystone) project, it used the term <literal>tenant</literal> to refer to a group of users. Because of this legacy, some of the OpenStack tools refer to projects and some refer to tenants."
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:58(para)
msgid "This guide uses the term <literal>project</literal>, unless an example shows interaction with a tool that uses the term <literal>tenant</literal>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:65(title)
msgid "Managing Projects"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:67(para)
msgid "Users must be associated with at least one project, though they may belong to many. Therefore, you should add at least one project before adding users.<indexterm class=\"singular\"><primary>user management</primary><secondary>adding projects</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:76(title)
msgid "Adding Projects"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:78(para)
msgid "To create a project through the OpenStack dashboard:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:82(para)
msgid "Log in as an administrative user."
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:86(para)
msgid "Select the <guilabel>Admin</guilabel> tab in the left navigation bar."
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:91(para)
msgid "Under Identity Panel, click <guilabel>Projects</guilabel>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:96(para)
msgid "Click the <guibutton>Create Project</guibutton> button."
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:100(para)
msgid "You are prompted for a project name and an optional, but recommended, description. Select the checkbox at the bottom of the form to enable this project. By default, it is enabled, as shown in <xref linkend=\"horizon-add-project\"/>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:106(title)
msgid "Dashboard's Create Project form"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:115(para)
msgid "It is also possible to add project members and adjust the project quotas. We'll discuss those actions later, but in practice, it can be quite convenient to deal with all these operations at one time."
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:119(para)
msgid "To add a project through the command line, you must use the keystone utility, which uses <literal>tenant</literal> in place of <literal>project</literal>:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:125(para)
msgid "This command creates a project named \"demo.\" Optionally, you can add a description string by appending <code>--description <replaceable>tenant-description</replaceable></code>, which can be very useful. You can also create a group in a disabled state by appending <code>--enabled false</code> to the command. By default, projects are created in an enabled state."
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:135(title)
msgid "Quotas"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:137(para)
msgid "To prevent system capacities from being exhausted without notification, you can set up <glossterm baseform=\"quota\">quotas</glossterm>. Quotas are operational limits. For example, the number of gigabytes allowed per tenant can be controlled to ensure that a single tenant cannot consume all of the disk space. Quotas are currently enforced at the tenant (or project) level, rather than the user level.<indexterm class=\"startofrange\" xml:id=\"quotas9\"><primary>quotas</primary></indexterm><indexterm class=\"singular\"><primary>user management</primary><secondary>quotas</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:152(para)
msgid "Because without sensible quotas a single tenant could use up all the available resources, default quotas are shipped with OpenStack. You should pay attention to which quota settings make sense for your hardware capabilities."
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:158(para)
msgid "Using the command-line interface, you can manage quotas for the OpenStack Compute Service and the Block Storage Service."
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:161(para)
msgid "Typically, default values are changed because a tenant requires more than the OpenStack default of 10 volumes per tenant, or more than the OpenStack default of 1 TB of disk space on a compute node."
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:166(para)
msgid "To view all tenants, run: <placeholder-1/>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:180(title)
msgid "Set Image Quotas"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:182(para)
msgid "OpenStack Havana introduced a basic quota feature for the Image service, so you can now restrict a project's image storage by total number of bytes. Currently, this quota is applied cloud-wide, so if you were to set an Image quota limit of 5 GB, then all projects in your cloud will be able to store only 5 GB of images and snapshots.<indexterm class=\"singular\"><primary>Image service</primary><secondary>quota setting</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:193(para)
msgid "To enable this feature, edit the <filename>/etc/glance/glance-api.conf</filename> file, and under the [DEFAULT] section, add:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:199(para)
msgid "For example, to restrict a project's image storage to 5 GB, do this:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:205(para)
msgid "In the Icehouse release, there is a configuration option in <filename>glance-api.conf</filename> that limits the number of members allowed per image, called <code>image_member_quota</code>, set to 128 by default. That setting is a different quota from the storage quota.<indexterm class=\"singular\"><primary>Icehouse</primary><secondary>image quotas</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:218(title)
msgid "Set Compute Service Quotas"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:220(para)
msgid "As an administrative user, you can update the Compute Service quotas for an existing tenant, as well as update the quota defaults for a new tenant.<indexterm class=\"singular\"><primary>Compute</primary><secondary>Compute Service</secondary></indexterm> See <xref linkend=\"compute-quota-table\"/>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:229(caption)
msgid "Compute quota descriptions"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:239(th)
msgid "Quota"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:243(th) ./doc/openstack-ops/ch_ops_projects_users.xml:611(th)
msgid "Property name"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:249(para)
msgid "Fixed IPs"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:251(para)
msgid "Number of fixed IP addresses allowed per tenant. This number must be equal to or greater than the number of allowed instances."
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:255(systemitem)
msgid "fixed-ips"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:261(para)
msgid "Number of floating IP addresses allowed per tenant."
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:264(systemitem)
msgid "floating-ips"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:268(para)
msgid "Injected file content bytes"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:270(para)
msgid "Number of content bytes allowed per injected file."
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:273(systemitem)
msgid "injected-file-content-bytes"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:278(para)
msgid "Injected file path bytes"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:280(para)
msgid "Number of bytes allowed per injected file path."
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:283(systemitem)
msgid "injected-file-path-bytes"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:288(para)
msgid "Injected files"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:290(para)
msgid "Number of injected files allowed per tenant."
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:292(systemitem)
msgid "injected-files"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:298(para)
msgid "Number of instances allowed per tenant."
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:304(para)
msgid "Key pairs"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:306(para)
msgid "Number of key pairs allowed per user."
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:308(systemitem)
msgid "key-pairs"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:312(para)
msgid "Metadata items"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:314(para)
msgid "Number of metadata items allowed per instance."
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:317(systemitem)
msgid "metadata-items"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:321(para)
msgid "RAM"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:323(para)
msgid "Megabytes of instance RAM allowed per tenant."
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:326(systemitem)
msgid "ram"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:330(para)
msgid "Security group rules"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:332(para)
msgid "Number of rules per security group."
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:334(systemitem)
msgid "security-group-rules"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:339(para)
msgid "Security groups"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:341(para)
msgid "Number of security groups per tenant."
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:343(systemitem)
msgid "security-groups"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:349(para)
msgid "Number of instance cores allowed per tenant."
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:351(systemitem)
msgid "cores"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:357(title)
msgid "View and update compute quotas for a tenant (project)"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:359(para)
msgid "As an administrative user, you can use the <literal>nova quota-*</literal> commands, which are provided by the <literal>python-novaclient</literal> package, to view and update tenant quotas."
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:365(title)
msgid "To view and update default quota values"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:368(para) ./doc/openstack-ops/ch_ops_projects_users.xml:656(para)
msgid "List all default quotas for all tenants, as follows:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:394(para)
msgid "Update a default value for a new tenant, as follows:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:396(replaceable)
msgid "value"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:407(title)
msgid "To view quota values for a tenant (project)"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:410(para) ./doc/openstack-ops/ch_ops_projects_users.xml:704(para)
msgid "Place the tenant ID in a useable variable, as follows:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:413(replaceable) ./doc/openstack-ops/ch_ops_projects_users.xml:450(replaceable) ./doc/openstack-ops/ch_ops_projects_users.xml:685(replaceable) ./doc/openstack-ops/ch_ops_projects_users.xml:707(replaceable)
msgid "tenantName"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:417(para)
msgid "List the currently set quota values for a tenant, as follows:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:445(title)
msgid "To update quota values for a tenant (project)"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:448(para)
msgid "Obtain the tenant ID, as follows:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:454(para) ./doc/openstack-ops/ch_ops_projects_users.xml:711(para)
msgid "Update a particular quota value, as follows:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:456(replaceable) ./doc/openstack-ops/ch_ops_projects_users.xml:713(replaceable)
msgid "quotaName"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:456(replaceable)
msgid "quotaValue"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:456(replaceable) ./doc/openstack-ops/ch_ops_projects_users.xml:713(replaceable)
msgid "tenantID"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:480(para)
msgid "To view a list of options for the <literal>quota-update</literal> command, run:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:491(title)
msgid "Set Object Storage Quotas"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:493(para)
msgid "Object Storage quotas were introduced in Swift 1.8 (OpenStack Grizzly). There are currently two categories of quotas for Object Storage:<indexterm class=\"singular\"><primary>account quotas</primary></indexterm><indexterm class=\"singular\"><primary>containers</primary><secondary>quota setting</secondary></indexterm><indexterm class=\"singular\"><primary>Object Storage</primary><secondary>quota setting</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:509(term)
msgid "Container quotas"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:512(para)
msgid "Limit the total size (in bytes) or number of objects that can be stored in a single container."
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:518(term)
msgid "Account quotas"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:521(para)
msgid "Limit the total size (in bytes) that a user has available in the Object Storage <phrase role=\"keep-together\">service</phrase>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:528(para)
msgid "To take advantage of either container quotas or account quotas, your Object Storage proxy server must have <code>container_quotas</code> or <code>account_quotas</code> (or both) added to the <literal>[pipeline:main]</literal> pipeline. Each quota type also requires its own section in the <filename>proxy-server.conf</filename> file:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:545(para)
msgid "To view and update Object Storage quotas, use the <code>swift</code> command provided by the <code>python-swiftclient</code> package. Any user included in the project can view the quotas placed on their project. To update Object Storage quotas on a project, you must have the role of ResellerAdmin in the project that the quota is being applied to."
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:554(para)
msgid "To view account quotas placed on a project:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:567(para)
msgid "To apply or update account quotas on a project:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:572(para)
msgid "For example, to place a 5 GB quota on an account:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:577(para)
msgid "To verify the quota, run the <literal>swift stat</literal> command again:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:593(title)
msgid "Set Block Storage Quotas"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:595(para)
msgid "As an administrative user, you can update the Block Storage Service quotas for a tenant, as well as update the quota defaults for a new tenant. See <xref linkend=\"block-storage-quota-table\"/>.<indexterm class=\"singular\"><primary>Block Storage</primary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:603(caption)
msgid "Block Storage quota descriptions"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:619(para)
msgid "gigabytes"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:621(para)
msgid "Number of volume gigabytes allowed per tenant"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:626(para)
msgid "snapshots"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:628(para)
msgid "Number of Block Storage snapshots allowed per tenant."
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:633(para)
msgid "volumes"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:635(para)
msgid "Number of Block Storage volumes allowed per tenant"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:644(title)
msgid "View and update Block Storage quotas for a tenant (project)"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:647(para)
msgid "As an administrative user, you can use the <literal>cinder quota-*</literal> commands, which are provided by the <literal>python-cinderclient</literal> package, to view and update tenant quotas."
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:653(title)
msgid "To view and update default Block Storage quota values"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:673(para)
msgid "To update a default value for a new tenant, update the property in the <filename>/etc/cinder/cinder.conf</filename> file."
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:680(title)
msgid "To view Block Storage quotas for a tenant (project)"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:683(para)
msgid "View quotas for the tenant, as follows:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:701(title)
msgid "To update Block Storage quotas for a tenant (project)"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:713(replaceable)
msgid "NewValue"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:715(para)
msgid "For example:<indexterm class=\"endofrange\" startref=\"quotas9\"/>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:734(title)
msgid "User Management"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:736(para)
msgid "The command-line tools for managing users are inconvenient to use directly. They require issuing multiple commands to complete a single task, and they use UUIDs rather than symbolic names for many items. In practice, humans typically do not use these tools directly. Fortunately, the OpenStack dashboard provides a reasonable interface to this. In addition, many sites write custom tools for local needs to enforce local policies and provide levels of self-service to users that aren't currently available with packaged tools.<indexterm class=\"singular\"><primary>user management</primary><secondary>creating new users</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:751(title)
msgid "Creating New Users"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:753(para)
msgid "To create a user, you need the following information:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:757(para)
msgid "Username"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:761(para)
msgid "Email address"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:765(para)
msgid "Password"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:769(para)
msgid "Primary project"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:773(para)
msgid "Role"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:777(para)
msgid "Username and email address are self-explanatory, though your site may have local conventions you should observe. The primary project is simply the first project the user is associated with and must exist prior to creating the user. Role is almost always going to be \"member.\" Out of the box, OpenStack comes with two roles defined:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:788(term)
msgid "member"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:791(para)
msgid "A typical user"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:796(term)
msgid "admin"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:799(para)
msgid "An administrative super user, which has full permissions across all projects and should be used with great care"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:805(para)
msgid "It is possible to define other roles, but doing so is uncommon."
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:808(para)
msgid "Once you've gathered this information, creating the user in the dashboard is just another web form similar to what we've seen before and can be found by clicking the Users link in the Admin navigation bar and then clicking the Create User button at the top right."
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:813(para)
msgid "Modifying users is also done from this Users page. If you have a large number of users, this page can get quite crowded. The Filter search box at the top of the page can be used to limit the users listing. A form very similar to the user creation dialog can be pulled up by selecting Edit from the actions dropdown menu at the end of the line for the user you are modifying."
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:822(title)
msgid "Associating Users with Projects"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:824(para)
msgid "Many sites run with users being associated with only one project. This is a more conservative and simpler choice both for administration and for users. Administratively, if a user reports a problem with an instance or quota, it is obvious which project this relates to. Users needn't worry about what project they are acting in if they are only in one project. However, note that, by default, any user can affect the resources of any other user within their project. It is also possible to associate users with multiple projects if that makes sense for your organization.<indexterm class=\"singular\"><primary>Project Members tab</primary></indexterm><indexterm class=\"singular\"><primary>user management</primary><secondary>associating users with projects</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:840(para)
msgid "Associating existing users with an additional project or removing them from an older project is done from the Projects page of the dashboard by selecting Modify Users from the Actions column, as shown in <xref linkend=\"horizon-edit-project\"/>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:845(para)
msgid "From this view, you can do a number of useful things, as well as a few dangerous ones."
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:848(para)
msgid "The first column of this form, named All Users, includes a list of all the users in your cloud who are not already associated with this project. The second column shows all the users who are. These lists can be quite long, but they can be limited by typing a substring of the username you are looking for in the filter field at the top of the <phrase role=\"keep-together\">column</phrase>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:855(para)
msgid "From here, click the <guiicon>+</guiicon> icon to add users to the project. Click the <guiicon>-</guiicon> to remove them."
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:859(title)
msgid "<guilabel>Edit Project Members</guilabel> tab"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:868(para)
msgid "The dangerous possibility comes with the ability to change member roles. This is the dropdown list below the username in the <guilabel>Project Members</guilabel> list. In virtually all cases, this value should be set to Member. This example purposefully shows an administrative user where this value is admin."
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:875(para)
msgid "The admin is global, not per project, so granting a user the admin role in any project gives the user administrative rights across the whole cloud."
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:880(para)
msgid "Typical use is to only create administrative users in a single project, by convention the admin project, which is created by default during cloud setup. If your administrative users also use the cloud to launch and manage instances, it is strongly recommended that you use separate user accounts for administrative access and normal operations and that they be in distinct projects.<indexterm class=\"singular\"><primary>accounts</primary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:890(title)
msgid "Customizing Authorization"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:892(para)
msgid "The default <glossterm>authorization</glossterm> settings allow administrative users only to create resources on behalf of a different project. OpenStack handles two kinds of authorization <phrase role=\"keep-together\">policies</phrase>:<indexterm class=\"singular\"><primary>authorization</primary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:903(term)
msgid "Operation based"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:906(para)
msgid "Policies specify access criteria for specific operations, possibly with fine-grained control over specific attributes."
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:913(term)
msgid "Resource based"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:916(para)
msgid "Whether access to a specific resource might be granted or not according to the permissions configured for the resource (currently available only for the network resource). The actual authorization policies enforced in an OpenStack service vary from deployment to deployment."
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:925(para)
msgid "The policy engine reads entries from the <code>policy.json</code> file. The actual location of this file might vary from distribution to distribution: for nova, it is typically in <code>/etc/nova/policy.json</code>. You can update entries while the system is running, and you do not have to restart services. Currently, the only way to update such policies is to edit the policy file."
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:932(para)
msgid "The OpenStack service's policy engine matches a policy directly. A rule indicates evaluation of the elements of such policies. For instance, in a <code>compute:create: [[\"rule:admin_or_owner\"]]</code> statement, the policy is <code>compute:create</code>, and the rule is <code>admin_or_owner</code>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:938(para)
msgid "Policies are triggered by an OpenStack policy engine whenever one of them matches an OpenStack API operation or a specific attribute being used in a given operation. For instance, the engine tests the <code>create:compute</code> policy every time a user sends a <code>POST /v2/{tenant_id}/servers</code> request to the OpenStack Compute API server. Policies can be also related to specific <glossterm>API extension</glossterm>s. For instance, if a user needs an extension like <code>compute_extension:rescue</code>, the attributes defined by the provider extensions trigger the rule test for that operation."
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:948(para)
msgid "An authorization policy can be composed by one or more rules. If more rules are specified, evaluation policy is successful if any of the rules evaluates successfully; if an API operation matches multiple policies, then all the policies must evaluate successfully. Also, authorization rules are recursive. Once a rule is matched, the rule(s) can be resolved to another rule, until a terminal rule is reached. These are the rules <phrase role=\"keep-together\">defined</phrase>:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:958(term)
msgid "Role-based rules"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:961(para)
msgid "Evaluate successfully if the user submitting the request has the specified role. For instance, <code>\"role:admin\"</code> is successful if the user submitting the request is an administrator."
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:969(term)
msgid "Field-based rules"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:972(para)
msgid "Evaluate successfully if a field of the resource specified in the current request matches a specific value. For instance, <code>\"field:networks:shared=True\"</code> is successful if the attribute shared of the network resource is set to <literal>true</literal>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:981(term)
msgid "Generic rules"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:984(para)
msgid "Compare an attribute in the resource with an attribute extracted from the user's security credentials and evaluates successfully if the comparison is successful. For instance, <code>\"tenant_id:%(tenant_id)s\"</code> is successful if the tenant identifier in the resource is equal to the tenant identifier of the user submitting the request."
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:994(para)
msgid "Here are snippets of the default nova <filename>policy.json</filename> file:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:1024(para)
msgid "Shows a rule that evaluates successfully if the current user is an administrator or the owner of the resource specified in the request (tenant identifier is equal)."
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:1030(para)
msgid "Shows the default policy, which is always evaluated if an API operation does not match any of the policies in <code>policy.json</code>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:1036(para)
msgid "Shows a policy restricting the ability to manipulate flavors to administrators using the Admin API only.<indexterm class=\"singular\"><primary>admin API</primary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:1044(para)
msgid "In some cases, some operations should be restricted to administrators only. Therefore, as a further example, let us consider how this sample policy file could be modified in a scenario where we enable users to create their own flavors:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:1053(title)
msgid "Users Who Disrupt Other Users"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:1055(para)
msgid "Users on your cloud can disrupt other users, sometimes intentionally and maliciously and other times by accident. Understanding the situation allows you to make a better decision on how to handle the disruption.<indexterm class=\"singular\"><primary>user management</primary><secondary>handling disruptive users</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:1064(para)
msgid "For example, a group of users have instances that are utilizing a large amount of compute resources for very compute-intensive tasks. This is driving the load up on compute nodes and affecting other users. In this situation, review your user use cases. You may find that high compute scenarios are common, and should then plan for proper segregation in your cloud, such as host aggregation or regions."
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:1071(para)
msgid "Another example is a user consuming a very large amount of bandwidth<indexterm class=\"singular\"><primary>bandwidth</primary><secondary>recognizing DDOS attacks</secondary></indexterm>. Again, the key is to understand what the user is doing. If she naturally needs a high amount of bandwidth, you might have to limit her transmission rate as to not affect other users or move her to an area with more bandwidth available. On the other hand, maybe her instance has been hacked and is part of a botnet launching DDOS attacks. Resolution of this issue is the same as though any other server on your network has been hacked. Contact the user and give her time to respond. If she doesn't respond, shut down the instance."
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:1085(para)
msgid "A final example is if a user is hammering cloud resources repeatedly. Contact the user and learn what he is trying to do. Maybe he doesn't understand that what hes doing is inappropriate, or maybe there is an issue with the resource he is trying to access that is causing his requests to queue or lag."
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:1096(para)
msgid "One key element of systems administration that is often overlooked is that end users are the reason systems administrators exist. Don't go the BOFH route and terminate every user who causes an alert to go off. Work with users to understand what they're trying to accomplish and see how your environment can better assist them in achieving their goals. Meet your users needs by organizing your users into projects, applying policies, managing quotas, and working with them.<indexterm class=\"singular\"><primary>systems administration</primary><see>user management</see></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:12(title)
msgid "Compute Nodes"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:14(para)
msgid "In this chapter, we discuss some of the choices you need to consider when building out your compute nodes. Compute nodes form the resource core of the OpenStack Compute cloud, providing the processing, memory, network and storage resources to run instances."
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:20(title)
msgid "Choosing a CPU"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:22(para)
msgid "The type of CPU in your compute node is a very important choice. First, ensure that the CPU supports virtualization by way of <emphasis>VT-x</emphasis> for Intel chips and <emphasis>AMD-v</emphasis> for AMD chips.<indexterm class=\"singular\"><primary>CPUs (central processing units)</primary><secondary>choosing</secondary></indexterm><indexterm class=\"singular\"><primary>compute nodes</primary><secondary>CPU choice</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:36(para)
msgid "Consult the vendor documentation to check for virtualization support. For Intel, read <link href=\"http://www.intel.com/support/processors/sb/cs-030729.htm\" title=\"Intel VT-x\"> “Does my processor support Intel® Virtualization Technology?”</link>. For AMD, read <link href=\" http://www.amd.com/en-us/innovations/software-technologies/server-solution/virtualization\" title=\"AMD-v\"> AMD Virtualization</link>. Note that your CPU may support virtualization but it may be disabled. Consult your BIOS documentation for how to enable CPU features.<indexterm class=\"singular\"><primary>virtualization technology</primary></indexterm><indexterm class=\"singular\"><primary>AMD Virtualization</primary></indexterm><indexterm class=\"singular\"><primary>Intel Virtualization Technology</primary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:52(para)
msgid "The number of cores that the CPU has also affects the decision. It's common for current CPUs to have up to 12 cores. Additionally, if an Intel CPU supports hyperthreading, those 12 cores are doubled to 24 cores. If you purchase a server that supports multiple CPUs, the number of cores is further multiplied.<indexterm class=\"singular\"><primary>cores</primary></indexterm><indexterm class=\"singular\"><primary>hyperthreading</primary></indexterm><indexterm class=\"singular\"><primary>multithreading</primary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:67(title)
msgid "Multithread Considerations"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:69(para)
msgid "Hyper-Threading is Intel's proprietary simultaneous multithreading implementation used to improve parallelization on their CPUs. You might consider enabling Hyper-Threading to improve the performance of multithreaded applications."
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:74(para)
msgid "Whether you should enable Hyper-Threading on your CPUs depends upon your use case. For example, disabling Hyper-Threading can be beneficial in intense computing environments. We recommend that you do performance testing with your local workload with both Hyper-Threading on and off to determine what is more appropriate in your case.<indexterm class=\"singular\"><primary>CPUs (central processing units)</primary><secondary>enabling hyperthreading on</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:88(title)
msgid "Choosing a Hypervisor"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:129(link)
msgid "LXC"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:134(link)
msgid "QEMU"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:139(link)
msgid "VMware ESX/ESXi"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:144(link)
msgid "Xen"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:149(link)
msgid "Hyper-V"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:154(link)
msgid "Docker"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:90(para)
msgid "A hypervisor provides software to manage virtual machine access to the underlying hardware. The hypervisor creates, manages, and monitors virtual machines.<indexterm class=\"singular\"><primary>Docker</primary></indexterm><indexterm class=\"singular\"><primary>Hyper-V</primary></indexterm><indexterm class=\"singular\"><primary>ESXi hypervisor</primary></indexterm><indexterm class=\"singular\"><primary>ESX hypervisor</primary></indexterm><indexterm class=\"singular\"><primary>VMware API</primary></indexterm><indexterm class=\"singular\"><primary>Quick EMUlator (QEMU)</primary></indexterm><indexterm class=\"singular\"><primary>Linux containers (LXC)</primary></indexterm><indexterm class=\"singular\"><primary>kernel-based VM (KVM) hypervisor</primary></indexterm><indexterm class=\"singular\"><primary>Xen API</primary><secondary>XenServer hypervisor</secondary></indexterm><indexterm class=\"singular\"><primary>hypervisors</primary><secondary>choosing</secondary></indexterm><indexterm class=\"singular\"><primary>compute nodes</primary><secondary>hypervisor choice</secondary></indexterm> OpenStack Compute supports many hypervisors to various degrees, including: <placeholder-1/>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:158(para)
msgid "Probably the most important factor in your choice of hypervisor is your current usage or experience. Aside from that, there are practical concerns to do with feature parity, documentation, and the level of community experience."
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:163(para)
msgid "For example, KVM is the most widely adopted hypervisor in the OpenStack community. Besides KVM, more deployments run Xen, LXC, VMware, and Hyper-V than the others listed. However, each of these are lacking some feature support or the documentation on how to use them with OpenStack is out of date."
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:169(para)
msgid "The best information available to support your choice is found on the <link href=\"http://docs.openstack.org/developer/nova/support-matrix.html\" title=\"reference manual\">Hypervisor Support Matrix</link> and in the <link href=\"http://docs.openstack.org/juno/config-reference/content/section_compute-hypervisors.html\" title=\"configuration reference\">configuration reference</link>."
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:177(para)
msgid "It is also possible to run multiple hypervisors in a single deployment using host aggregates or cells. However, an individual compute node can run only a single hypervisor at a time.<indexterm class=\"singular\"><primary>hypervisors</primary><secondary>running multiple</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:189(title)
msgid "Instance Storage Solutions"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:191(para)
msgid "As part of the procurement for a compute cluster, you must specify some storage for the disk on which the instantiated instance runs. There are three main approaches to providing this temporary-style storage, and it is important to understand the implications of the choice.<indexterm class=\"singular\"><primary>storage</primary><secondary>instance storage solutions</secondary></indexterm><indexterm class=\"singular\"><primary>instances</primary><secondary>storage solutions</secondary></indexterm><indexterm class=\"singular\"><primary>compute nodes</primary><secondary>instance storage solutions</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:209(para)
msgid "They are:"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:213(para)
msgid "Off compute node storage—shared file system"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:217(para)
msgid "On compute node storage—shared file system"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:221(para)
msgid "On compute node storage—nonshared file system"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:225(para)
msgid "In general, the questions you should ask when selecting storage are as follows:"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:230(para)
msgid "What is the platter count you can achieve?"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:234(para)
msgid "Do more spindles result in better I/O despite network access?"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:239(para)
msgid "Which one results in the best cost-performance scenario you're aiming for?"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:244(para)
msgid "How do you manage the storage operationally?"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:248(para)
msgid "Many operators use separate compute and storage hosts. Compute services and storage services have different requirements, and compute hosts typically require more CPU and RAM than storage hosts. Therefore, for a fixed budget, it makes sense to have different configurations for your compute nodes and your storage nodes. Compute nodes will be invested in CPU and RAM, and storage nodes will be invested in block storage."
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:256(para)
msgid "However, if you are more restricted in the number of physical hosts you have available for creating your cloud and you want to be able to dedicate as many of your hosts as possible to running instances, it makes sense to run compute and storage on the same machines."
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:261(para)
msgid "We'll discuss the three main approaches to instance storage in the next few sections."
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:267(title)
msgid "Off Compute Node Storage—Shared File System"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:269(para)
msgid "In this option, the disks storing the running instances are hosted in servers outside of the compute nodes.<indexterm class=\"singular\"><primary>shared storage</primary></indexterm><indexterm class=\"singular\"><primary>file systems</primary><secondary>shared</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:278(para)
msgid "If you use separate compute and storage hosts, you can treat your compute hosts as \"stateless.\" As long as you don't have any instances currently running on a compute host, you can take it offline or wipe it completely without having any effect on the rest of your cloud. This simplifies maintenance for the compute hosts."
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:284(para)
msgid "There are several advantages to this approach:"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:288(para)
msgid "If a compute node fails, instances are usually easily recoverable."
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:293(para)
msgid "Running a dedicated storage system can be operationally simpler."
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:298(para)
msgid "You can scale to any number of spindles."
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:302(para)
msgid "It may be possible to share the external storage for other purposes."
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:307(para)
msgid "The main downsides to this approach are:"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:311(para)
msgid "Depending on design, heavy I/O usage from some instances can affect unrelated instances."
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:316(para) ./doc/openstack-ops/ch_arch_compute_nodes.xml:350(para)
msgid "Use of the network can decrease performance."
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:322(title)
msgid "On Compute Node Storage—Shared File System"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:324(para)
msgid "In this option, each compute node is specified with a significant amount of disk space, but a distributed file system ties the disks from each compute node into a single mount."
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:328(para)
msgid "The main advantage of this option is that it scales to external storage when you require additional storage."
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:331(para)
msgid "However, this option has several downsides:"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:335(para)
msgid "Running a distributed file system can make you lose your data locality compared with nonshared storage."
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:340(para)
msgid "Recovery of instances is complicated by depending on multiple hosts."
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:345(para) ./doc/openstack-ops/ch_arch_compute_nodes.xml:387(para)
msgid "The chassis size of the compute node can limit the number of spindles able to be used in a compute node."
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:356(title)
msgid "On Compute Node Storage—Nonshared File System"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:358(para)
msgid "In this option, each compute node is specified with enough disks to store the instances it hosts.<indexterm class=\"singular\"><primary>file systems</primary><secondary>nonshared</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:365(para)
msgid "There are two main reasons why this is a good idea:"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:369(para)
msgid "Heavy I/O usage on one compute node does not affect instances on other compute nodes."
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:374(para)
msgid "Direct I/O access can increase performance."
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:378(para)
msgid "This has several downsides:"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:382(para)
msgid "If a compute node fails, the instances running on that node are lost."
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:392(para)
msgid "Migrations of instances from one node to another are more complicated and rely on features that may not continue to be developed."
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:398(para)
msgid "If additional storage is required, this option does not scale."
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:403(para)
msgid "Running a shared file system on a storage system apart from the computes nodes is ideal for clouds where reliability and scalability are the most important factors. Running a shared file system on the compute nodes themselves may be best in a scenario where you have to deploy to preexisting servers for which you have little to no control over their specifications. Running a nonshared file system on the compute nodes themselves is a good option for clouds with high I/O requirements and low concern for reliability.<indexterm class=\"singular\"><primary>scaling</primary><secondary>file system choice</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:418(title)
msgid "Issues with Live Migration"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:420(para)
msgid "We consider live migration an integral part of the operations of the cloud. This feature provides the ability to seamlessly move instances from one physical host to another, a necessity for performing upgrades that require reboots of the compute hosts, but only works well with shared storage.<indexterm class=\"singular\"><primary>storage</primary><secondary>live migration</secondary></indexterm><indexterm class=\"singular\"><primary>migration</primary></indexterm><indexterm class=\"singular\"><primary>live migration</primary></indexterm><indexterm class=\"singular\"><primary>compute nodes</primary><secondary>live migration</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:438(para)
msgid "Live migration can also be done with nonshared storage, using a feature known as <emphasis>KVM live block migration</emphasis>. While an earlier implementation of block-based migration in KVM and QEMU was considered unreliable, there is a newer, more reliable implementation of block-based live migration as of QEMU 1.4 and libvirt 1.0.2 that is also compatible with OpenStack. However, none of the authors of this guide have first-hand experience using live block migration.<indexterm class=\"singular\"><primary>block migration</primary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:451(title)
msgid "Choice of File System"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:453(para)
msgid "If you want to support shared-storage live migration, you need to configure a distributed file system.<indexterm class=\"singular\"><primary>compute nodes</primary><secondary>file system choice</secondary></indexterm><indexterm class=\"singular\"><primary>file systems</primary><secondary>choice of</secondary></indexterm><indexterm class=\"singular\"><primary>storage</primary><secondary>file system choice</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:468(para)
msgid "Possible options include:"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:472(para)
msgid "NFS (default for Linux)"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:480(para)
msgid "MooseFS"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:484(para)
msgid "Lustre"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:488(para)
msgid "We've seen deployments with all, and recommend that you choose the one you are most familiar with operating. If you are not familiar with any of these, choose NFS, as it is the easiest to set up and there is extensive community knowledge about it."
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:496(title)
msgid "Overcommitting"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:498(para)
msgid "OpenStack allows you to overcommit CPU and RAM on compute nodes. This allows you to increase the number of instances you can have running on your cloud, at the cost of reducing the performance of the instances.<indexterm class=\"singular\"><primary>RAM overcommit</primary></indexterm><indexterm class=\"singular\"><primary>CPUs (central processing units)</primary><secondary>overcommitting</secondary></indexterm><indexterm class=\"singular\"><primary>overcommitting</primary></indexterm><indexterm class=\"singular\"><primary>compute nodes</primary><secondary>overcommitting</secondary></indexterm> OpenStack Compute uses the following ratios by default:"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:518(para)
msgid "CPU allocation ratio: 16:1"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:522(para)
msgid "RAM allocation ratio: 1.5:1"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:526(para)
msgid "The default CPU allocation ratio of 16:1 means that the scheduler allocates up to 16 virtual cores per physical core. For example, if a physical node has 12 cores, the scheduler sees 192 available virtual cores. With typical flavor definitions of 4 virtual cores per instance, this ratio would provide 48 instances on a physical node."
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:532(para)
msgid "The formula for the number of virtual instances on a compute node is <emphasis>(OR*PC)/VC</emphasis>, where:"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:537(emphasis)
msgid "OR"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:540(para)
msgid "CPU overcommit ratio (virtual cores per physical core)"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:545(emphasis)
msgid "PC"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:548(para)
msgid "Number of physical cores"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:553(emphasis)
msgid "VC"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:556(para)
msgid "Number of virtual cores per instance"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:561(para)
msgid "Similarly, the default RAM allocation ratio of 1.5:1 means that the scheduler allocates instances to a physical node as long as the total amount of RAM associated with the instances is less than 1.5 times the amount of RAM available on the physical node."
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:566(para)
msgid "For example, if a physical node has 48 GB of RAM, the scheduler allocates instances to that node until the sum of the RAM associated with the instances reaches 72 GB (such as nine instances, in the case where each instance has 8 GB of RAM)."
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:571(para)
msgid "You must select the appropriate CPU and RAM allocation ratio for your particular use case."
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:576(title)
msgid "Logging"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:578(para)
msgid "Logging is detailed more fully in <xref linkend=\"logging_monitoring\"/>. However, it is an important design consideration to take into account before commencing operations of your cloud.<indexterm class=\"singular\"><primary>logging/monitoring</primary><secondary>compute nodes and</secondary></indexterm><indexterm class=\"singular\"><primary>compute nodes</primary><secondary>logging</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:591(para)
msgid "OpenStack produces a great deal of useful logging information, however; but for the information to be useful for operations purposes, you should consider having a central logging server to send logs to, and a log parsing/analysis system (such as <phrase role=\"keep-together\">logstash</phrase>)."
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:601(para)
msgid "Networking in OpenStack is a complex, multifaceted challenge. See <xref linkend=\"network_design\"/>.<indexterm class=\"singular\"><primary>compute nodes</primary><secondary>networking</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:612(para)
msgid "Compute nodes are the workhorse of your cloud and the place where your users' applications will run. They are likely to be affected by your decisions on what to deploy and how you deploy it. Their requirements should be reflected in the choices you make."
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:12(title)
msgid "Lay of the Land"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:14(para)
msgid "This chapter helps you set up your working environment and use it to take a look around your cloud."
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:18(title)
msgid "Using the OpenStack Dashboard for Administration"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:20(para)
msgid "As a cloud administrative user, you can use the OpenStack dashboard to create and manage projects, users, images, and flavors. Users are allowed to create and manage images within specified projects and to share images, depending on the Image service configuration. Typically, the policy configuration allows admin users only to set quotas and create and manage services. The dashboard provides an <guilabel>Admin</guilabel> tab with a <guilabel>System Panel</guilabel> and <guilabel>Identity Panel</guilabel>. These interfaces give you access to system information and usage as well as to settings for configuring what end users can do. Refer to the <link href=\"http://docs.openstack.org/user-guide-admin/dashboard.html\">OpenStack Admin User Guide</link> for detailed how-to information about using the dashboard as an admin user.<indexterm class=\"singular\"><primary>working environment</primary><secondary>dashboard</secondary></indexterm><indexterm class=\"singular\"><primary>dashboard</primary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:41(title)
msgid "Command-Line Tools"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:43(para)
msgid "We recommend using a combination of the OpenStack command-line interface (CLI) tools and the OpenStack dashboard for administration. Some users with a background in other cloud technologies may be using the EC2 Compatibility API, which uses naming conventions somewhat different from the native API. We highlight those differences.<indexterm class=\"singular\"><primary>working environment</primary><secondary>command-line tools</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:53(para)
msgid "We strongly suggest that you install the command-line clients from the <link href=\"https://pypi.python.org/pypi\">Python Package Index</link> (PyPI) instead of from the distribution packages. The clients are under heavy development, and it is very likely at any given time that the version of the packages distributed by your operating-system vendor are out of date.<indexterm class=\"singular\"><primary>command-line tools</primary><secondary>Python Package Index (PyPI)</secondary></indexterm><indexterm class=\"singular\"><primary>pip utility</primary></indexterm><indexterm class=\"singular\"><primary>Python Package Index (PyPI)</primary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:68(para)
msgid "The pip utility is used to manage package installation from the PyPI archive and is available in the python-pip package in most Linux distributions. Each OpenStack project has its own client, so depending on which services your site runs, install some or all of the following<indexterm class=\"singular\"><primary>neutron</primary><secondary>python-neutronclient</secondary></indexterm><indexterm class=\"singular\"><primary>swift</primary><secondary>python-swiftclient</secondary></indexterm><indexterm class=\"singular\"><primary>cinder</primary></indexterm><indexterm class=\"singular\"><primary>keystone</primary></indexterm><indexterm class=\"singular\"><primary>glance</primary><secondary>python-glanceclient</secondary></indexterm><indexterm class=\"singular\"><primary>nova</primary><secondary>python-novaclient</secondary></indexterm> packages:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:96(para)
msgid "python-novaclient (<glossterm>nova</glossterm> CLI)"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:100(para)
msgid "python-glanceclient (<glossterm>glance</glossterm> CLI)"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:104(para)
msgid "python-keystoneclient (<glossterm>keystone</glossterm> CLI)"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:109(para)
msgid "python-cinderclient (<glossterm>cinder</glossterm> CLI)"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:113(para)
msgid "python-swiftclient (<glossterm>swift</glossterm> CLI)"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:117(para)
msgid "python-neutronclient (<glossterm>neutron</glossterm> CLI)"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:122(title)
msgid "Installing the Tools"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:124(para)
msgid "To install (or upgrade) a package from the PyPI archive with pip, <indexterm class=\"singular\"><primary>command-line tools</primary><secondary>installing</secondary></indexterm>as root:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:133(para)
msgid "To remove the package:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:137(para)
msgid "If you need even newer versions of the clients, pip can install directly from the upstream git repository using the <code>-e</code> flag. You must specify a name for the Python egg that is installed. For example:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:145(para)
msgid "If you support the EC2 API on your cloud, you should also install the euca2ools package or some other EC2 API tool so that you can get the same view your users have. Using EC2 API-based tools is mostly out of the scope of this guide, though we discuss getting credentials for use with it."
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:153(title)
msgid "Administrative Command-Line Tools"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:155(para)
msgid "There are also several <literal>*-manage</literal> command-line tools. These are installed with the project's services on the cloud controller and do not need to be installed<indexterm class=\"singular\"><primary>*-manage command-line tools</primary></indexterm><indexterm class=\"singular\"><primary>command-line tools</primary><secondary>administrative</secondary></indexterm> separately:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:167(literal)
msgid "nova-manage"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:171(literal)
msgid "glance-manage"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:175(literal)
msgid "keystone-manage"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:179(literal)
msgid "cinder-manage"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:183(para)
msgid "Unlike the CLI tools mentioned above, the <code>*-manage</code> tools must be run from the cloud controller, as root, because they need read access to the config files such as <code>/etc/nova/nova.conf</code> and to make queries directly against the database rather than against the OpenStack <glossterm baseform=\"API endpoint\">API endpoints</glossterm>.<indexterm class=\"singular\"><primary>API (application programming interface)</primary><secondary>API endpoint</secondary></indexterm><indexterm class=\"singular\"><primary>endpoints</primary><secondary>API endpoint</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:199(para)
msgid "The existence of the <code>*-manage</code> tools is a legacy issue. It is a goal of the OpenStack project to eventually migrate all of the remaining functionality in the <code>*-manage</code> tools into the API-based tools. Until that day, you need to SSH into the <glossterm>cloud controller node</glossterm> to perform some maintenance operations that require one of the <phrase role=\"keep-together\"><code role=\"keep-together\">*-manage</code> tools</phrase>.<indexterm class=\"singular\"><primary>cloud controller nodes</primary><secondary>command-line tools and</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:215(title)
msgid "Getting Credentials"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:217(para)
msgid "You must have the appropriate credentials if you want to use the command-line tools to make queries against your OpenStack cloud. By far, the easiest way to obtain <glossterm>authentication</glossterm> credentials to use with command-line clients is to use the OpenStack dashboard. From the top-right navigation row, select <guimenuitem>Project</guimenuitem>, then <guimenuitem>Access &amp; Security</guimenuitem>, then <guimenuitem>API Access</guimenuitem> to access the user settings page where you can set your language and timezone preferences for the dashboard view. This action displays two buttons, <guilabel>Download OpenStack RC File</guilabel> and <guilabel>Download EC2 Credentials</guilabel>, which let you generate files that you can source in your shell to populate the environment variables the command-line tools require to know where your service endpoints and your authentication information are. The user you logged in to the dashboard dictates the filename for the openrc file, such as <filename>demo-openrc.sh</filename>. When logged in as admin, the file is named <filename>admin-openrc.sh</filename>.<indexterm class=\"singular\"><primary>credentials</primary></indexterm><indexterm class=\"singular\"><primary>authentication</primary></indexterm><indexterm class=\"singular\"><primary>command-line tools</primary><secondary>getting credentials</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:244(para)
msgid "The generated file looks something like this:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:272(para)
msgid "This does not save your password in plain text, which is a good thing. But when you source or run the script, it prompts you for your password and then stores your response in the environment variable <code>OS_PASSWORD</code>. It is important to note that this does require interactivity. It is possible to store a value directly in the script if you require a noninteractive operation, but you then need to be extremely cautious with the security and permissions of this file.<indexterm class=\"singular\"><primary>passwords</primary></indexterm><indexterm class=\"singular\"><primary>security issues</primary><secondary>passwords</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:288(para)
msgid "EC2 compatibility credentials can be downloaded by selecting <guimenuitem>Project</guimenuitem>, then <guimenuitem>Access &amp; Security</guimenuitem>, then <guimenuitem>API Access</guimenuitem> to display the <guilabel>Download EC2 Credentials</guilabel> button. Click the button to generate a ZIP file with server x509 certificates and a shell script fragment. Create a new directory in a secure location because these are live credentials containing all the authentication information required to access your cloud identity, unlike the default <code>user-openrc</code>. Extract the ZIP file here. You should have <filename>cacert.pem</filename>, <filename>cert.pem</filename>, <filename>ec2rc.sh</filename>, and <filename>pk.pem</filename>. The <filename>ec2rc.sh</filename> is similar to this:<indexterm class=\"singular\"><primary>access key</primary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:324(para)
msgid "To put the EC2 credentials into your environment, source the <code>ec2rc.sh</code> file."
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:329(title)
msgid "Inspecting API Calls"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:331(para)
msgid "The command-line tools can be made to show the OpenStack API calls they make by passing the <code>--debug</code> flag to them.<indexterm class=\"singular\"><primary>API (application programming interface)</primary><secondary>API calls, inspecting</secondary></indexterm><indexterm class=\"singular\"><primary>command-line tools</primary><secondary>inspecting API calls</secondary></indexterm> For example:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:345(para)
msgid "This example shows the HTTP requests from the client and the responses from the endpoints, which can be helpful in creating custom tools written to the OpenStack API."
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:350(title)
msgid "Using cURL for further inspection"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:352(para)
msgid "Underlying the use of the command-line tools is the OpenStack API, which is a RESTful API that runs over HTTP. There may be cases where you want to interact with the API directly or need to use it because of a suspected bug in one of the CLI tools. The best way to do this is to use a combination of <link href=\"http://curl.haxx.se/\">cURL</link> and another tool, such as <link href=\"http://stedolan.github.io/jq/\">jq</link>, to parse the JSON from the responses.<indexterm class=\"singular\"><primary>authentication tokens</primary></indexterm><indexterm class=\"singular\"><primary>cURL</primary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:365(para)
msgid "The first thing you must do is authenticate with the cloud using your credentials to get an <glossterm>authentication token</glossterm>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:369(para)
msgid "Your credentials are a combination of username, password, and tenant (project). You can extract these values from the <code>openrc.sh</code> discussed above. The token allows you to interact with your other service endpoints without needing to reauthenticate for every request. Tokens are typically good for 24 hours, and when the token expires, you are alerted with a 401 (Unauthorized) response and you can request another <phrase role=\"keep-together\">token</phrase>.<indexterm class=\"singular\"><primary>catalog</primary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:382(para)
msgid "Look at your OpenStack service <glossterm>catalog</glossterm>:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:394(para)
msgid "Read through the JSON response to get a feel for how the catalog is laid out."
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:397(para)
msgid "To make working with subsequent requests easier, store the token in an environment variable:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:407(para)
msgid "Now you can refer to your token on the command line as <literal>$TOKEN</literal>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:412(para)
msgid "Pick a service endpoint from your service catalog, such as compute. Try a request, for example, listing instances (servers):"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:423(para)
msgid "To discover how API requests should be structured, read the <link href=\"http://developer.openstack.org/api-ref.html\">OpenStack API Reference</link>. To chew through the responses using jq, see the <link href=\"http://stedolan.github.io/jq/manual/\">jq Manual</link>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:428(para)
msgid "The <code>-s flag</code> used in the cURL commands above are used to prevent the progress meter from being shown. If you are having trouble running cURL commands, you'll want to remove it. Likewise, to help you troubleshoot cURL commands, you can include the <code>-v</code> flag to show you the verbose output. There are many more extremely useful features in cURL; refer to the man page for all the options."
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:439(title)
msgid "Servers and Services"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:441(para)
msgid "As an administrator, you have a few ways to discover what your OpenStack cloud looks like simply by using the OpenStack tools available. This section gives you an idea of how to get an overview of your cloud, its shape, size, and current state.<indexterm class=\"singular\"><primary>services</primary><secondary>obtaining overview of</secondary></indexterm><indexterm class=\"singular\"><primary>servers</primary><secondary>obtaining overview of</secondary></indexterm><indexterm class=\"singular\"><primary>cloud computing</primary><secondary>cloud overview</secondary></indexterm><indexterm class=\"singular\"><primary>command-line tools</primary><secondary>servers and services</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:463(para)
msgid "First, you can discover what servers belong to your OpenStack cloud by running:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:468(para)
msgid "The output looks like the following:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:482(para)
msgid "The output shows that there are five compute nodes and one cloud controller. You see a smiley face, such as <code>:-)</code>, which indicates that the services are up and running. If a service is no longer available, the <code>:-)</code> symbol changes to <code>XXX</code>. This is an indication that you should troubleshoot why the service is down."
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:489(para)
msgid "If you are using cinder, run the following command to see a similar listing:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:502(para)
msgid "With these two tables, you now have a good overview of what servers and services make up your cloud."
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:505(para)
msgid "You can also use the Identity Service (keystone) to see what services are available in your cloud as well as what endpoints have been configured for the services.<indexterm class=\"singular\"><primary>Identity Service</primary><secondary>displaying services and endpoints with</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:513(para)
msgid "The following command requires you to have your shell environment configured with the proper administrative variables:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:540(para)
msgid "The preceding output has been truncated to show only two services. You will see one service block for each service that your cloud provides. Note how the endpoint domain can be different depending on the endpoint type. Different endpoint domains per type are not required, but this can be done for different reasons, such as endpoint privacy or network traffic segregation."
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:547(para)
msgid "You can find the version of the Compute installation by using the <literal>nova-manage</literal><phrase role=\"keep-together\">command</phrase>: <placeholder-1/>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:553(title)
msgid "Diagnose Your Compute Nodes"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:555(para)
msgid "You can obtain extra information about virtual machines that are running—their CPU usage, the memory, the disk I/O or network I/O—per instance, by running the <literal>nova diagnostics</literal> command with<indexterm class=\"singular\"><primary>compute nodes</primary><secondary>diagnosing</secondary></indexterm><indexterm class=\"singular\"><primary>command-line tools</primary><secondary>compute node diagnostics</secondary></indexterm> a server ID:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:570(para)
msgid "The output of this command varies depending on the hypervisor because hypervisors support different attributes.<indexterm class=\"singular\"><primary>hypervisors</primary><secondary>compute node diagnosis and</secondary></indexterm> The following demonstrates the difference between the two most popular hypervisors. Here is example output when the hypervisor is Xen: <placeholder-1/>While the command should work with any hypervisor that is controlled through libvirt (KVM, QEMU, or LXC), it has been tested only with KVM. Here is the example output when the hypervisor is KVM:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:620(title)
msgid "Network Inspection"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:622(para)
msgid "To see which fixed IP networks are configured in your cloud, you can use the <literal>nova</literal> command-line client to get the IP ranges:<indexterm class=\"singular\"><primary>networks</primary><secondary>inspection of</secondary></indexterm><indexterm class=\"singular\"><primary>working environment</primary><secondary>network inspection</secondary></indexterm><placeholder-1/>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:640(para)
msgid "The <literal>nova-manage</literal> tool can provide some additional details:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:648(para)
msgid "This output shows that two networks are configured, each network containing 255 IPs (a /24 subnet). The first network has been assigned to a certain project, while the second network is still open for assignment. You can assign this network manually; otherwise, it is automatically assigned when a project launches its first instance."
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:654(para)
msgid "To find out whether any floating IPs are available in your cloud, run:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:662(para)
msgid "Here, two floating IPs are available. The first has been allocated to a project, while the other is unallocated."
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:667(title)
msgid "Users and Projects"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:669(para)
msgid "To see a list of projects that have been added to the cloud,<indexterm class=\"singular\"><primary>projects</primary><secondary>obtaining list of current</secondary></indexterm><indexterm class=\"singular\"><primary>user management</primary><secondary>listing users</secondary></indexterm><indexterm class=\"singular\"><primary>working environment</primary><secondary>users and projects</secondary></indexterm> run:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:702(para)
msgid "To see a list of users, run:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:725(para)
msgid "Sometimes a user and a group have a one-to-one mapping. This happens for standard system accounts, such as cinder, glance, nova, and swift, or when only one user is part of a group."
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:732(title)
msgid "Running Instances"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:734(para)
msgid "To see a list of running instances,<indexterm class=\"singular\"><primary>instances</primary><secondary>list of running</secondary></indexterm><indexterm class=\"singular\"><primary>working environment</primary><secondary>running instances</secondary></indexterm> run:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:758(para)
msgid "Unfortunately, this command does not tell you various details about the running <phrase role=\"keep-together\">instances</phrase>, such as what compute node the instance is running on, what flavor the instance is, and so on. You can use the following command to view details about individual instances:<indexterm class=\"singular\"><primary>config drive</primary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:768(para)
msgid "For example: <placeholder-1/><placeholder-2/>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:798(para)
msgid "This output shows that an instance named <placeholder-1/> was created from an Ubuntu 12.04 image using a flavor of <literal>m1.small</literal> and is hosted on the compute node <literal>c02.example.com</literal>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:807(para)
msgid "We hope you have enjoyed this quick tour of your working environment, including how to interact with your cloud and extract useful information. From here, you can use the <emphasis><link href=\"http://docs.openstack.org/user-guide-admin/\">Admin User Guide</link></emphasis> as your reference for all of the command-line functionality in your cloud."
msgstr ""
#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:12(title)
msgid "Advanced Configuration"
msgstr ""
#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:14(para)
msgid "OpenStack is intended to work well across a variety of installation flavors, from very small private clouds to large public clouds. To achieve this, the developers add configuration options to their code that allow the behavior of the various components to be tweaked depending on your needs. Unfortunately, it is not possible to cover all possible deployments with the default configuration values.<indexterm class=\"singular\"><primary>advanced configuration</primary><see>configuration options</see></indexterm><indexterm class=\"singular\"><primary>configuration options</primary><secondary>wide availability of</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:29(para)
msgid "At the time of writing, OpenStack has more than 3,000 configuration options. You can see them documented at <link href=\"http://docs.openstack.org/kilo/config-reference/content/config_overview.html\">the OpenStack configuration reference guide</link>. This chapter cannot hope to document all of these, but we do try to introduce the important concepts so that you know where to go digging for more information."
msgstr ""
#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:37(title)
msgid "Differences Between Various Drivers"
msgstr ""
#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:39(para)
msgid "Many OpenStack projects implement a driver layer, and each of these drivers will implement its own configuration options. For example, in OpenStack Compute (nova), there are various hypervisor drivers implemented—libvirt, xenserver, hyper-v, and vmware, for example. Not all of these hypervisor drivers have the same features, and each has different tuning requirements.<indexterm class=\"singular\"><primary>hypervisors</primary><secondary>differences between</secondary></indexterm><indexterm class=\"singular\"><primary>drivers</primary><secondary>differences between</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:55(para)
msgid "The currently implemented hypervisors are listed on <link href=\"http://docs.openstack.org/kilo/config-reference/content/section_compute-hypervisors.html\">the OpenStack documentation website</link>. You can see a matrix of the various features in OpenStack Compute (nova) hypervisor drivers on the OpenStack wiki at <link href=\"http://docs.openstack.org/developer/nova/support-matrix.html\">the Hypervisor support matrix page</link>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:63(para)
msgid "The point we are trying to make here is that just because an option exists doesn't mean that option is relevant to your driver choices. Normally, the documentation notes which drivers the configuration applies to."
msgstr ""
#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:70(title)
msgid "Implementing Periodic Tasks"
msgstr ""
#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:72(para)
msgid "Another common concept across various OpenStack projects is that of periodic tasks. Periodic tasks are much like cron jobs on traditional Unix systems, but they are run inside an OpenStack process. For example, when OpenStack Compute (nova) needs to work out what images it can remove from its local cache, it runs a periodic task to do this.<indexterm class=\"singular\"><primary>periodic tasks</primary></indexterm><indexterm class=\"singular\"><primary>configuration options</primary><secondary>periodic task implementation</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:85(para)
msgid "Periodic tasks are important to understand because of limitations in the threading model that OpenStack uses. OpenStack uses cooperative threading in Python, which means that if something long and complicated is running, it will block other tasks inside that process from running unless it voluntarily yields execution to another cooperative thread.<indexterm class=\"singular\"><primary>cooperative threading</primary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:94(para)
msgid "A tangible example of this is the <literal>nova-compute</literal> process. In order to manage the image cache with libvirt, <literal>nova-compute</literal> has a periodic process that scans the contents of the image cache. Part of this scan is calculating a checksum for each of the images and making sure that checksum matches what <literal>nova-compute</literal> expects it to be. However, images can be very large, and these checksums can take a long time to generate. At one point, before it was reported as a bug and fixed, <literal>nova-compute</literal> would block on this task and stop responding to RPC requests. This was visible to users as failure of operations such as spawning or deleting instances."
msgstr ""
#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:106(para)
msgid "The take away from this is if you observe an OpenStack process that appears to \"stop\" for a while and then continue to process normally, you should check that periodic tasks aren't the problem. One way to do this is to disable the periodic tasks by setting their interval to zero. Additionally, you can configure how often these periodic tasks run—in some cases, it might make sense to run them at a different frequency from the default."
msgstr ""
#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:114(para)
msgid "The frequency is defined separately for each periodic task. Therefore, to disable every periodic task in OpenStack Compute (nova), you would need to set a number of configuration options to zero. The current list of configuration options you would need to set to zero are:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:121(literal)
msgid "bandwidth_poll_interval"
msgstr ""
#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:125(literal)
msgid "sync_power_state_interval"
msgstr ""
#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:129(literal)
msgid "heal_instance_info_cache_interval"
msgstr ""
#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:133(literal)
msgid "host_state_interval"
msgstr ""
#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:137(literal)
msgid "image_cache_manager_interval"
msgstr ""
#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:141(literal)
msgid "reclaim_instance_interval"
msgstr ""
#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:145(literal)
msgid "volume_usage_poll_interval"
msgstr ""
#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:149(literal)
msgid "shelved_poll_interval"
msgstr ""
#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:153(literal)
msgid "shelved_offload_time"
msgstr ""
#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:157(literal)
msgid "instance_delete_interval"
msgstr ""
#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:161(para)
msgid "To set a configuration option to zero, include a line such as <literal>image_cache_manager_interval=0</literal> in your <filename>nova.conf</filename> file."
msgstr ""
#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:165(para)
msgid "This list will change between releases, so please refer to your configuration guide for up-to-date information."
msgstr ""
#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:170(title)
msgid "Specific Configuration Topics"
msgstr ""
#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:172(para)
msgid "This section covers specific examples of configuration options you might consider tuning. It is by no means an exhaustive list."
msgstr ""
#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:176(title)
msgid "Security Configuration for Compute, Networking, and Storage"
msgstr ""
#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:179(para)
msgid "The <emphasis><link href=\"http://docs.openstack.org/sec/\">OpenStack Security Guide</link></emphasis> provides a deep dive into securing an OpenStack cloud, including SSL/TLS, key management, PKI and certificate management, data transport and privacy concerns, and compliance.<indexterm class=\"singular\"><primary>security issues</primary><secondary>configuration options</secondary></indexterm><indexterm class=\"singular\"><primary>configuration options</primary><secondary>security</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:195(title)
msgid "High Availability"
msgstr ""
#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:197(para)
msgid "The <emphasis><link href=\"http://docs.openstack.org/high-availability-guide/content/\">OpenStack High Availability Guide</link></emphasis> offers suggestions for elimination of a single point of failure that could cause system downtime. While it is not a completely prescriptive document, it offers methods and techniques for avoiding downtime and data loss.<indexterm class=\"singular\"><primary>high availability</primary></indexterm><indexterm class=\"singular\"><primary>configuration options</primary><secondary>high availability</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:212(title)
msgid "Enabling IPv6 Support"
msgstr ""
#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:214(para)
msgid "The Havana release with OpenStack Networking (neutron) does not offer complete support of IPv6. Better support has been delivered in the Kilo release, and will continue to improve in Liberty. You can follow along the progress being made by watching the <link href=\"https://wiki.openstack.org/wiki/Meetings/Neutron-IPv6-Subteam\">neutron IPv6 Subteam at work</link>.<indexterm class=\"singular\"><primary>Liberty</primary><secondary>IPv6 support</secondary></indexterm><indexterm class=\"singular\"><primary>IPv6, enabling support for</primary></indexterm><indexterm class=\"singular\"><primary>configuration options</primary><secondary>IPv6 support</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:231(para)
msgid "By modifying your configuration setup, you can set up IPv6 when using <literal>nova-network</literal> for networking, and a tested setup is documented for FlatDHCP and a multi-host configuration. The key is to make <literal>nova-network</literal> think a <literal>radvd</literal> command ran successfully. The entire configuration is detailed in a Cybera blog post, <link href=\"http://www.cybera.ca/news-and-events/tech-radar/an-ipv6-enabled-cloud/\">“An IPv6 enabled cloud”</link>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:241(title)
msgid "Periodic Task Frequency for Compute"
msgstr ""
#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:243(para)
msgid "Before the Grizzly release, the frequency of periodic tasks was specified in seconds between runs. This meant that if the periodic task took 30 minutes to run and the frequency was set to hourly, then the periodic task actually ran every 90 minutes, because the task would wait an hour after running before running again. This changed in Grizzly, and we now time the frequency of periodic tasks from the start of the work the task does. So, our 30 minute periodic task will run every hour, with a 30 minute wait between the end of the first run and the start of the next.<indexterm class=\"singular\"><primary>configuration options</primary><secondary>periodic task frequency</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:259(title)
msgid "Geographical Considerations for Object Storage"
msgstr ""
#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:261(para)
msgid "Enhanced support for global clustering of object storage servers continues to be added since the Grizzly (1.8.0) release, when regions were introduced. You would implement these global clusters to ensure replication across geographic areas in case of a natural disaster and also to ensure that users can write or access their objects more quickly based on the closest data center. You configure a default region with one zone for each cluster, but be sure your network (WAN) can handle the additional request and response load between zones as you add more zones and build a ring that handles more zones. Refer to <link href=\"http://docs.openstack.org/developer/swift/admin_guide.html#geographically-distributed-clusters\">Geographically Distributed Clusters</link> in the documentation for additional information.<indexterm class=\"singular\"><primary>Object Storage</primary><secondary>geographical considerations</secondary></indexterm><indexterm class=\"singular\"><primary>storage</primary><secondary>geographical considerations</secondary></indexterm><indexterm class=\"singular\"><primary>configuration options</primary><secondary>geographical storage considerations</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:12(title)
msgid "Network Design"
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:14(para)
msgid "OpenStack provides a rich networking environment, and this chapter details the requirements and options to deliberate when designing your cloud.<indexterm class=\"singular\"><primary>network design</primary><secondary>first steps</secondary></indexterm><indexterm class=\"singular\"><primary>design considerations</primary><secondary>network design</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:27(para)
msgid "If this is the first time you are deploying a cloud infrastructure in your organization, after reading this section, your first conversations should be with your networking team. Network usage in a running cloud is vastly different from traditional network deployments and has the potential to be disruptive at both a connectivity and a policy level.<indexterm class=\"singular\"><primary>cloud computing</primary><secondary>vs. traditional deployments</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:39(para)
msgid "For example, you must plan the number of IP addresses that you need for both your guest instances as well as management infrastructure. Additionally, you must research and discuss cloud network connectivity through proxy servers and firewalls."
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:44(para)
msgid "In this chapter, we'll give some examples of network implementations to consider and provide information about some of the network layouts that OpenStack uses. Finally, we have some brief notes on the networking services that are essential for stable operation."
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:50(title)
msgid "Management Network"
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:52(para)
msgid "A <glossterm>management network</glossterm> (a separate network for use by your cloud operators) typically consists of a separate switch and separate NICs (network interface cards), and is a recommended option. This segregation prevents system administration and the monitoring of system access from being disrupted by traffic generated by guests.<indexterm class=\"singular\"><primary>NICs (network interface cards)</primary></indexterm><indexterm class=\"singular\"><primary>management network</primary></indexterm><indexterm class=\"singular\"><primary>network design</primary><secondary>management network</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:67(para)
msgid "Consider creating other private networks for communication between internal components of OpenStack, such as the message queue and OpenStack Compute. Using a virtual local area network (VLAN) works well for these scenarios because it provides a method for creating multiple virtual networks on a physical network."
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:75(title)
msgid "Public Addressing Options"
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:77(para)
msgid "There are two main types of IP addresses for guest virtual machines: fixed IPs and floating IPs. Fixed IPs are assigned to instances on boot, whereas floating IP addresses can change their association between instances by action of the user. Both types of IP addresses can be either public or private, depending on your use case.<indexterm class=\"singular\"><primary>IP addresses</primary><secondary>public addressing options</secondary></indexterm><indexterm class=\"singular\"><primary>network design</primary><secondary>public addressing options</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:91(para)
msgid "Fixed IP addresses are required, whereas it is possible to run OpenStack without floating IPs. One of the most common use cases for floating IPs is to provide public IP addresses to a private cloud, where there are a limited number of IP addresses available. Another is for a public cloud user to have a \"static\" IP address that can be reassigned when an instance is upgraded or moved.<indexterm class=\"singular\"><primary>IP addresses</primary><secondary>static</secondary></indexterm><indexterm class=\"singular\"><primary>static IP addresses</primary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:104(para)
msgid "Fixed IP addresses can be private for private clouds, or public for public clouds. When an instance terminates, its fixed IP is lost. It is worth noting that newer users of cloud computing may find their ephemeral nature frustrating.<indexterm class=\"singular\"><primary>IP addresses</primary><secondary>fixed</secondary></indexterm><indexterm class=\"singular\"><primary>fixed IP addresses</primary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:117(title)
msgid "IP Address Planning"
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:119(para)
msgid "An OpenStack installation can potentially have many subnets (ranges of IP addresses) and different types of services in each. An IP address plan can assist with a shared understanding of network partition purposes and scalability. Control services can have public and private IP addresses, and as noted above, there are a couple of options for an instance's public addresses.<indexterm class=\"singular\"><primary>IP addresses</primary><secondary>address planning</secondary></indexterm><indexterm class=\"singular\"><primary>network design</primary><secondary>IP address planning</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:134(para)
msgid "An IP address plan might be broken down into the following sections:<indexterm class=\"singular\"><primary>IP addresses</primary><secondary>sections of</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:143(term)
msgid "Subnet router"
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:146(para)
msgid "Packets leaving the subnet go via this address, which could be a dedicated router or a <literal>nova-network</literal> service."
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:153(term)
msgid "Control services public interfaces"
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:156(para)
msgid "Public access to <code>swift-proxy</code>, <code>nova-api</code>, <code>glance-api</code>, and horizon come to these addresses, which could be on one side of a load balancer or pointing at individual machines."
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:164(term)
msgid "Object Storage cluster internal communications"
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:167(para)
msgid "Traffic among object/account/container servers and between these and the proxy server's internal interface uses this private network.<indexterm class=\"singular\"><primary>containers</primary><secondary>container servers</secondary></indexterm><indexterm class=\"singular\"><primary>objects</primary><secondary>object servers</secondary></indexterm><indexterm class=\"singular\"><primary>account server</primary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:184(term)
msgid "Compute and storage communications"
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:187(para)
msgid "If ephemeral or block storage is external to the compute node, this network is used."
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:193(term)
msgid "Out-of-band remote management"
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:196(para)
msgid "If a dedicated remote access controller chip is included in servers, often these are on a separate network."
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:202(term)
msgid "In-band remote management"
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:205(para)
msgid "Often, an extra (such as 1 GB) interface on compute or storage nodes is used for system administrators or monitoring tools to access the host instead of going through the public interface."
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:213(term)
msgid "Spare space for future growth"
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:216(para)
msgid "Adding more public-facing control services or guest instance IPs should always be part of your plan."
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:222(para)
msgid "For example, take a deployment that has both OpenStack Compute and Object Storage, with private ranges 172.22.42.0/24 and 172.22.87.0/26 available. One way to segregate the space might be as follows:"
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:245(para)
msgid "A similar approach can be taken with public IP addresses, taking note that large, flat ranges are preferred for use with guest instance IPs. Take into account that for some OpenStack networking options, a public IP address in the range of a guest instance public IP address is assigned to the <literal>nova-compute</literal> host."
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:253(title)
msgid "Network Topology"
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:255(para)
msgid "OpenStack Compute with <literal>nova-network</literal> provides predefined network deployment models, each with its own strengths and weaknesses. The selection of a network manager changes your network topology, so the choice should be made carefully. You also have a choice between the tried-and-true legacy <literal>nova-network</literal> settings or the <phrase role=\"keep-together\">neutron</phrase> project for OpenStack Networking. Both offer networking for launched instances with different implementations and requirements.<indexterm class=\"singular\"><primary>networks</primary><secondary>deployment options</secondary></indexterm><indexterm class=\"singular\"><primary>networks</primary><secondary>network managers</secondary></indexterm><indexterm class=\"singular\"><primary>network design</primary><secondary>network topology</secondary><tertiary>deployment options</tertiary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:278(para)
msgid "For OpenStack Networking with the neutron project, typical configurations are documented with the idea that any setup you can configure with real hardware you can re-create with a software-defined equivalent. Each tenant can contain typical network elements such as routers, and services such as DHCP."
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:284(para)
msgid "<xref linkend=\"network_deployment_options\"/> describes the networking deployment options for both legacy <literal>nova-network</literal> options and an equivalent neutron configuration.<indexterm class=\"singular\"><primary>provisioning/deployment</primary><secondary>network deployment options</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:294(caption)
msgid "Networking deployment options"
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:306(th)
msgid "Network deployment model"
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:308(th)
msgid "Strengths"
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:310(th)
msgid "Weaknesses"
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:312(th)
msgid "Neutron equivalent"
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:318(para)
msgid "Flat"
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:320(para)
msgid "Extremely simple topology."
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:320(para)
msgid "No DHCP overhead."
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:323(para)
msgid "Requires file injection into the instance to configure network interfaces."
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:326(td)
msgid "Configure a single bridge as the integration bridge (br-int) and connect it to a physical network interface with the Modular Layer 2 (ML2) plug-in, which uses Open vSwitch by default."
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:334(para)
msgid "Relatively simple to deploy."
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:334(para)
msgid "Standard networking."
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:335(para)
msgid "Works with all guest operating systems."
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:338(para) ./doc/openstack-ops/ch_arch_network_design.xml:350(para)
msgid "Requires its own DHCP broadcast domain."
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:340(td)
msgid "Configure DHCP agents and routing agents. Network Address Translation (NAT) performed outside of compute nodes, typically on one or more network nodes."
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:346(para)
msgid "VlanManager"
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:348(para)
msgid "Each tenant is isolated to its own VLANs."
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:350(para) ./doc/openstack-ops/ch_arch_network_design.xml:372(para)
msgid "More complex to set up."
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:351(para)
msgid "Requires many VLANs to be trunked onto a single port."
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:352(para)
msgid "Standard VLAN number limitation."
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:353(para)
msgid "Switches must support 802.1q VLAN tagging."
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:356(para)
msgid "Isolated tenant networks implement some form of isolation of layer 2 traffic between distinct networks. VLAN tagging is key concept, where traffic is “tagged” with an ordinal identifier for the VLAN. Isolated network implementations may or may not include additional services like DHCP, NAT, and routing."
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:364(para)
msgid "FlatDHCP Multi-host with high availability (HA)"
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:367(para)
msgid "Networking failure is isolated to the VMs running on the affected hypervisor."
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:368(para)
msgid "DHCP traffic can be isolated within an individual host."
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:369(para)
msgid "Network traffic is distributed to the compute nodes."
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:372(para)
msgid "Compute nodes typically need IP addresses accessible by external networks."
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:374(para)
msgid "Options must be carefully configured for live migration to work with networking services."
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:377(para)
msgid "Configure neutron with multiple DHCP and layer-3 agents. Network nodes are not able to failover to each other, so the controller runs networking services, such as DHCP. Compute nodes run the ML2 plug-in with support for agents such as Open vSwitch or Linux Bridge."
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:386(para)
msgid "Both <literal>nova-network</literal> and neutron services provide similar capabilities, such as VLAN between VMs. You also can provide multiple NICs on VMs with either service. Further discussion follows."
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:392(title)
msgid "VLAN Configuration Within OpenStack VMs"
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:394(para)
msgid "VLAN configuration can be as simple or as complicated as desired. The use of VLANs has the benefit of allowing each project its own subnet and broadcast segregation from other projects. To allow OpenStack to efficiently use VLANs, you must allocate a VLAN range (one for each project) and turn each compute node switch port into a trunk port.<indexterm class=\"singular\"><primary>networks</primary><secondary>VLAN</secondary></indexterm><indexterm class=\"singular\"><primary>VLAN network</primary></indexterm><indexterm class=\"singular\"><primary>network design</primary><secondary>network topology</secondary><tertiary>VLAN with OpenStack VMs</tertiary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:413(para)
msgid "For example, if you estimate that your cloud must support a maximum of 100 projects, pick a free VLAN range that your network infrastructure is currently not using (such as VLAN 200299). You must configure OpenStack with this range and also configure your switch ports to allow VLAN traffic from that range."
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:421(title)
msgid "Multi-NIC Provisioning"
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:423(para)
msgid "OpenStack Networking with <literal>neutron</literal> and OpenStack Compute with nova-network have the ability to assign multiple NICs to instances. For nova-network this can be done on a per-request basis, with each additional NIC using up an entire subnet or VLAN, reducing the total number of supported projects.<indexterm class=\"singular\"><primary>MultiNic</primary></indexterm><indexterm class=\"singular\"><primary>network design</primary><secondary>network topology</secondary><tertiary>multi-NIC provisioning</tertiary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:440(title)
msgid "Multi-Host and Single-Host Networking"
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:442(para)
msgid "The <literal>nova-network</literal> service has the ability to operate in a multi-host or single-host mode. Multi-host is when each compute node runs a copy of <literal>nova-network</literal> and the instances on that compute node use the compute node as a gateway to the Internet. The compute nodes also host the floating IPs and security groups for instances on that node. Single-host is when a central server—for example, the cloud controller—runs the <code>nova-network</code> service. All compute nodes forward traffic from the instances to the cloud controller. The cloud controller then forwards traffic to the Internet. The cloud controller hosts the floating IPs and security groups for all instances on all compute nodes in the cloud.<indexterm class=\"singular\"><primary>single-host networking</primary></indexterm><indexterm class=\"singular\"><primary>networks</primary><secondary>multi-host</secondary></indexterm><indexterm class=\"singular\"><primary>multi-host networking</primary></indexterm><indexterm class=\"singular\"><primary>network design</primary><secondary>network topology</secondary><tertiary>multi- vs. single-host networking</tertiary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:469(para)
msgid "There are benefits to both modes. Single-node has the downside of a single point of failure. If the cloud controller is not available, instances cannot communicate on the network. This is not true with multi-host, but multi-host requires that each compute node has a public IP address to communicate on the Internet. If you are not able to obtain a significant block of public IP addresses, multi-host might not be an option."
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:480(title)
msgid "Services for Networking"
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:482(para)
msgid "OpenStack, like any network application, has a number of standard considerations to apply, such as NTP and DNS.<indexterm class=\"singular\"><primary>network design</primary><secondary>services for networking</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:490(title)
msgid "NTP"
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:492(para)
msgid "Time synchronization is a critical element to ensure continued operation of OpenStack components. Correct time is necessary to avoid errors in instance scheduling, replication of objects in the object store, and even matching log timestamps for debugging.<indexterm class=\"singular\"><primary>networks</primary><secondary>Network Time Protocol (NTP)</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:502(para)
msgid "All servers running OpenStack components should be able to access an appropriate NTP server. You may decide to set up one locally or use the public pools available from the <link href=\"http://www.pool.ntp.org/en/\"> Network Time Protocol project</link>."
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:510(title)
msgid "DNS"
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:512(para)
msgid "OpenStack does not currently provide DNS services, aside from the dnsmasq daemon, which resides on <code>nova-network</code> hosts. You could consider providing a dynamic DNS service to allow instances to update a DNS entry with new IP addresses. You can also consider making a generic forward and reverse DNS mapping for instances' IP addresses, such as vm-203-0-113-123.example.com.<indexterm class=\"singular\"><primary>DNS (Domain Name Server, Service or System)</primary><secondary>DNS service choices</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:528(para)
msgid "Armed with your IP address layout and numbers and knowledge about the topologies and services you can use, it's now time to prepare the network for your installation. Be sure to also check out the <link href=\"http://docs.openstack.org/sec/\" title=\"OpenStack Security Guide\"><emphasis>OpenStack Security Guide</emphasis></link> for tips on securing your network. We wish you a good relationship with your networking team!"
msgstr ""
#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:12(title)
msgid "Backup and Recovery"
msgstr ""
#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:14(para)
msgid "Standard backup best practices apply when creating your OpenStack backup policy. For example, how often to back up your data is closely related to how quickly you need to recover from data loss.<indexterm class=\"singular\"><primary>backup/recovery</primary><secondary>considerations</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:24(para)
msgid "If you cannot have any data loss at all, you should also focus on a highly available deployment. The <emphasis><link href=\"http://docs.openstack.org/high-availability-guide/content/\">OpenStack High Availability Guide</link></emphasis> offers suggestions for elimination of a single point of failure that could cause system downtime. While it is not a completely prescriptive document, it offers methods and techniques for avoiding downtime and data loss.<indexterm class=\"singular\"><primary>data</primary><secondary>preventing loss of</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:37(para)
msgid "Other backup considerations include:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:41(para)
msgid "How many backups to keep?"
msgstr ""
#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:45(para)
msgid "Should backups be kept off-site?"
msgstr ""
#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:49(para)
msgid "How often should backups be tested?"
msgstr ""
#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:53(para)
msgid "Just as important as a backup policy is a recovery policy (or at least recovery testing)."
msgstr ""
#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:57(title)
msgid "What to Back Up"
msgstr ""
#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:59(para)
msgid "While OpenStack is composed of many components and moving parts, backing up the critical data is quite simple.<indexterm class=\"singular\"><primary>backup/recovery</primary><secondary>items included</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:66(para)
msgid "This chapter describes only how to back up configuration files and databases that the various OpenStack components need to run. This chapter does not describe how to back up objects inside Object Storage or data contained inside Block Storage. Generally these areas are left for users to back up on their own."
msgstr ""
#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:74(title)
msgid "Database Backups"
msgstr ""
#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:76(para)
msgid "The example OpenStack architecture designates the cloud controller as the MySQL server. This MySQL server hosts the databases for nova, glance, cinder, and keystone. With all of these databases in one place, it's very easy to create a database backup:<indexterm class=\"singular\"><primary>databases</primary><secondary>backup/recovery of</secondary></indexterm><indexterm class=\"singular\"><primary>backup/recovery</primary><secondary>databases</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:91(para)
msgid "If you only want to backup a single database, you can instead run:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:96(para)
msgid "where <code>nova</code> is the database you want to back up."
msgstr ""
#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:98(para)
msgid "You can easily automate this process by creating a cron job that runs the following script once per day:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:109(para)
msgid "This script dumps the entire MySQL database and deletes any backups older than seven days."
msgstr ""
#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:114(title)
msgid "File System Backups"
msgstr ""
#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:116(para)
msgid "This section discusses which files and directories should be backed up regularly, organized by service.<indexterm class=\"singular\"><primary>file systems</primary><secondary>backup/recovery of</secondary></indexterm><indexterm class=\"singular\"><primary>backup/recovery</primary><secondary>file systems</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:130(para)
msgid "The <filename>/etc/nova</filename> directory on both the cloud controller and compute nodes should be regularly backed up.<indexterm class=\"singular\"><primary>cloud controllers</primary><secondary>file system backups and</secondary></indexterm><indexterm class=\"singular\"><primary>compute nodes</primary><secondary>backup/recovery of</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:142(para)
msgid "<code>/var/log/nova</code> does not need to be backed up if you have all logs going to a central area. It is highly recommended to use a central logging server or back up the log directory."
msgstr ""
#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:146(para)
msgid "<code>/var/lib/nova</code> is another important directory to back up. The exception to this is the <code>/var/lib/nova/instances</code> subdirectory on compute nodes. This subdirectory contains the KVM images of running instances. You would want to back up this directory only if you need to maintain backup copies of all instances. Under most circumstances, you do not need to do this, but this can vary from cloud to cloud and your service levels. Also be aware that making a backup of a live KVM instance can cause that instance to not boot properly if it is ever restored from a backup."
msgstr ""
#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:158(title)
msgid "Image Catalog and Delivery"
msgstr ""
#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:160(para)
msgid "<code>/etc/glance</code> and <code>/var/log/glance</code> follow the same rules as their nova counterparts.<indexterm class=\"singular\"><primary>Image service</primary><secondary>backup/recovery of</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:167(para)
msgid "<code>/var/lib/glance</code> should also be backed up. Take special notice of <code>/var/lib/glance/images</code>. If you are using a file-based backend of glance, <code>/var/lib/glance/images</code> is where the images are stored and care should be taken."
msgstr ""
#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:172(para)
msgid "There are two ways to ensure stability with this directory. The first is to make sure this directory is run on a RAID array. If a disk fails, the directory is available. The second way is to use a tool such as rsync to replicate the images to another server:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:182(title)
msgid "Identity"
msgstr ""
#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:184(para)
msgid "<code>/etc/keystone</code> and <code>/var/log/keystone</code> follow the same rules as other components.<indexterm class=\"singular\"><primary>Identity Service</primary><secondary>backup/recovery</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:191(para)
msgid "<code>/var/lib/keystone</code>, although it should not contain any data being used, can also be backed up just in case."
msgstr ""
#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:198(para)
msgid "<code>/etc/cinder</code> and <code>/var/log/cinder</code> follow the same rules as other components.<indexterm class=\"singular\"><primary>Block Storage</primary></indexterm><indexterm class=\"singular\"><primary>storage</primary><secondary>block storage</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:207(para)
msgid "<code>/var/lib/cinder</code> should also be backed up."
msgstr ""
#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:213(para)
msgid "<code>/etc/swift</code> is very important to have backed up. This directory contains the swift configuration files as well as the ring files and ring <glossterm>builder file</glossterm>s, which if lost, render the data on your cluster inaccessible. A best practice is to copy the builder files to all storage nodes along with the ring files. Multiple backup copies are spread throughout your storage cluster.<indexterm class=\"singular\"><primary>builder files</primary></indexterm><indexterm class=\"singular\"><primary>rings</primary><secondary>ring builders</secondary></indexterm><indexterm class=\"singular\"><primary>Object Storage</primary><secondary>backup/recovery of</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:234(title)
msgid "Recovering Backups"
msgstr ""
#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:236(para)
msgid "Recovering backups is a fairly simple process. To begin, first ensure that the service you are recovering is not running. For example, to do a full recovery of <literal>nova</literal> on the cloud controller, first stop all <code>nova</code> services:<indexterm class=\"singular\"><primary>recovery</primary><seealso>backup/recovery</seealso></indexterm><indexterm class=\"singular\"><primary>backup/recovery</primary><secondary>recovering backups</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:258(para)
msgid "Now you can import a previously backed-up database:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:262(para)
msgid "You can also restore backed-up nova directories:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:267(para)
msgid "Once the files are restored, start everything back up:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:276(para)
msgid "Other services follow the same process, with their respective directories and <phrase role=\"keep-together\">databases</phrase>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:283(para)
msgid "Backup and subsequent recovery is one of the first tasks system administrators learn. However, each system has different items that need attention. By taking care of your database, image service, and appropriate file system locations, you can be assured that you can handle any event requiring recovery."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:12(title)
msgid "Maintenance, Failures, and Debugging"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:14(para)
msgid "Downtime, whether planned or unscheduled, is a certainty when running a cloud. This chapter aims to provide useful information for dealing proactively, or reactively, with these occurrences.<indexterm class=\"startofrange\" xml:id=\"maindebug\"><primary>maintenance/debugging</primary><seealso>troubleshooting</seealso></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:26(title)
msgid "Cloud Controller and Storage Proxy Failures and Maintenance"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:28(para)
msgid "The cloud controller and storage proxy are very similar to each other when it comes to expected and unexpected downtime. One of each server type typically runs in the cloud, which makes them very noticeable when they are not running."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:33(para)
msgid "For the cloud controller, the good news is if your cloud is using the FlatDHCP multi-host HA network mode, existing instances and volumes continue to operate while the cloud controller is offline. For the storage proxy, however, no storage traffic is possible until it is back up and running."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:42(title) ./doc/openstack-ops/ch_ops_maintenance.xml:174(title)
msgid "Planned Maintenance"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:44(para)
msgid "One way to plan for cloud controller or storage proxy maintenance is to simply do it off-hours, such as at 1 a.m. or 2 a.m. This strategy affects fewer users. If your cloud controller or storage proxy is too important to have unavailable at any point in time, you must look into high-availability options.<indexterm class=\"singular\"><primary>cloud controllers</primary><secondary>planned maintenance of</secondary></indexterm><indexterm class=\"singular\"><primary>maintenance/debugging</primary><secondary>cloud controller planned maintenance</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:62(title)
msgid "Rebooting a Cloud Controller or Storage Proxy"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:64(para)
msgid "All in all, just issue the \"reboot\" command. The operating system cleanly shuts down services and then automatically reboots. If you want to be very thorough, run your backup jobs just before you reboot.<indexterm class=\"singular\"><primary>maintenance/debugging</primary><secondary>rebooting following</secondary></indexterm><indexterm class=\"singular\"><primary>storage</primary><secondary>storage proxy maintenance</secondary></indexterm><indexterm class=\"singular\"><primary>reboot</primary><secondary>cloud controller or storage proxy</secondary></indexterm><indexterm class=\"singular\"><primary>cloud controllers</primary><secondary>rebooting</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:89(title)
msgid "After a Cloud Controller or Storage Proxy Reboots"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:91(para)
msgid "After a cloud controller reboots, ensure that all required services were successfully started. The following commands use <code>ps</code> and <code>grep</code> to determine if nova, glance, and keystone are currently running:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:101(para)
msgid "Also check that all services are functioning. The following set of commands sources the <code>openrc</code> file, then runs some basic glance, nova, and keystone commands. If the commands work as expected, you can be confident that those services are in working condition:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:112(para)
msgid "For the storage proxy, ensure that the Object Storage service has resumed:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:117(para)
msgid "Also check that it is functioning:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:125(title)
msgid "Total Cloud Controller Failure"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:127(para)
msgid "The cloud controller could completely fail if, for example, its motherboard goes bad. Users will immediately notice the loss of a cloud controller since it provides core functionality to your cloud environment. If your infrastructure monitoring does not alert you that your cloud controller has failed, your users definitely will. Unfortunately, this is a rough situation. The cloud controller is an integral part of your cloud. If you have only one controller, you will have many missing services if it goes down.<indexterm class=\"singular\"><primary>cloud controllers</primary><secondary>total failure of</secondary></indexterm><indexterm class=\"singular\"><primary>maintenance/debugging</primary><secondary>cloud controller total failure</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:144(para)
msgid "To avoid this situation, create a highly available cloud controller cluster. This is outside the scope of this document, but you can read more in the draft <link href=\"http://docs.openstack.org/high-availability-guide/content/ch-intro.html\">OpenStack High Availability Guide</link>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:150(para)
msgid "The next best approach is to use a configuration-management tool, such as Puppet, to automatically build a cloud controller. This should not take more than 15 minutes if you have a spare server available. After the controller rebuilds, restore any backups taken (see <xref linkend=\"backup_and_recovery\"/>)."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:156(para)
msgid "Also, in practice, the <literal>nova-compute</literal> services on the compute nodes do not always reconnect cleanly to rabbitmq hosted on the controller when it comes back up after a long reboot; a restart on the nova services on the compute nodes is required."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:166(title)
msgid "Compute Node Failures and Maintenance"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:168(para)
msgid "Sometimes a compute node either crashes unexpectedly or requires a reboot for maintenance reasons."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:176(para)
msgid "If you need to reboot a compute node due to planned maintenance (such as a software or hardware upgrade), first ensure that all hosted instances have been moved off the node. If your cloud is utilizing shared storage, use the <code>nova live-migration</code> command. First, get a list of instances that need to be moved:<indexterm class=\"singular\"><primary>compute nodes</primary><secondary>maintenance</secondary></indexterm><indexterm class=\"singular\"><primary>maintenance/debugging</primary><secondary>compute node planned maintenance</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:193(para)
msgid "Next, migrate them one by one:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:197(para)
msgid "If you are not using shared storage, you can use the <code>--block-migrate</code> option:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:202(para)
msgid "After you have migrated all instances, ensure that the <code>nova-compute</code> service has <phrase role=\"keep-together\">stopped</phrase>:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:208(para)
msgid "If you use a configuration-management system, such as Puppet, that ensures the <code>nova-compute</code> service is always running, you can temporarily move the <literal>init</literal> files:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:216(para)
msgid "Next, shut down your compute node, perform your maintenance, and turn the node back on. You can reenable the <code>nova-compute</code> service by undoing the previous commands:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:223(para)
msgid "Then start the <code>nova-compute</code> service:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:227(para)
msgid "You can now optionally migrate the instances back to their original compute node."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:234(title)
msgid "After a Compute Node Reboots"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:236(para)
msgid "When you reboot a compute node, first verify that it booted successfully. This includes ensuring that the <code>nova-compute</code> service is running:<indexterm class=\"singular\"><primary>reboot</primary><secondary>compute node</secondary></indexterm><indexterm class=\"singular\"><primary>maintenance/debugging</primary><secondary>compute node reboot</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:251(para)
msgid "Also ensure that it has successfully connected to the AMQP server:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:257(para)
msgid "After the compute node is successfully running, you must deal with the instances that are hosted on that compute node because none of them are running. Depending on your SLA with your users or customers, you might have to start each instance and ensure that they start correctly."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:269(para)
msgid "You can create a list of instances that are hosted on the compute node by performing the following command:<indexterm class=\"singular\"><primary>instances</primary><secondary>maintenance/debugging</secondary></indexterm><indexterm class=\"singular\"><primary>maintenance/debugging</primary><secondary>instances</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:282(para)
msgid "After you have the list, you can use the nova command to start each instance:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:288(para)
msgid "Any time an instance shuts down unexpectedly, it might have problems on boot. For example, the instance might require an <code>fsck</code> on the root partition. If this happens, the user can use the dashboard VNC console to fix this."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:294(para)
msgid "If an instance does not boot, meaning <code>virsh list</code> never shows the instance as even attempting to boot, do the following on the compute node:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:300(para)
msgid "Try executing the <code>nova reboot</code> command again. You should see an error message about why the instance was not able to boot"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:304(para)
msgid "In most cases, the error is the result of something in libvirt's XML file (<code>/etc/libvirt/qemu/instance-xxxxxxxx.xml</code>) that no longer exists. You can enforce re-creation of the XML file as well as rebooting the instance by running the following command:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:315(title)
msgid "Inspecting and Recovering Data from Failed Instances"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:317(para)
msgid "In some scenarios, instances are running but are inaccessible through SSH and do not respond to any command. The VNC console could be displaying a boot failure or kernel panic error messages. This could be an indication of file system corruption on the VM itself. If you need to recover files or inspect the content of the instance, qemu-nbd can be used to mount the disk.<indexterm class=\"singular\"><primary>data</primary><secondary>inspecting/recovering failed instances</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:329(para)
msgid "If you access or view the user's content and data, get approval first!<indexterm class=\"singular\"><primary>security issues</primary><secondary>failed instance data inspection</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:337(para)
msgid "To access the instance's disk (<literal>/var/lib/nova/instances/instance-<replaceable>xxxxxx</replaceable>/disk</literal>), use the following steps:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:343(para)
msgid "Suspend the instance using the <literal>virsh</literal> command."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:348(para)
msgid "Connect the qemu-nbd device to the disk."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:352(para) ./doc/openstack-ops/ch_ops_maintenance.xml:412(para)
msgid "Mount the qemu-nbd device."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:356(para)
msgid "Unmount the device after inspecting."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:360(para)
msgid "Disconnect the qemu-nbd device."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:364(para)
msgid "Resume the instance."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:368(para)
msgid "If you do not follow steps 4 through 6, OpenStack Compute cannot manage the instance any longer. It fails to respond to any command issued by OpenStack Compute, and it is marked as shut down."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:372(para)
msgid "Once you mount the disk file, you should be able to access it and treat it as a collection of normal directories with files and a directory structure. However, we do not recommend that you edit or touch any files because this could change the access control lists (ACLs) that are used to determine which accounts can perform what operations on files and directories. Changing ACLs can make the instance unbootable if it is not already.<indexterm class=\"singular\"><primary>access control list (ACL)</primary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:384(para)
msgid "Suspend the instance using the <literal>virsh</literal> command, taking note of the internal ID:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:399(para)
msgid "Connect the qemu-nbd device to the disk:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:414(para)
msgid "The qemu-nbd device tries to export the instance disk's different partitions as separate devices. For example, if vda is the disk and vda1 is the root partition, qemu-nbd exports the device as <literal>/dev/nbd0</literal> and <literal>/dev/nbd0p1</literal>, respectively:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:422(para)
msgid "You can now access the contents of <code>/mnt</code>, which correspond to the first partition of the instance's disk."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:425(para)
msgid "To examine the secondary or ephemeral disk, use an alternate mount point if you want both primary and secondary drives mounted at the same time:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:458(para)
msgid "Once you have completed the inspection, unmount the mount point and release the qemu-nbd device:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:467(para)
msgid "Resume the instance using <literal>virsh</literal>:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:485(title)
msgid "Volumes"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:487(para)
msgid "If the affected instances also had attached volumes, first generate a list of instance and volume UUIDs:<indexterm class=\"singular\"><primary>volume</primary><secondary>maintenance/debugging</secondary></indexterm><indexterm class=\"singular\"><primary>maintenance/debugging</primary><secondary>volumes</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:505(para)
msgid "You should see a result similar to the following:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:515(para)
msgid "Next, manually detach and reattach the volumes, where X is the proper mount point:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:521(para)
msgid "Be sure that the instance has successfully booted and is at a login screen before doing the above."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:528(title)
msgid "Total Compute Node Failure"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:530(para)
msgid "Compute nodes can fail the same way a cloud controller can fail. A motherboard failure or some other type of hardware failure can cause an entire compute node to go offline. When this happens, all instances running on that compute node will not be available. Just like with a cloud controller failure, if your infrastructure monitoring does not detect a failed compute node, your users will notify you because of their lost instances.<indexterm class=\"singular\"><primary>compute nodes</primary><secondary>failures</secondary></indexterm><indexterm class=\"singular\"><primary>maintenance/debugging</primary><secondary>compute node total failures</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:546(para)
msgid "If a compute node fails and won't be fixed for a few hours (or at all), you can relaunch all instances that are hosted on the failed node if you use shared storage for <code>/var/lib/nova/instances</code>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:551(para)
msgid "To do this, generate a list of instance UUIDs that are hosted on the failed node by running the following query on the nova database:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:558(para)
msgid "Next, update the nova database to indicate that all instances that used to be hosted on c01.example.com are now hosted on c02.example.com:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:565(para)
msgid "After that, use the <literal>nova</literal> command to reboot all instances that were on c01.example.com while regenerating their XML files at the same time:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:571(para)
msgid "Finally, reattach volumes using the same method described in the section <link linkend=\"volumes\">Volumes</link>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:578(title)
msgid "/var/lib/nova/instances"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:580(para)
msgid "It's worth mentioning this directory in the context of failed compute nodes. This directory contains the libvirt KVM file-based disk images for the instances that are hosted on that compute node. If you are not running your cloud in a shared storage environment, this directory is unique across all compute nodes.<indexterm class=\"singular\"><primary>/var/lib/nova/instances directory</primary></indexterm><indexterm class=\"singular\"><primary>maintenance/debugging</primary><secondary>/var/lib/nova/instances</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:592(para)
msgid "<code>/var/lib/nova/instances</code> contains two types of directories."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:595(para)
msgid "The first is the <code>_base</code> directory. This contains all the cached base images from glance for each unique image that has been launched on that compute node. Files ending in <code>_20</code> (or a different number) are the ephemeral base images."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:600(para)
msgid "The other directories are titled <code>instance-xxxxxxxx</code>. These directories correspond to instances running on that compute node. The files inside are related to one of the files in the <code>_base</code> directory. They're essentially differential-based files containing only the changes made from the original <code>_base</code> directory."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:607(para)
msgid "All files and directories in <code>/var/lib/nova/instances</code> are uniquely named. The files in _base are uniquely titled for the glance image that they are based on, and the directory names <code>instance-xxxxxxxx</code> are uniquely titled for that particular instance. For example, if you copy all data from <code>/var/lib/nova/instances</code> on one compute node to another, you do not overwrite any files or cause any damage to images that have the same unique name, because they are essentially the same file."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:616(para)
msgid "Although this method is not documented or supported, you can use it when your compute node is permanently offline but you have instances locally stored on it."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:625(title)
msgid "Storage Node Failures and Maintenance"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:627(para)
msgid "Because of the high redundancy of Object Storage, dealing with object storage node issues is a lot easier than dealing with compute node issues."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:634(title)
msgid "Rebooting a Storage Node"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:636(para)
msgid "If a storage node requires a reboot, simply reboot it. Requests for data hosted on that node are redirected to other copies while the server is rebooting.<indexterm class=\"singular\"><primary>storage node</primary></indexterm><indexterm class=\"singular\"><primary>nodes</primary><secondary>storage nodes</secondary></indexterm><indexterm class=\"singular\"><primary>maintenance/debugging</primary><secondary>storage node reboot</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:654(title)
msgid "Shutting Down a Storage Node"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:656(para)
msgid "If you need to shut down a storage node for an extended period of time (one or more days), consider removing the node from the storage ring. For example:<indexterm class=\"singular\"><primary>maintenance/debugging</primary><secondary>storage node shut down</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:671(para)
msgid "Next, redistribute the ring files to the other nodes:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:678(para)
msgid "These actions effectively take the storage node out of the storage cluster."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:681(para)
msgid "When the node is able to rejoin the cluster, just add it back to the ring. The exact syntax you use to add a node to your swift cluster with <code>swift-ring-builder</code> heavily depends on the original options used when you originally created your cluster. Please refer back to those commands."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:691(title)
msgid "Replacing a Swift Disk"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:693(para)
msgid "If a hard drive fails in an Object Storage node, replacing it is relatively easy. This assumes that your Object Storage environment is configured correctly, where the data that is stored on the failed drive is also replicated to other drives in the Object Storage environment.<indexterm class=\"singular\"><primary>hard drives, replacing</primary></indexterm><indexterm class=\"singular\"><primary>maintenance/debugging</primary><secondary>swift disk replacement</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:705(para)
msgid "This example assumes that <code>/dev/sdb</code> has failed."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:707(para)
msgid "First, unmount the disk:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:711(para)
msgid "Next, physically remove the disk from the server and replace it with a working disk."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:714(para)
msgid "Ensure that the operating system has recognized the new disk:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:719(para)
msgid "You should see a message about <code>/dev/sdb</code>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:721(para)
msgid "Because it is recommended to not use partitions on a swift disk, simply format the disk as a whole:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:726(para)
msgid "Finally, mount the disk:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:730(para)
msgid "Swift should notice the new disk and that no data exists. It then begins replicating the data to the disk from the other existing replicas."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:739(title)
msgid "Handling a Complete Failure"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:741(para)
msgid "A common way of dealing with the recovery from a full system failure, such as a power outage of a data center, is to assign each service a priority, and restore in order. <xref linkend=\"restor-prior-table\"/> shows an example.<indexterm class=\"singular\"><primary>service restoration</primary></indexterm><indexterm class=\"singular\"><primary>maintenance/debugging</primary><secondary>complete failures</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:754(caption)
msgid "Example service restoration priority list"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:758(th)
msgid "Priority"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:760(th)
msgid "Services"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:768(para)
msgid "Internal network connectivity"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:774(para)
msgid "Backing storage services"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:778(para)
msgid "3"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:780(para)
msgid "Public network connectivity for user virtual machines"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:787(para)
msgid "<literal>nova-compute</literal>, <literal>nova-network</literal>, cinder hosts"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:792(para)
msgid "5"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:794(para)
msgid "User virtual machines"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:798(para)
msgid "10"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:800(para)
msgid "Message queue and database services"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:804(para)
msgid "15"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:806(para)
msgid "Keystone services"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:810(para)
msgid "20"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:812(literal)
msgid "cinder-scheduler"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:816(para)
msgid "21"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:818(para)
msgid "Image Catalog and Delivery services"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:822(para)
msgid "22"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:824(para)
msgid "<literal>nova-scheduler</literal> services"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:828(para)
msgid "98"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:830(literal)
msgid "cinder-api"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:834(para)
msgid "99"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:836(para)
msgid "<literal>nova-api</literal> services"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:840(para)
msgid "100"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:842(para)
msgid "Dashboard node"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:847(para)
msgid "Use this example priority list to ensure that user-affected services are restored as soon as possible, but not before a stable environment is in place. Of course, despite being listed as a single-line item, each step requires significant work. For example, just after starting the database, you should check its integrity, or, after starting the nova services, you should verify that the hypervisor matches the database and fix any <phrase role=\"keep-together\">mismatches</phrase>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:861(para)
msgid "Maintaining an OpenStack cloud requires that you manage multiple physical servers, and this number might grow over time. Because managing nodes manually is error prone, we strongly recommend that you use a configuration-management tool. These tools automate the process of ensuring that all your nodes are configured properly and encourage you to maintain your configuration information (such as packages and configuration options) in a version-controlled repository.<indexterm class=\"singular\"><primary>configuration management</primary></indexterm><indexterm class=\"singular\"><primary>networks</primary><secondary>configuration management</secondary></indexterm><indexterm class=\"singular\"><primary>maintenance/debugging</primary><secondary>configuration management</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:881(para)
msgid "Several configuration-management tools are available, and this guide does not recommend a specific one. The two most popular ones in the OpenStack community are <link href=\"https://puppetlabs.com/\">Puppet</link>, with available <link href=\"https://github.com/puppetlabs/puppetlabs-openstack\">OpenStack Puppet modules</link>; and <link href=\"http://www.getchef.com/chef/\">Chef</link>, with available <link href=\"https://github.com/opscode/openstack-chef-repo\">OpenStack Chef recipes</link>. Other newer configuration tools include <link href=\"https://juju.ubuntu.com/\">Juju</link>, <link href=\"http://www.ansible.com/home\">Ansible</link>, and <link href=\"http://www.saltstack.com/\">Salt</link>; and more mature configuration management tools include <link href=\"http://cfengine.com/\">CFEngine</link> and <link href=\"http://bcfg2.org/\">Bcfg2</link>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:902(title)
msgid "Working with Hardware"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:904(para)
msgid "As for your initial deployment, you should ensure that all hardware is appropriately burned in before adding it to production. Run software that uses the hardware to its limits—maxing out RAM, CPU, disk, and network. Many options are available, and normally double as benchmark software, so you also get a good idea of the performance of your system.<indexterm class=\"singular\"><primary>hardware</primary><secondary>maintenance/debugging</secondary></indexterm><indexterm class=\"singular\"><primary>maintenance/debugging</primary><secondary>hardware</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:922(title)
msgid "Adding a Compute Node"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:924(para)
msgid "If you find that you have reached or are reaching the capacity limit of your computing resources, you should plan to add additional compute nodes. Adding more nodes is quite easy. The process for adding compute nodes is the same as when the initial compute nodes were deployed to your cloud: use an automated deployment system to bootstrap the bare-metal server with the operating system and then have a configuration-management system install and configure OpenStack Compute. Once the Compute Service has been installed and configured in the same way as the other compute nodes, it automatically attaches itself to the cloud. The cloud controller notices the new node(s) and begins scheduling instances to launch there.<indexterm class=\"singular\"><primary>cloud controllers</primary><secondary>new compute nodes and</secondary></indexterm><indexterm class=\"singular\"><primary>nodes</primary><secondary>adding</secondary></indexterm><indexterm class=\"singular\"><primary>compute nodes</primary><secondary>adding</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:948(para)
msgid "If your OpenStack Block Storage nodes are separate from your compute nodes, the same procedure still applies because the same queuing and polling system is used in both services."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:952(para)
msgid "We recommend that you use the same hardware for new compute and block storage nodes. At the very least, ensure that the CPUs are similar in the compute nodes to not break live migration."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:960(title)
msgid "Adding an Object Storage Node"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:962(para)
msgid "Adding a new object storage node is different from adding compute or block storage nodes. You still want to initially configure the server by using your automated deployment and configuration-management systems. After that is done, you need to add the local disks of the object storage node into the object storage ring. The exact command to do this is the same command that was used to add the initial disks to the ring. Simply rerun this command on the object storage proxy server for all disks on the new object storage node. Once this has been done, rebalance the ring and copy the resulting ring files to the other storage nodes.<indexterm class=\"singular\"><primary>Object Storage</primary><secondary>adding nodes</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:978(para)
msgid "If your new object storage node has a different number of disks than the original nodes have, the command to add the new node is different from the original commands. These parameters vary from environment to environment."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:988(title)
msgid "Replacing Components"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:990(para)
msgid "Failures of hardware are common in large-scale deployments such as an infrastructure cloud. Consider your processes and balance time saving against availability. For example, an Object Storage cluster can easily live with dead disks in it for some period of time if it has sufficient capacity. Or, if your compute installation is not full, you could consider live migrating instances off a host with a RAM failure until you have time to deal with the problem."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1003(title) ./doc/openstack-ops/ch_arch_cloud_controller.xml:51(term)
msgid "Databases"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1005(para)
msgid "Almost all OpenStack components have an underlying database to store persistent information. Usually this database is MySQL. Normal MySQL administration is applicable to these databases. OpenStack does not configure the databases out of the ordinary. Basic administration includes performance tweaking, high availability, backup, recovery, and repairing. For more information, see a standard MySQL administration guide.<indexterm class=\"singular\"><primary>databases</primary><secondary>maintenance/debugging</secondary></indexterm><indexterm class=\"singular\"><primary>maintenance/debugging</primary><secondary>databases</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1021(para)
msgid "You can perform a couple of tricks with the database to either more quickly retrieve information or fix a data inconsistency error—for example, an instance was terminated, but the status was not updated in the database. These tricks are discussed throughout this book."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1029(title)
msgid "Database Connectivity"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1031(para)
msgid "Review the component's configuration file to see how each OpenStack component accesses its corresponding database. Look for either <code>sql_connection</code> or simply <code>connection</code>. The following command uses <code>grep</code> to display the SQL connection string for nova, glance, cinder, and keystone:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1037(emphasis)
msgid "grep -hE \"connection ?=\" /etc/nova/nova.conf /etc/glance/glance-*.conf /etc/cinder/cinder.conf /etc/keystone/keystone.conf"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1045(para)
msgid "The connection strings take this format:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1053(title)
msgid "Performance and Optimizing"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1055(para)
msgid "As your cloud grows, MySQL is utilized more and more. If you suspect that MySQL might be becoming a bottleneck, you should start researching MySQL optimization. The MySQL manual has an entire section dedicated to this topic: <link href=\"http://dev.mysql.com/doc/refman/5.5/en/optimize-overview.html\">Optimization Overview</link>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1067(title)
msgid "HDWMY"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1069(para)
msgid "Here's a quick list of various to-do items for each hour, day, week, month, and year. Please note that these tasks are neither required nor definitive but helpful ideas:<indexterm class=\"singular\"><primary>maintenance/debugging</primary><secondary>schedule of tasks</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1080(title)
msgid "Hourly"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1084(para)
msgid "Check your monitoring system for alerts and act on them."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1089(para)
msgid "Check your ticket queue for new tickets."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1097(title)
msgid "Daily"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1101(para)
msgid "Check for instances in a failed or weird state and investigate why."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1106(para)
msgid "Check for security patches and apply them as needed."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1114(title)
msgid "Weekly"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1120(para)
msgid "User quotas"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1124(para)
msgid "Disk space"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1128(para)
msgid "Image usage"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1132(para)
msgid "Large instances"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1136(para)
msgid "Network usage (bandwidth and IP usage)"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1118(para)
msgid "Check cloud usage: <placeholder-1/>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1142(para)
msgid "Verify your alert mechanisms are still working."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1150(title)
msgid "Monthly"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1154(para)
msgid "Check usage and trends over the past month."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1158(para)
msgid "Check for user accounts that should be removed."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1162(para)
msgid "Check for operator accounts that should be removed."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1170(title)
msgid "Quarterly"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1174(para)
msgid "Review usage and trends over the past quarter."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1178(para)
msgid "Prepare any quarterly reports on usage and statistics."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1182(para)
msgid "Review and plan any necessary cloud additions."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1186(para)
msgid "Review and plan any major OpenStack upgrades."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1194(title)
msgid "Semiannually"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1198(para)
msgid "Upgrade OpenStack."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1202(para)
msgid "Clean up after an OpenStack upgrade (any unused or new services to be aware of?)."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1212(title)
msgid "Determining Which Component Is Broken"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1214(para)
msgid "OpenStack's collection of different components interact with each other strongly. For example, uploading an image requires interaction from <code>nova-api</code>, <code>glance-api</code>, <code>glance-registry</code>, keystone, and potentially <code>swift-proxy</code>. As a result, it is sometimes difficult to determine exactly where problems lie. Assisting in this is the purpose of this section.<indexterm class=\"singular\"><primary>logging/monitoring</primary><secondary>tailing logs</secondary></indexterm><indexterm class=\"singular\"><primary>maintenance/debugging</primary><secondary>determining component affected</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1233(title)
msgid "Tailing Logs"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1235(para)
msgid "The first place to look is the log file related to the command you are trying to run. For example, if <code>nova list</code> is failing, try tailing a nova log file and running the command again:<indexterm class=\"singular\"><primary>tailing logs</primary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1242(para) ./doc/openstack-ops/ch_ops_maintenance.xml:1257(para)
msgid "Terminal 1:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1246(para) ./doc/openstack-ops/ch_ops_maintenance.xml:1261(para)
msgid "Terminal 2:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1250(para)
msgid "Look for any errors or traces in the log file. For more information, see <xref linkend=\"logging_monitoring\"/>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1253(para)
msgid "If the error indicates that the problem is with another component, switch to tailing that component's log file. For example, if nova cannot access glance, look at the <literal>glance-api</literal> log:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1265(para)
msgid "Wash, rinse, and repeat until you find the core cause of the problem."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1272(title)
msgid "Running Daemons on the CLI"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1274(para)
msgid "Unfortunately, sometimes the error is not apparent from the log files. In this case, switch tactics and use a different command; maybe run the service directly on the command line. For example, if the <code>glance-api</code> service refuses to start and stay running, try launching the daemon from the command line:<indexterm class=\"singular\"><primary>daemons</primary><secondary>running on CLI</secondary></indexterm><indexterm class=\"singular\"><primary>Command-line interface (CLI)</primary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1289(para)
msgid "The <literal>-H</literal> flag is required when running the daemons with sudo because some daemons will write files relative to the user's home directory, and this write may fail if <literal>-H</literal> is left off."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1288(para)
msgid "This might print the error and cause of the problem.<placeholder-1/>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1296(title)
msgid "Example of Complexity"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1298(para)
msgid "One morning, a compute node failed to run any instances. The log files were a bit vague, claiming that a certain instance was unable to be started. This ended up being a red herring because the instance was simply the first instance in alphabetical order, so it was the first instance that <literal>nova-compute</literal> would touch."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1304(para)
msgid "Further troubleshooting showed that libvirt was not running at all. This made more sense. If libvirt wasn't running, then no instance could be virtualized through KVM. Upon trying to start libvirt, it would silently die immediately. The libvirt logs did not explain why."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1310(para)
msgid "Next, the <code>libvirtd</code> daemon was run on the command line. Finally a helpful error message: it could not connect to d-bus. As ridiculous as it sounds, libvirt, and thus <code>nova-compute</code>, relies on d-bus and somehow d-bus crashed. Simply starting d-bus set the entire chain back on track, and soon everything was back up and running."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1325(title)
msgid "Uninstalling"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1327(para)
msgid "While we'd always recommend using your automated deployment system to reinstall systems from scratch, sometimes you do need to remove OpenStack from a system the hard way. Here's how:<indexterm class=\"singular\"><primary>uninstall operation</primary></indexterm><indexterm class=\"singular\"><primary>maintenance/debugging</primary><secondary>uninstalling</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1340(para)
msgid "Remove all packages."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1344(para)
msgid "Remove remaining files."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1348(para)
msgid "Remove databases."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1352(para)
msgid "These steps depend on your underlying distribution, but in general you should be looking for \"purge\" commands in your package manager, like <literal>aptitude purge ~c $package</literal>. Following this, you can look for orphaned files in the directories referenced throughout this guide. To uninstall the database properly, refer to the manual appropriate for the product in use.<indexterm class=\"endofrange\" startref=\"maindebug\"/>"
msgstr ""
#: ./doc/openstack-ops/acknowledgements.xml:19(title)
msgid "Acknowledgments"
msgstr ""
#: ./doc/openstack-ops/acknowledgements.xml:20(para)
msgid "The OpenStack Foundation supported the creation of this book with plane tickets to Austin, lodging (including one adventurous evening without power after a windstorm), and delicious food. For about USD $10,000, we could collaborate intensively for a week in the same room at the Rackspace Austin office. The authors are all members of the OpenStack Foundation, which you can join. Go to the <link href=\"https://www.openstack.org/join\">Foundation web site</link> at <uri>http://openstack.org/join</uri>."
msgstr ""
#: ./doc/openstack-ops/acknowledgements.xml:28(para)
msgid "We want to acknowledge our excellent host Rackers at Rackspace in Austin:"
msgstr ""
#: ./doc/openstack-ops/acknowledgements.xml:32(para)
msgid "Emma Richards of Rackspace Guest Relations took excellent care of our lunch orders and even set aside a pile of sticky notes that had fallen off the walls."
msgstr ""
#: ./doc/openstack-ops/acknowledgements.xml:37(para)
msgid "Betsy Hagemeier, a Fanatical Executive Assistant, took care of a room reshuffle and helped us settle in for the week."
msgstr ""
#: ./doc/openstack-ops/acknowledgements.xml:41(para)
msgid "The Real Estate team at Rackspace in Austin, also known as \"The Victors,\" were super responsive."
msgstr ""
#: ./doc/openstack-ops/acknowledgements.xml:45(para)
msgid "Adam Powell in Racker IT supplied us with bandwidth each day and second monitors for those of us needing more screens."
msgstr ""
#: ./doc/openstack-ops/acknowledgements.xml:49(para)
msgid "On Wednesday night we had a fun happy hour with the Austin OpenStack Meetup group and Racker Katie Schmidt took great care of our group."
msgstr ""
#: ./doc/openstack-ops/acknowledgements.xml:54(para)
msgid "We also had some excellent input from outside of the room:"
msgstr ""
#: ./doc/openstack-ops/acknowledgements.xml:57(para)
msgid "Tim Bell from CERN gave us feedback on the outline before we started and reviewed it mid-week."
msgstr ""
#: ./doc/openstack-ops/acknowledgements.xml:61(para)
msgid "Sébastien Han has written excellent blogs and generously gave his permission for re-use."
msgstr ""
#: ./doc/openstack-ops/acknowledgements.xml:65(para)
msgid "Oisin Feeley read it, made some edits, and provided emailed feedback right when we asked."
msgstr ""
#: ./doc/openstack-ops/acknowledgements.xml:69(para)
msgid "Inside the book sprint room with us each day was our book sprint facilitator Adam Hyde. Without his tireless support and encouragement, we would have thought a book of this scope was impossible in five days. Adam has proven the book sprint method effectively again and again. He creates both tools and faith in collaborative authoring at <link href=\"http://www.booksprints.net/\">www.booksprints.net</link>."
msgstr ""
#: ./doc/openstack-ops/acknowledgements.xml:77(para)
msgid "We couldn't have pulled it off without so much supportive help and encouragement."
msgstr ""
#. When image changes, this message will be marked fuzzy or untranslated for you.
#. It doesn't matter what you translate it to: it's not used at all.
#: ./doc/openstack-ops/ch_arch_provision.xml:156(None)
msgid "@@image: 'http://git.openstack.org/cgit/openstack/operations-guide/plain/doc/openstack-ops/figures/osog_0201.png'; md5=THIS FILE DOESN'T EXIST"
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:12(title)
msgid "Provisioning and Deployment"
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:14(para)
msgid "A critical part of a cloud's scalability is the amount of effort that it takes to run your cloud. To minimize the operational cost of running your cloud, set up and use an automated deployment and configuration infrastructure with a configuration management system, such as Puppet or Chef. Combined, these systems greatly reduce manual effort and the chance for operator error.<indexterm class=\"singular\"><primary>cloud computing</primary><secondary>minimizing costs of</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:25(para)
msgid "This infrastructure includes systems to automatically install the operating system's initial configuration and later coordinate the configuration of all services automatically and centrally, which reduces both manual effort and the chance for error. Examples include Ansible, Chef, Puppet, and Salt. You can even use OpenStack to deploy OpenStack, fondly named TripleO, for OpenStack On OpenStack.<indexterm class=\"singular\"><primary>Puppet</primary></indexterm><indexterm class=\"singular\"><primary>Chef</primary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:37(title)
msgid "Automated Deployment"
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:39(para)
msgid "An automated deployment system installs and configures operating systems on new servers, without intervention, after the absolute minimum amount of manual work, including physical racking, MAC-to-IP assignment, and power configuration. Typically, solutions rely on wrappers around PXE boot and TFTP servers for the basic operating system install and then hand off to an automated configuration management system.<indexterm class=\"singular\"><primary>deployment</primary><see>provisioning/deployment</see></indexterm><indexterm class=\"singular\"><primary>provisioning/deployment</primary><secondary>automated deployment</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:55(para)
msgid "Both Ubuntu and Red Hat Enterprise Linux include mechanisms for configuring the operating system, including preseed and kickstart, that you can use after a network boot. Typically, these are used to bootstrap an automated configuration system. Alternatively, you can use an image-based approach for deploying the operating system, such as systemimager. You can use both approaches with a virtualized infrastructure, such as when you run VMs to separate your control services and physical infrastructure."
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:64(para)
msgid "When you create a deployment plan, focus on a few vital areas because they are very hard to modify post deployment. The next two sections talk about configurations for:"
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:70(para)
msgid "Disk partitioning and disk array setup for scalability"
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:74(para)
msgid "Networking configuration just for PXE booting"
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:79(title)
msgid "Disk Partitioning and RAID"
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:81(para)
msgid "At the very base of any operating system are the hard drives on which the operating system (OS) is installed.<indexterm class=\"singular\"><primary>RAID (redundant array of independent disks)</primary></indexterm><indexterm class=\"singular\"><primary>partitions</primary><secondary>disk partitioning</secondary></indexterm><indexterm class=\"singular\"><primary>disk partitioning</primary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:92(para)
msgid "You must complete the following configurations on the server's hard drives:"
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:97(para)
msgid "Partitioning, which provides greater flexibility for layout of operating system and swap space, as described below."
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:102(para)
msgid "Adding to a RAID array (RAID stands for redundant array of independent disks), based on the number of disks you have available, so that you can add capacity as your cloud grows. Some options are described in more detail below."
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:109(para)
msgid "The simplest option to get started is to use one hard drive with two partitions:"
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:114(para)
msgid "File system to store files and directories, where all the data lives, including the root partition that starts and runs the system"
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:120(para)
msgid "Swap space to free up memory for processes, as an independent area of the physical disk used only for swapping and nothing else"
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:126(para)
msgid "RAID is not used in this simplistic one-drive setup because generally for production clouds, you want to ensure that if one disk fails, another can take its place. Instead, for production, use more than one disk. The number of disks determine what types of RAID arrays to build."
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:132(para)
msgid "We recommend that you choose one of the following multiple disk options:"
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:137(term)
msgid "Option 1"
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:140(para)
msgid "Partition all drives in the same way in a horizontal fashion, as shown in <xref linkend=\"disk_partition_figure\"/>."
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:144(para)
msgid "With this option, you can assign different partitions to different RAID arrays. You can allocate partition 1 of disk one and two to the <code>/boot</code> partition mirror. You can make partition 2 of all disks the root partition mirror. You can use partition 3 of all disks for a <code>cinder-volumes</code> LVM partition running on a RAID 10 array."
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:152(title)
msgid "Partition setup of drives"
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:161(para)
msgid "While you might end up with unused partitions, such as partition 1 in disk three and four of this example, this option allows for maximum utilization of disk space. I/O performance might be an issue as a result of all disks being used for all tasks."
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:170(term)
msgid "Option 2"
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:173(para)
msgid "Add all raw disks to one large RAID array, either hardware or software based. You can partition this large array with the boot, root, swap, and LVM areas. This option is simple to implement and uses all partitions. However, disk I/O might suffer."
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:182(term)
msgid "Option 3"
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:185(para)
msgid "Dedicate entire disks to certain partitions. For example, you could allocate disk one and two entirely to the boot, root, and swap partitions under a RAID 1 mirror. Then, allocate disk three and four entirely to the LVM partition, also under a RAID 1 mirror. Disk I/O should be better because I/O is focused on dedicated tasks. However, the LVM partition is much smaller."
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:197(para)
msgid "You may find that you can automate the partitioning itself. For example, MIT uses <link href=\"http://fai-project.org/\">Fully Automatic Installation (FAI)</link> to do the initial PXE-based partition and then install using a combination of min/max and percentage-based partitioning.<indexterm class=\"singular\"><primary>Fully Automatic Installation (FAI)</primary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:206(para)
msgid "As with most architecture choices, the right answer depends on your environment. If you are using existing hardware, you know the disk density of your servers and can determine some decisions based on the options above. If you are going through a procurement process, your user's requirements also help you determine hardware purchases. Here are some examples from a private cloud providing web developers custom environments at AT&amp;T. This example is from a specific deployment, so your existing hardware or procurement opportunity may vary from this. AT&amp;T uses three types of hardware in its deployment:"
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:218(para)
msgid "Hardware for controller nodes, used for all stateless OpenStack API services. About 3264 GB memory, small attached disk, one processor, varied number of cores, such as 612."
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:224(para)
msgid "Hardware for compute nodes. Typically 256 or 144 GB memory, two processors, 24 cores. 46 TB direct attached storage, typically in a RAID 5 configuration."
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:230(para)
msgid "Hardware for storage nodes. Typically for these, the disk space is optimized for the lowest cost per GB of storage while maintaining rack-space efficiency."
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:236(para)
msgid "Again, the right answer depends on your environment. You have to make your decision based on the trade-offs between space utilization, simplicity, and I/O performance."
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:242(title)
msgid "Network Configuration"
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:244(para)
msgid "Network configuration is a very large topic that spans multiple areas of this book. For now, make sure that your servers can PXE boot and successfully communicate with the deployment server.<indexterm class=\"singular\"><primary>networks</primary><secondary>configuration of</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:253(para)
msgid "For example, you usually cannot configure NICs for VLANs when PXE booting. Additionally, you usually cannot PXE boot with bonded NICs. If you run into this scenario, consider using a simple 1 GB switch in a private network on which only your cloud communicates."
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:261(title)
msgid "Automated Configuration"
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:263(para)
msgid "The purpose of automatic configuration management is to establish and maintain the consistency of a system without using human intervention. You want to maintain consistency in your deployments so that you can have the same cloud every time, repeatably. Proper use of automatic configuration-management tools ensures that components of the cloud systems are in particular states, in addition to simplifying deployment, and configuration change propagation.<indexterm class=\"singular\"><primary>automated configuration</primary></indexterm><indexterm class=\"singular\"><primary>provisioning/deployment</primary><secondary>automated configuration</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:277(para)
msgid "These tools also make it possible to test and roll back changes, as they are fully repeatable. Conveniently, a large body of work has been done by the OpenStack community in this space. Puppet, a configuration management tool, even provides official modules for OpenStack in an OpenStack infrastructure system known as <link href=\"https://github.com/stackforge/puppet-openstack\">Stackforge</link>. Chef configuration management is provided within <link role=\"orm:hideurl:ital\" href=\"https://github.com/stackforge/openstack-chef-repo\"/>. Additional configuration management systems include Juju, Ansible, and Salt. Also, PackStack is a command-line utility for Red Hat Enterprise Linux and derivatives that uses Puppet modules to support rapid deployment of OpenStack on existing servers over an SSH connection.<indexterm class=\"singular\"><primary>Stackforge</primary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:293(para)
msgid "An integral part of a configuration-management system is the items that it controls. You should carefully consider all of the items that you want, or do not want, to be automatically managed. For example, you may not want to automatically format hard drives with user data."
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:300(title)
msgid "Remote Management"
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:302(para)
msgid "In our experience, most operators don't sit right next to the servers running the cloud, and many don't necessarily enjoy visiting the data center. OpenStack should be entirely remotely configurable, but sometimes not everything goes according to plan.<indexterm class=\"singular\"><primary>provisioning/deployment</primary><secondary>remote management</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:312(para)
msgid "In this instance, having an out-of-band access into nodes running OpenStack components is a boon. The IPMI protocol is the de facto standard here, and acquiring hardware that supports it is highly recommended to achieve that lights-out data center aim."
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:317(para)
msgid "In addition, consider remote power control as well. While IPMI usually controls the server's power state, having remote access to the PDU that the server is plugged into can really be useful for situations when everything seems wedged."
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:324(title)
msgid "Parting Thoughts for Provisioning and Deploying OpenStack"
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:326(para)
msgid "You can save time by understanding the use cases for the cloud you want to create. Use cases for OpenStack are varied. Some include object storage only; others require preconfigured compute resources to speed development-environment set up; and others need fast provisioning of compute resources that are already secured per tenant with private networks. Your users may have need for highly redundant servers to make sure their legacy applications continue to run. Perhaps a goal would be to architect these legacy applications so that they run on multiple instances in a cloudy, fault-tolerant way, but not make it a goal to add to those clusters over time. Your users may indicate that they need scaling considerations because of heavy Windows server use.<indexterm class=\"singular\"><primary>provisioning/deployment</primary><secondary>tips for</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:343(para)
msgid "You can save resources by looking at the best fit for the hardware you have in place already. You might have some high-density storage hardware available. You could format and repurpose those servers for OpenStack Object Storage. All of these considerations and input from users help you build your use case and your deployment plan."
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:350(para)
msgid "For further research about OpenStack deployment, investigate the supported and documented preconfigured, prepackaged installers for OpenStack from companies such as <link href=\"http://www.ubuntu.com/cloud/ubuntu-openstack\">Canonical</link>, <link href=\"http://www.cisco.com/web/solutions/openstack/index.html\">Cisco</link>, <link href=\"http://www.cloudscaling.com/\">Cloudscaling</link>, <link href=\"http://www-03.ibm.com/software/products/en/smartcloud-orchestrator/\">IBM</link>, <link href=\"http://www.metacloud.com/\">Metacloud</link>, <link href=\"http://www.mirantis.com/\">Mirantis</link>, <link href=\"http://www.pistoncloud.com/\">Piston</link>, <link href=\"http://www.rackspace.com/cloud/private/\">Rackspace</link>, <link href=\"http://www.redhat.com/openstack/\">Red Hat</link>, <link href=\"https://www.suse.com/products/suse-cloud/\">SUSE</link>, and <link href=\"https://www.swiftstack.com/\">SwiftStack</link>."
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:370(para)
msgid "The decisions you make with respect to provisioning and deployment will affect your day-to-day, week-to-week, and month-to-month maintenance of the cloud. Your configuration management will be able to evolve over time. However, more thought and design need to be done for upfront choices about deployment, disk partitioning, and network configuration."
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:12(title)
msgid "Designing for Cloud Controllers and <phrase role=\"keep-together\">Cloud Management</phrase>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:15(para)
msgid "OpenStack is designed to be massively horizontally scalable, which allows all services to be distributed widely. However, to simplify this guide, we have decided to discuss services of a more central nature, using the concept of a <emphasis>cloud controller</emphasis>. A cloud controller is just a conceptual simplification. In the real world, you design an architecture for your cloud controller that enables high availability so that if any node fails, another can take over the required tasks. In reality, cloud controller tasks are spread out across more than a single node.<indexterm class=\"singular\"><primary>design considerations</primary><secondary>cloud controller services</secondary></indexterm><indexterm class=\"singular\"><primary>cloud controllers</primary><secondary>concept of</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:33(para)
msgid "The cloud controller provides the central management system for OpenStack deployments. Typically, the cloud controller manages authentication and sends messaging to all the systems through a message queue."
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:38(para)
msgid "For many deployments, the cloud controller is a single node. However, to have high availability, you have to take a few considerations into account, which we'll cover in this chapter."
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:42(para)
msgid "The cloud controller manages the following services for the cloud:<indexterm class=\"singular\"><primary>cloud controllers</primary><secondary>services managed by</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:54(para)
msgid "Tracks current information about users and instances, for example, in a database, typically one database instance managed per service"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:61(term)
msgid "Message queue services"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:64(para)
msgid "All AMQP—Advanced Message Queue Protocol—messages for services are received and sent according to the queue broker<indexterm class=\"singular\"><primary>Advanced Message Queuing Protocol (AMQP)</primary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:73(term)
msgid "Conductor services"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:76(para)
msgid "Proxy requests to a database"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:81(term)
msgid "Authentication and authorization for identity management"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:84(para)
msgid "Indicates which users can do what actions on certain cloud resources; quota management is spread out among services, however<indexterm class=\"singular\"><primary>authentication</primary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:93(term)
msgid "Image-management services"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:96(para)
msgid "Stores and serves images with metadata on each, for launching in the cloud"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:102(term)
msgid "Scheduling services"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:105(para)
msgid "Indicates which resources to use first; for example, spreading out where instances are launched based on an algorithm"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:111(term)
msgid "User dashboard"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:114(para)
msgid "Provides a web-based frontend for users to consume OpenStack cloud services"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:120(term)
msgid "API endpoints"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:123(para)
msgid "Offers each service's REST API access, where the API endpoint catalog is managed by the Identity Service"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:129(para)
msgid "For our example, the cloud controller has a collection of <code>nova-*</code> components that represent the global state of the cloud; talks to services such as authentication; maintains information about the cloud in a database; communicates to all compute nodes and storage <glossterm>worker</glossterm>s through a queue; and provides API access. Each service running on a designated cloud controller may be broken out into separate nodes for scalability or availability.<indexterm class=\"singular\"><primary>storage</primary><secondary>storage workers</secondary></indexterm><indexterm class=\"singular\"><primary>workers</primary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:143(para)
msgid "As another example, you could use pairs of servers for a collective cloud controller—one active, one standby—for redundant nodes providing a given set of related services, such as:"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:149(para)
msgid "Frontend web for API requests, the scheduler for choosing which compute node to boot an instance on, Identity services, and the dashboard"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:155(para)
msgid "Database and message queue server (such as MySQL, RabbitMQ)"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:159(para)
msgid "Image service for the image management"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:163(para)
msgid "Now that you see the myriad designs for controlling your cloud, read more about the further considerations to help with your design decisions."
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:168(title)
msgid "Hardware Considerations"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:170(para)
msgid "A cloud controller's hardware can be the same as a compute node, though you may want to further specify based on the size and type of cloud that you run.<indexterm class=\"singular\"><primary>hardware</primary><secondary>design considerations</secondary></indexterm><indexterm class=\"singular\"><primary>design considerations</primary><secondary>hardware considerations</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:182(para)
msgid "It's also possible to use virtual machines for all or some of the services that the cloud controller manages, such as the message queuing. In this guide, we assume that all services are running directly on the cloud controller."
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:187(para)
msgid "<xref linkend=\"controller-hardware-sizing\"/> contains common considerations to review when sizing hardware for the cloud controller design.<indexterm class=\"singular\"><primary>cloud controllers</primary><secondary>hardware sizing considerations</secondary></indexterm><indexterm class=\"singular\"><primary>Active Directory</primary></indexterm><indexterm class=\"singular\"><primary>dashboard</primary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:200(caption)
msgid "Cloud controller hardware sizing considerations"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:208(th)
msgid "Consideration"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:210(th)
msgid "Ramification"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:216(para)
msgid "How many instances will run at once?"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:218(para)
msgid "Size your database server accordingly, and scale out beyond one cloud controller if many instances will report status at the same time and scheduling where a new instance starts up needs computing power."
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:225(para)
msgid "How many compute nodes will run at once?"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:227(para)
msgid "Ensure that your messaging queue handles requests successfully and size accordingly."
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:232(para)
msgid "How many users will access the API?"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:234(para)
msgid "If many users will make multiple requests, make sure that the CPU load for the cloud controller can handle it."
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:239(para)
msgid "How many users will access the <glossterm>dashboard</glossterm> versus the REST API directly?"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:243(para)
msgid "The dashboard makes many requests, even more than the API access, so add even more CPU if your dashboard is the main interface for your users."
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:249(para)
msgid "How many <code>nova-api</code> services do you run at once for your cloud?"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:252(para)
msgid "You need to size the controller with a core per service."
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:257(para)
msgid "How long does a single instance run?"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:259(para)
msgid "Starting instances and deleting instances is demanding on the compute node but also demanding on the controller node because of all the API queries and scheduling needs."
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:265(para)
msgid "Does your authentication system also verify externally?"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:268(para)
msgid "External systems such as LDAP or <glossterm>Active Directory</glossterm> require network connectivity between the cloud controller and an external authentication system. Also ensure that the cloud controller has the CPU power to keep up with requests."
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:279(title)
msgid "Separation of Services"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:281(para)
msgid "While our example contains all central services in a single location, it is possible and indeed often a good idea to separate services onto different physical servers. <xref linkend=\"sep-services-table\"/> is a list of deployment scenarios we've seen and their justifications.<indexterm class=\"singular\"><primary>provisioning/deployment</primary><secondary>deployment scenarios</secondary></indexterm><indexterm class=\"singular\"><primary>services</primary><secondary>separation of</secondary></indexterm><indexterm class=\"singular\"><primary>separation of services</primary></indexterm><indexterm class=\"singular\"><primary>design considerations</primary><secondary>separation of services</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:302(caption)
msgid "Deployment scenarios"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:310(th)
msgid "Scenario"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:312(th)
msgid "Justification"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:318(para)
msgid "Run <code>glance-*</code> servers on the <code>swift-proxy</code> server."
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:321(para)
msgid "This deployment felt that the spare I/O on the Object Storage proxy server was sufficient and that the Image Delivery portion of glance benefited from being on physical hardware and having good connectivity to the Object Storage backend it was using."
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:329(para)
msgid "Run a central dedicated database server."
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:331(para)
msgid "This deployment used a central dedicated server to provide the databases for all services. This approach simplified operations by isolating database server updates and allowed for the simple creation of slave database servers for failover."
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:338(para)
msgid "Run one VM per service."
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:340(para)
msgid "This deployment ran central services on a set of servers running KVM. A dedicated VM was created for each service (<literal>nova-scheduler</literal>, rabbitmq, database, etc). This assisted the deployment with scaling because administrators could tune the resources given to each virtual machine based on the load it received (something that was not well understood during installation)."
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:350(para)
msgid "Use an external load balancer."
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:352(para)
msgid "This deployment had an expensive hardware load balancer in its organization. It ran multiple <code>nova-api</code> and <code>swift-proxy</code> servers on different physical servers and used the load balancer to switch between them."
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:360(para)
msgid "One choice that always comes up is whether to virtualize. Some services, such as <code>nova-compute</code>, <code>swift-proxy</code> and <code>swift-object</code> servers, should not be virtualized. However, control servers can often be happily virtualized—the performance penalty can usually be offset by simply running more of the service."
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:370(para)
msgid "OpenStack Compute uses a SQL database to store and retrieve stateful information. MySQL is the popular database choice in the OpenStack community.<indexterm class=\"singular\"><primary>databases</primary><secondary>design considerations</secondary></indexterm><indexterm class=\"singular\"><primary>design considerations</primary><secondary>database choice</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:382(para)
msgid "Loss of the database leads to errors. As a result, we recommend that you cluster your database to make it failure tolerant. Configuring and maintaining a database cluster is done outside OpenStack and is determined by the database software you choose to use in your cloud environment. MySQL/Galera is a popular option for MySQL-based databases."
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:390(title)
msgid "Message Queue"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:392(para)
msgid "Most OpenStack services communicate with each other using the <emphasis>message queue</emphasis>.<indexterm class=\"singular\"><primary>messages</primary><secondary>design considerations</secondary></indexterm><indexterm class=\"singular\"><primary>design considerations</primary><secondary>message queues</secondary></indexterm> For example, Compute communicates to block storage services and networking services through the message queue. Also, you can optionally enable notifications for any service. RabbitMQ, Qpid, and 0mq are all popular choices for a message-queue service. In general, if the message queue fails or becomes inaccessible, the cluster grinds to a halt and ends up in a read-only state, with information stuck at the point where the last message was sent. Accordingly, we recommend that you cluster the message queue. Be aware that clustered message queues can be a pain point for many OpenStack deployments. While RabbitMQ has native clustering support, there have been reports of issues when running it at a large scale. While other queuing solutions are available, such as 0mq and Qpid, 0mq does not offer stateful queues. Qpid is the <phrase role=\"keep-together\">messaging</phrase> system of choice for Red Hat and its derivatives. Qpid does not have native clustering capabilities and requires a supplemental service, such as Pacemaker or Corsync. For your message queue, you need to determine what level of data loss you are comfortable with and whether to use an OpenStack project's ability to retry multiple MQ hosts in the event of a failure, such as using Compute's ability to do so.<indexterm class=\"singular\"><primary>0mq</primary></indexterm><indexterm class=\"singular\"><primary>Qpid</primary></indexterm><indexterm class=\"singular\"><primary>RabbitMQ</primary></indexterm><indexterm class=\"singular\"><primary>message queue</primary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:431(title)
msgid "Conductor Services"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:433(para)
msgid "In the previous version of OpenStack, all <literal>nova-compute</literal> services required direct access to the database hosted on the cloud controller. This was problematic for two reasons: security and performance. With regard to security, if a compute node is compromised, the attacker inherently has access to the database. With regard to performance, <literal>nova-compute</literal> calls to the database are single-threaded and blocking. This creates a performance bottleneck because database requests are fulfilled serially rather than in parallel.<indexterm class=\"singular\"><primary>conductors</primary></indexterm><indexterm class=\"singular\"><primary>design considerations</primary><secondary>conductor services</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:449(para)
msgid "The conductor service resolves both of these issues by acting as a proxy for the <literal>nova-compute</literal> service. Now, instead of <literal>nova-compute</literal> directly accessing the database, it contacts the <literal>nova-conductor</literal> service, and <literal>nova-conductor</literal> accesses the database on <literal>nova-compute</literal>'s behalf. Since <literal>nova-compute</literal> no longer has direct access to the database, the security issue is resolved. Additionally, <literal>nova-conductor</literal> is a nonblocking service, so requests from all compute nodes are fulfilled in parallel."
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:461(para)
msgid "If you are using <literal>nova-network</literal> and multi-host networking in your cloud environment, <literal>nova-compute</literal> still requires direct access to the database.<indexterm class=\"singular\"><primary>multi-host networking</primary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:468(para)
msgid "The <literal>nova-conductor</literal> service is horizontally scalable. To make <literal>nova-conductor</literal> highly available and fault tolerant, just launch more instances of the <code>nova-conductor</code> process, either on the same server or across multiple servers."
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:476(title)
msgid "Application Programming Interface (API)"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:478(para)
msgid "All public access, whether direct, through a command-line client, or through the web-based dashboard, uses the API service. Find the API reference at <link href=\"http://api.openstack.org/\"/>.<indexterm class=\"singular\"><primary>API (application programming interface)</primary><secondary>design considerations</secondary></indexterm><indexterm class=\"singular\"><primary>design considerations</primary><secondary>API support</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:491(para)
msgid "You must choose whether you want to support the Amazon EC2 compatibility APIs, or just the OpenStack APIs. One issue you might encounter when running both APIs is an inconsistent experience when referring to images and instances."
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:496(para)
msgid "For example, the EC2 API refers to instances using IDs that contain hexadecimal, whereas the OpenStack API uses names and digits. Similarly, the EC2 API tends to rely on DNS aliases for contacting virtual machines, as opposed to OpenStack, which typically lists IP addresses.<indexterm class=\"singular\"><primary>DNS (Domain Name Server, Service or System)</primary><secondary>DNS aliases</secondary></indexterm><indexterm class=\"singular\"><primary>troubleshooting</primary><secondary>DNS issues</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:510(para)
msgid "If OpenStack is not set up in the right way, it is simple to have scenarios in which users are unable to contact their instances due to having only an incorrect DNS alias. Despite this, EC2 compatibility can assist users migrating to your cloud."
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:515(para)
msgid "As with databases and message queues, having more than one <glossterm>API server</glossterm> is a good thing. Traditional HTTP load-balancing techniques can be used to achieve a highly available <code>nova-api</code> service.<indexterm class=\"singular\"><primary>API (application programming interface)</primary><secondary>API server</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:526(title)
msgid "Extensions"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:528(para)
msgid "The <link href=\"http://docs.openstack.org/api/api-specs.html\" title=\"API Specifications\">API Specifications</link> define the core actions, capabilities, and mediatypes of the OpenStack API. A client can always depend on the availability of this core API, and implementers are always required to support it in its <phrase role=\"keep-together\">entirety</phrase>. Requiring strict adherence to the core API allows clients to rely upon a minimal level of functionality when interacting with multiple implementations of the same API.<indexterm class=\"singular\"><primary>extensions</primary><secondary>design considerations</secondary></indexterm><indexterm class=\"singular\"><primary>design considerations</primary><secondary>extensions</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:546(para)
msgid "The OpenStack Compute API is extensible. An extension adds capabilities to an API beyond those defined in the core. The introduction of new features, MIME types, actions, states, headers, parameters, and resources can all be accomplished by means of extensions to the core API. This allows the introduction of new features in the API without requiring a version change and allows the introduction of vendor-specific niche functionality."
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:556(title)
msgid "Scheduling"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:558(para)
msgid "The scheduling services are responsible for determining the compute or storage node where a virtual machine or block storage volume should be created. The scheduling services receive creation requests for these resources from the message queue and then begin the process of determining the appropriate node where the resource should reside. This process is done by applying a series of user-configurable filters against the available collection of nodes.<indexterm class=\"singular\"><primary>schedulers</primary><secondary>design considerations</secondary></indexterm><indexterm class=\"singular\"><primary>design considerations</primary><secondary>scheduling</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:574(para)
msgid "There are currently two schedulers: <literal>nova-scheduler</literal> for virtual machines and <literal>cinder-scheduler</literal> for block storage volumes. Both schedulers are able to scale horizontally, so for high-availability purposes, or for very large or high-schedule-frequency installations, you should consider running multiple instances of each scheduler. The schedulers all listen to the shared message queue, so no special load balancing is required."
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:587(para)
msgid "The OpenStack Image service consists of two parts: <code>glance-api</code> and <code>glance-registry</code>. The former is responsible for the delivery of images; the compute node uses it to download images from the backend. The latter maintains the metadata information associated with virtual machine images and requires a database.<indexterm class=\"singular\"><primary>glance</primary><secondary>glance registry</secondary></indexterm><indexterm class=\"singular\"><primary>glance</primary><secondary>glance API server</secondary></indexterm><indexterm class=\"singular\"><primary>metadata</primary><secondary>OpenStack Image service and</secondary></indexterm><indexterm class=\"singular\"><primary>Image service</primary><secondary>design considerations</secondary></indexterm><indexterm class=\"singular\"><primary>design considerations</primary><secondary>images</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:614(para)
msgid "The <code>glance-api</code> part is an abstraction layer that allows a choice of backend. Currently, it supports:"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:619(term)
msgid "OpenStack Object Storage"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:622(para)
msgid "Allows you to store images as objects."
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:627(term)
msgid "File system"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:630(para)
msgid "Uses any traditional file system to store the images as files."
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:636(term)
msgid "S3"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:639(para)
msgid "Allows you to fetch images from Amazon S3."
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:644(term)
msgid "HTTP"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:647(para)
msgid "Allows you to fetch images from a web server. You cannot write images by using this mode."
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:653(para)
msgid "If you have an OpenStack Object Storage service, we recommend using this as a scalable place to store your images. You can also use a file system with sufficient performance or Amazon S3—unless you do not need the ability to upload new images through OpenStack."
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:662(para)
msgid "The OpenStack dashboard (horizon) provides a web-based user interface to the various OpenStack components. The dashboard includes an end-user area for users to manage their virtual infrastructure and an admin area for cloud operators to manage the OpenStack environment as a whole.<indexterm class=\"singular\"><primary>dashboard</primary></indexterm><indexterm class=\"singular\"><primary>design considerations</primary><secondary>dashboard</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:674(para)
msgid "The dashboard is implemented as a Python web application that normally runs in <glossterm>Apache</glossterm><code>httpd</code>. Therefore, you may treat it the same as any other web application, provided it can reach the API servers (including their admin endpoints) over the <phrase role=\"keep-together\">network</phrase>.<indexterm class=\"singular\"><primary>Apache</primary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:685(title)
msgid "Authentication and Authorization"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:687(para)
msgid "The concepts supporting OpenStack's authentication and authorization are derived from well-understood and widely used systems of a similar nature. Users have credentials they can use to authenticate, and they can be a member of one or more groups (known as projects or tenants, interchangeably).<indexterm class=\"singular\"><primary>credentials</primary></indexterm><indexterm class=\"singular\"><primary>authorization</primary></indexterm><indexterm class=\"singular\"><primary>authentication</primary></indexterm><indexterm class=\"singular\"><primary>design considerations</primary><secondary>authentication/authorization</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:703(para)
msgid "For example, a cloud administrator might be able to list all instances in the cloud, whereas a user can see only those in his current group. Resources quotas, such as the number of cores that can be used, disk space, and so on, are associated with a project."
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:708(para)
msgid "The OpenStack Identity Service (keystone) is the point that provides the authentication decisions and user attribute information, which is then used by the other OpenStack services to perform authorization. Policy is set in the <filename>policy.json</filename> file. For <phrase role=\"keep-together\">information</phrase> on how to configure these, see <xref linkend=\"projects_users\"/>.<indexterm class=\"singular\"><primary>Identity Service</primary><secondary>authentication decisions</secondary></indexterm><indexterm class=\"singular\"><primary>Identity Service</primary><secondary>plug-in support</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:723(para)
msgid "The Identity Service supports different plug-ins for authentication decisions and identity storage. Examples of these plug-ins include:"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:728(para)
msgid "In-memory key-value Store (a simplified internal storage structure)"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:733(para)
msgid "SQL database (such as MySQL or PostgreSQL)"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:737(para)
msgid "Memcached (a distributed memory object caching system)"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:741(para)
msgid "LDAP (such as OpenLDAP or Microsoft's Active Directory)"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:745(para)
msgid "Many deployments use the SQL database; however, LDAP is also a popular choice for those with existing authentication infrastructure that needs to be integrated."
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:751(title)
msgid "Network Considerations"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:753(para)
msgid "Because the cloud controller handles so many different services, it must be able to handle the amount of traffic that hits it. For example, if you choose to host the OpenStack Imaging Service on the cloud controller, the cloud controller should be able to support the transferring of the images at an acceptable speed.<indexterm class=\"singular\"><primary>cloud controllers</primary><secondary>network traffic and</secondary></indexterm><indexterm class=\"singular\"><primary>networks</primary><secondary>design considerations</secondary></indexterm><indexterm class=\"singular\"><primary>design considerations</primary><secondary>networks</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:771(para)
msgid "As another example, if you choose to use single-host networking where the cloud controller is the network gateway for all instances, then the cloud controller must support the total amount of traffic that travels between your cloud and the public Internet."
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:776(para)
msgid "We recommend that you use a fast NIC, such as 10 GB. You can also choose to use two 10 GB NICs and bond them together. While you might not be able to get a full bonded 20 GB speed, different transmission streams use different NICs. For example, if the cloud controller transfers two images, each image uses a different NIC and gets a full 10 GB of bandwidth.<indexterm class=\"singular\"><primary>bandwidth</primary><secondary>design considerations for</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:12(title)
msgid "Logging and Monitoring"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:14(para)
msgid "As an OpenStack cloud is composed of so many different services, there are a large number of log files. This chapter aims to assist you in locating and working with them and describes other ways to track the status of your deployment.<indexterm class=\"singular\"><primary>debugging</primary><see>logging/monitoring; maintenance/debugging</see></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:24(title)
msgid "Where Are the Logs?"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:26(para)
msgid "Most services use the convention of writing their log files to subdirectories of the <code>/var/log directory</code>, as listed in <xref linkend=\"openstack-log-locations\"/>.<indexterm class=\"singular\"><primary>cloud controllers</primary><secondary>log information</secondary></indexterm><indexterm class=\"singular\"><primary>logging/monitoring</primary><secondary>log location</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:39(caption)
msgid "OpenStack log locations"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:43(th)
msgid "Node type"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:45(th)
msgid "Service"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:47(th)
msgid "Log location"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:53(para) ./doc/openstack-ops/ch_ops_log_monitor.xml:61(para) ./doc/openstack-ops/ch_ops_log_monitor.xml:69(para) ./doc/openstack-ops/ch_ops_log_monitor.xml:77(para) ./doc/openstack-ops/ch_ops_log_monitor.xml:85(para) ./doc/openstack-ops/ch_ops_log_monitor.xml:93(para)
msgid "Cloud controller"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:55(code)
msgid "nova-*"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:57(code)
msgid "/var/log/nova"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:63(code)
msgid "glance-*"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:65(code)
msgid "/var/log/glance"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:71(code)
msgid "cinder-*"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:73(code)
msgid "/var/log/cinder"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:79(code)
msgid "keystone-*"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:81(code)
msgid "/var/log/keystone"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:87(code)
msgid "neutron-*"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:89(code)
msgid "/var/log/neutron"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:97(code)
msgid "/var/log/apache2/"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:101(para)
msgid "All nodes"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:103(para)
msgid "misc (swift, dnsmasq)"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:105(code)
msgid "/var/log/syslog"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:111(para)
msgid "libvirt"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:113(code)
msgid "/var/log/libvirt/libvirtd.log"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:119(para)
msgid "Console (boot up messages) for VM instances:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:121(code)
msgid "/var/lib/nova/instances/instance-"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:122(code)
msgid "&lt;instance id&gt;/console.log"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:127(para)
msgid "Block Storage nodes"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:129(para)
msgid "cinder-volume"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:131(code)
msgid "/var/log/cinder/cinder-volume.log"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:139(title)
msgid "Reading the Logs"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:141(para)
msgid "OpenStack services use the standard logging levels, at increasing severity: DEBUG, INFO, AUDIT, WARNING, ERROR, CRITICAL, and TRACE. That is, messages only appear in the logs if they are more \"severe\" than the particular log level, with DEBUG allowing all log statements through. For example, TRACE is logged only if the software has a stack trace, while INFO is logged for every message including those that are only for information.<indexterm class=\"singular\"><primary>logging/monitoring</primary><secondary>logging levels</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:153(para)
msgid "To disable DEBUG-level logging, edit <filename>/etc/nova/nova.conf</filename> as follows:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:158(para)
msgid "Keystone is handled a little differently. To modify the logging level, edit the <filename>/etc/keystone/logging.conf</filename> file and look at the <code>logger_root</code> and <code>handler_file</code> sections."
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:163(para)
msgid "<phrase role=\"keep-together\">Logging for horizon is configured in <filename>/etc/openstack_dashboard/local_</filename></phrase><filename>settings.py</filename>. Because horizon is a Django web application, it follows the <link href=\"https://docs.djangoproject.com/en/dev/topics/logging/\" title=\"Django Logging\">Django Logging framework conventions</link>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:169(para)
msgid "The first step in finding the source of an error is typically to search for a CRITICAL, TRACE, or ERROR message in the log starting at the bottom of the log file.<indexterm class=\"singular\"><primary>logging/monitoring</primary><secondary>reading log messages</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:177(para)
msgid "Here is an example of a CRITICAL log message, with the corresponding TRACE (Python traceback) immediately following:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:211(para)
msgid "In this example, <literal>cinder-volumes</literal> failed to start and has provided a stack trace, since its volume backend has been unable to set up the storage volume—probably because the LVM volume that is expected from the configuration does not exist."
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:216(para)
msgid "Here is an example error log:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:221(para)
msgid "In this error, a nova service has failed to connect to the RabbitMQ server because it got a connection refused error."
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:226(title)
msgid "Tracing Instance Requests"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:228(para)
msgid "When an instance fails to behave properly, you will often have to trace activity associated with that instance across the log files of various <code>nova-*</code> services and across both the cloud controller and compute nodes.<indexterm class=\"singular\"><primary>instances</primary><secondary>tracing instance requests</secondary></indexterm><indexterm class=\"singular\"><primary>logging/monitoring</primary><secondary>tracing instance requests</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:241(para)
msgid "The typical way is to trace the UUID associated with an instance across the service logs."
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:244(para)
msgid "Consider the following example:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:253(para)
msgid "Here, the ID associated with the instance is <code>faf7ded8-4a46-413b-b113-f19590746ffe</code>. If you search for this string on the cloud controller in the <filename>/var/log/nova-*.log</filename> files, it appears in <filename>nova-api.log</filename> and <filename>nova-scheduler.log</filename>. If you search for this on the compute nodes in <filename>/var/log/nova-*.log</filename>, it appears in <filename>nova-network.log</filename> and <filename>nova-compute.log</filename>. If no ERROR or CRITICAL messages appear, the most recent log entry that reports this may provide a hint about what has gone wrong."
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:267(title)
msgid "Adding Custom Logging Statements"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:269(para)
msgid "If there is not enough information in the existing logs, you may need to add your own custom logging statements to the <code>nova-*</code> services.<indexterm class=\"singular\"><primary>customization</primary><secondary>custom log statements</secondary></indexterm><indexterm class=\"singular\"><primary>logging/monitoring</primary><secondary>adding custom log statements</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:281(para)
msgid "The source files are located in <filename>/usr/lib/python2.7/dist-packages/nova</filename>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:284(para)
msgid "To add logging statements, the following line should be near the top of the file. For most files, these should already be there:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:290(para)
msgid "To add a DEBUG logging statement, you would do:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:294(para)
msgid "You may notice that all the existing logging messages are preceded by an underscore and surrounded by parentheses, for example:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:299(para)
msgid "This formatting is used to support translation of logging messages into different languages using the <link href=\"https://docs.python.org/2/library/gettext.html\">gettext</link> internationalization library. You don't need to do this for your own custom log messages. However, if you want to contribute the code back to the OpenStack project that includes logging statements, you must surround your log messages with underscores and parentheses."
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:309(title)
msgid "RabbitMQ Web Management Interface or rabbitmqctl"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:311(para)
msgid "Aside from connection failures, RabbitMQ log files are generally not useful for debugging OpenStack related issues. Instead, we recommend you use the RabbitMQ web management interface.<indexterm class=\"singular\"><primary>RabbitMQ</primary></indexterm><indexterm class=\"singular\"><primary>logging/monitoring</primary><secondary>RabbitMQ web management interface</secondary></indexterm> Enable it on your cloud controller:<indexterm class=\"singular\"><primary>cloud controllers</primary><secondary>enabling RabbitMQ</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:330(para)
msgid "The RabbitMQ web management interface is accessible on your cloud controller at <emphasis>http://localhost:55672</emphasis>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:334(para)
msgid "Ubuntu 12.04 installs RabbitMQ version 2.7.1, which uses port 55672. RabbitMQ versions 3.0 and above use port 15672 instead. You can check which version of RabbitMQ you have running on your local Ubuntu machine by doing:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:343(para)
msgid "An alternative to enabling the RabbitMQ web management interface is to use the <phrase role=\"keep-together\"><literal>rabbitmqctl</literal></phrase> commands. For example, <literal>rabbitmqctl list_queues| grep cinder</literal> displays any messages left in the queue. If there are messages, it's a possible sign that cinder services didn't connect properly to rabbitmq and might have to be restarted."
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:350(para)
msgid "Items to monitor for RabbitMQ include the number of items in each of the queues and the processing time statistics for the server."
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:355(title)
msgid "Centrally Managing Logs"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:357(para)
msgid "Because your cloud is most likely composed of many servers, you must check logs on each of those servers to properly piece an event together. A better solution is to send the logs of all servers to a central location so that they can all be accessed from the same area.<indexterm class=\"singular\"><primary>logging/monitoring</primary><secondary>central log management</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:367(para)
msgid "Ubuntu uses rsyslog as the default logging service. Since it is natively able to send logs to a remote location, you don't have to install anything extra to enable this feature, just modify the configuration file. In doing this, consider running your logging over a management network or using an encrypted VPN to avoid interception."
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:374(title)
msgid "rsyslog Client Configuration"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:376(para)
msgid "To begin, configure all OpenStack components to log to syslog in addition to their standard log file location. Also configure each component to log to a different syslog facility. This makes it easier to split the logs into individual components on the central server:<indexterm class=\"singular\"><primary>rsyslog</primary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:384(para)
msgid "<filename>nova.conf</filename>:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:389(para)
msgid "<filename>glance-api.conf</filename> and <filename>glance-registry.conf</filename>:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:395(para)
msgid "<filename>cinder.conf</filename>:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:400(para)
msgid "<filename>keystone.conf</filename>:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:405(para)
msgid "By default, Object Storage logs to syslog."
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:407(para)
msgid "Next, create <filename>/etc/rsyslog.d/client.conf</filename> with the following line:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:412(para)
msgid "This instructs rsyslog to send all logs to the IP listed. In this example, the IP points to the cloud controller."
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:417(title)
msgid "rsyslog Server Configuration"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:419(para)
msgid "Designate a server as the central logging server. The best practice is to choose a server that is solely dedicated to this purpose. Create a file called <filename>/etc/rsyslog.d/server.conf</filename> with the following contents:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:444(para)
msgid "This example configuration handles the nova service only. It first configures rsyslog to act as a server that runs on port 514. Next, it creates a series of logging templates. Logging templates control where received logs are stored. Using the last example, a nova log from c01.example.com goes to the following locations:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:452(filename)
msgid "/var/log/rsyslog/c01.example.com/nova.log"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:456(filename) ./doc/openstack-ops/ch_ops_log_monitor.xml:468(filename)
msgid "/var/log/rsyslog/nova.log"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:460(para)
msgid "This is useful, as logs from c02.example.com go to:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:464(filename)
msgid "/var/log/rsyslog/c02.example.com/nova.log"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:472(para)
msgid "You have an individual log file for each compute node as well as an aggregated log that contains nova logs from all nodes."
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:478(title)
msgid "Monitoring"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:480(para)
msgid "There are two types of monitoring: watching for problems and watching usage trends. The former ensures that all services are up and running, creating a functional cloud. The latter involves monitoring resource usage over time in order to make informed decisions about potential bottlenecks and upgrades.<indexterm class=\"singular\"><primary>cloud controllers</primary><secondary>process monitoring and</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:493(title)
msgid "Nagios"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:495(para)
msgid "Nagios is an open source monitoring service. It's capable of executing arbitrary commands to check the status of server and network services, remotely executing arbitrary commands directly on servers, and allowing servers to push notifications back in the form of passive monitoring. Nagios has been around since 1999. Although newer monitoring services are available, Nagios is a tried-and-true systems administration staple.<indexterm class=\"singular\"><primary>Nagios</primary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:507(title)
msgid "Process Monitoring"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:509(para)
msgid "A basic type of alert monitoring is to simply check and see whether a required process is running.<indexterm class=\"singular\"><primary>monitoring</primary><secondary>process monitoring</secondary></indexterm><indexterm class=\"singular\"><primary>process monitoring</primary></indexterm><indexterm class=\"singular\"><primary>logging/monitoring</primary><secondary>process monitoring</secondary></indexterm> For example, ensure that the <code>nova-api</code> service is running on the cloud controller:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:536(para)
msgid "You can create automated alerts for critical processes by using Nagios and NRPE. For example, to ensure that the <code>nova-compute</code> process is running on compute nodes, create an alert on your Nagios server that looks like this:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:550(para)
msgid "Then on the actual compute node, create the following NRPE configuration:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:556(para)
msgid "Nagios checks that at least one <literal>nova-compute</literal> service is running at all times."
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:561(title)
msgid "Resource Alerting"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:563(para)
msgid "Resource alerting provides notifications when one or more resources are critically low. While the monitoring thresholds should be tuned to your specific OpenStack environment, monitoring resource usage is not specific to OpenStack at all—any generic type of alert will work fine.<indexterm class=\"singular\"><primary>monitoring</primary><secondary>resource alerting</secondary></indexterm><indexterm class=\"singular\"><primary>alerts</primary><secondary>resource</secondary></indexterm><indexterm class=\"singular\"><primary>resources</primary><secondary>resource alerting</secondary></indexterm><indexterm class=\"singular\"><primary>logging/monitoring</primary><secondary>resource alerting</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:585(para)
msgid "Some of the resources that you want to monitor include:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:589(para)
msgid "Disk usage"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:593(para)
msgid "Server load"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:597(para)
msgid "Memory usage"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:601(para)
msgid "Network I/O"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:605(para)
msgid "Available vCPUs"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:609(para)
msgid "For example, to monitor disk capacity on a compute node with Nagios, add the following to your Nagios configuration:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:620(para)
msgid "On the compute node, add the following to your NRPE configuration:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:626(para)
msgid "Nagios alerts you with a WARNING when any disk on the compute node is 80 percent full and CRITICAL when 90 percent is full."
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:631(title)
msgid "StackTach"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:633(para)
msgid "StackTach is a tool created by Rackspace to collect and report the notifications sent by <code>nova</code>. Notifications are essentially the same as logs but can be much more detailed. A good overview of notifications can be found at <link href=\"https://wiki.openstack.org/wiki/SystemUsageData\" title=\"System Usage Data\">System Usage Data</link>.<indexterm class=\"singular\"><primary>StackTach</primary></indexterm><indexterm class=\"singular\"><primary>logging/monitoring</primary><secondary>StackTack tool</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:646(para)
msgid "To enable <code>nova</code> to send notifications, add the following to <filename>nova.conf</filename>:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:652(para)
msgid "Once <code>nova</code> is sending notifications, install and configure StackTach. Since StackTach is relatively new and constantly changing, installation instructions would quickly become outdated. Please refer to the <link href=\"https://github.com/stackforge/stacktach\">StackTach GitHub repo</link> for instructions as well as a demo video. Additional details on the latest developments can be discovered at the <link href=\"http://stacktach.com/\">official page</link>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:662(title)
msgid "OpenStack Telemetry"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:664(para)
msgid "An integrated OpenStack project (code-named ceilometer) collects metering and event data relating to OpenStack services. Data collected by the Telemetry module could be used for billing. Depending on deployment configuration, collected data may be accessible to users based on the deployment configuration. The Telemetry service provides a REST API documented at <link href=\"http://developer.openstack.org/api-ref-telemetry-v2.html\"/>. You can read more about the module in the <link href=\"http://docs.openstack.org/admin-guide-cloud/content/ch_admin-openstack-telemetry.html\"> OpenStack Cloud Administrator Guide</link> or in the <link href=\"http://docs.openstack.org/developer/ceilometer\">developer documentation</link>.<indexterm class=\"singular\"><primary>monitoring</primary><secondary>metering and telemetry</secondary></indexterm><indexterm class=\"singular\"><primary>telemetry/metering</primary></indexterm><indexterm class=\"singular\"><primary>metering/telemetry</primary></indexterm><indexterm class=\"singular\"><primary>ceilometer</primary></indexterm><indexterm class=\"singular\"><primary>logging/monitoring</primary><secondary>ceilometer project</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:691(title)
msgid "OpenStack-Specific Resources"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:693(para)
msgid "Resources such as memory, disk, and CPU are generic resources that all servers (even non-OpenStack servers) have and are important to the overall health of the server. When dealing with OpenStack specifically, these resources are important for a second reason: ensuring that enough are available to launch instances. There are a few ways you can see OpenStack resource usage.<indexterm class=\"singular\"><primary>monitoring</primary><secondary>OpenStack-specific resources</secondary></indexterm><indexterm class=\"singular\"><primary>resources</primary><secondary>generic vs. OpenStack-specific</secondary></indexterm><indexterm class=\"singular\"><primary>logging/monitoring</primary><secondary>OpenStack-specific resources</secondary></indexterm> The first is through the <code>nova</code> command:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:715(para)
msgid "This command displays a list of how many instances a tenant has running and some light usage statistics about the combined instances. This command is useful for a quick overview of your cloud, but it doesn't really get into a lot of details."
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:720(para)
msgid "Next, the <code>nova</code> database contains three tables that store usage information."
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:723(para)
msgid "The <code>nova.quotas</code> and <code>nova.quota_usages</code> tables store quota information. If a tenant's quota is different from the default quota settings, its quota is stored in the <phrase role=\"keep-together\"><code>nova.quotas</code></phrase> table. For example:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:744(para)
msgid "The <code>nova.quota_usages</code> table keeps track of how many resources the tenant currently has in use:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:760(para)
msgid "By comparing a tenant's hard limit with their current resource usage, you can see their usage percentage. For example, if this tenant is using 1 floating IP out of 10, then they are using 10 percent of their floating IP quota. Rather than doing the calculation manually, you can use SQL or the scripting language of your choice and create a formatted report:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:789(para)
msgid "The preceding information was generated by using a custom script that can be found on <link href=\"https://github.com/cybera/novac/blob/dev/libexec/novac-quota-report\">GitHub</link>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:794(para)
msgid "This script is specific to a certain OpenStack installation and must be modified to fit your environment. However, the logic should easily be transferable."
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:801(title)
msgid "Intelligent Alerting"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:803(para)
msgid "Intelligent alerting can be thought of as a form of continuous integration for operations. For example, you can easily check to see whether the Image service is up and running by ensuring that the <code>glance-api</code> and <code>glance-registry</code> processes are running or by seeing whether <code>glace-api</code> is responding on port 9292.<indexterm class=\"singular\"><primary>monitoring</primary><secondary>intelligent alerting</secondary></indexterm><indexterm class=\"singular\"><primary>alerts</primary><secondary>intelligent</secondary><seealso>logging/monitoring</seealso></indexterm><indexterm class=\"singular\"><primary>intelligent alerting</primary></indexterm><indexterm class=\"singular\"><primary>logging/monitoring</primary><secondary>intelligent alerting</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:826(para)
msgid "But how can you tell whether images are being successfully uploaded to the Image service? Maybe the disk that Image service is storing the images on is full or the S3 backend is down. You could naturally check this by doing a quick image upload:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:846(para)
msgid "By taking this script and rolling it into an alert for your monitoring system (such as Nagios), you now have an automated way of ensuring that image uploads to the Image Catalog are working."
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:851(para)
msgid "You must remove the image after each test. Even better, test whether you can successfully delete an image from the Image Service."
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:856(para)
msgid "Intelligent alerting takes considerably more time to plan and implement than the other alerts described in this chapter. A good outline to implement intelligent alerting is:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:862(para)
msgid "Review common actions in your cloud."
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:866(para)
msgid "Create ways to automatically test these actions."
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:870(para)
msgid "Roll these tests into an alerting system."
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:874(para)
msgid "Some other examples for Intelligent Alerting include:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:878(para)
msgid "Can instances launch and be destroyed?"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:882(para)
msgid "Can users be created?"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:886(para)
msgid "Can objects be stored and deleted?"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:890(para)
msgid "Can volumes be created and destroyed?"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:896(title)
msgid "Trending"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:898(para)
msgid "Trending can give you great insight into how your cloud is performing day to day. You can learn, for example, if a busy day was simply a rare occurrence or if you should start adding new compute nodes.<indexterm class=\"singular\"><primary>monitoring</primary><secondary>trending</secondary><seealso>logging/monitoring</seealso></indexterm><indexterm class=\"singular\"><primary>trending</primary><secondary>monitoring cloud performance with</secondary></indexterm><indexterm class=\"singular\"><primary>logging/monitoring</primary><secondary>trending</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:917(para)
msgid "Trending takes a slightly different approach than alerting. While alerting is interested in a binary result (whether a check succeeds or fails), trending records the current state of something at a certain point in time. Once enough points in time have been recorded, you can see how the value has changed over time.<indexterm class=\"singular\"><primary>trending</primary><secondary>vs. alerts</secondary></indexterm><indexterm class=\"singular\"><primary>binary</primary><secondary>binary results in trending</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:931(para)
msgid "All of the alert types mentioned earlier can also be used for trend reporting. Some other trend examples include:<indexterm class=\"singular\"><primary>trending</primary><secondary>report examples</secondary></indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:941(para)
msgid "The number of instances on each compute node"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:945(para)
msgid "The types of flavors in use"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:949(para)
msgid "The number of volumes in use"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:953(para)
msgid "The number of Object Storage requests each hour"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:957(para)
msgid "The number of <literal>nova-api</literal> requests each hour"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:962(para)
msgid "The I/O statistics of your storage services"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:966(para)
msgid "As an example, recording <code>nova-api</code> usage can allow you to track the need to scale your cloud controller. By keeping an eye on <code>nova-api</code> requests, you can determine whether you need to spawn more <literal>nova-api</literal> processes or go as far as introducing an entirely new server to run <code>nova-api</code>. To get an approximate count of the requests, look for standard INFO messages in <code>/var/log/nova/nova-api.log</code>:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:976(para)
msgid "You can obtain further statistics by looking for the number of successful requests:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:981(para)
msgid "By running this command periodically and keeping a record of the result, you can create a trending report over time that shows whether your <code>nova-api</code> usage is increasing, decreasing, or keeping steady."
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:986(para)
msgid "A tool such as collectd can be used to store this information. While collectd is out of the scope of this book, a good starting point would be to use collectd to store the result as a COUNTER data type. More information can be found in <link href=\"https://collectd.org/wiki/index.php/Data_source\">collectd's documentation</link>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:998(para)
msgid "For stable operations, you want to detect failure promptly and determine causes efficiently. With a distributed system, it's even more important to track the right items to meet a service-level target. Learning where these logs are located in the file system or API gives you an advantage. This chapter also showed how to read, interpret, and manipulate information from OpenStack services so that you can monitor effectively."
msgstr ""
#. Put one translator per line, in the form of NAME <EMAIL>, YEAR1, YEAR2
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:0(None)
msgid "translator-credits"
msgstr ""