Upgrades With the exception of Object Storage, upgrading from one version of OpenStack to another can take a great deal of effort. Until the situation improves, this chapter provides guidance on the operational aspects you should consider when performing an upgrade, based on detailed steps for a basic architecture.
Pre-upgrade Testing Environment Probably the most important step of all is the pre-upgrade testing. If you are upgrading immediately after release of a new version, undiscovered bugs might hinder your progress. Some deployers prefer to wait until the first point release is announced. However, if you have a significant deployment, you might follow the development and testing of the release, thereby ensuring that bugs for your use cases are fixed. Each OpenStack cloud is different, so even with an architecture nearly identical to the one in this guide, you must still test upgrades between versions in your environment. For this, you need an approximate clone of your environment. It does not need to be the same size or use identical hardware as the production environment; few of us have that luxury. It is important to consider the hardware and scale of the cloud you are upgrading, but here are some tips to avoid that cost:

Use your own cloud. The simplest place to start testing the next version of OpenStack is by setting up a new environment inside your own cloud. This may seem odd, especially the double virtualization used in running compute nodes, but it is a sure way to very quickly test your configuration.

Use a public cloud. Your own cloud is unlikely to have enough spare capacity to scale-test the entire deployment, so consider using a public cloud to test the scalability limits of your cloud controller configuration. Most public clouds bill by the hour, which means it can be inexpensive to perform even a test with many nodes.

Make another storage endpoint on the same system. If you use an external storage plug-in or shared file system with your cloud, you can often test whether it works by creating a second share or endpoint. This enables you to test the system before entrusting the new version with your storage.

Watch the network. Even at a smaller testing scale, you can detect inter-component communication going horribly wrong by watching the network traffic and looking for an excessive number of packets (see the tcpdump sketch after this section).

To actually set up the test environment, there are several methods. Some prefer to do a full manual install using the OpenStack Installation Guides, and then see what the final configuration files look like and which packages were installed. Others prefer to create a clone of their automated configuration infrastructure with changed package repository URLs, and then alter the configuration until it starts working. Either approach is valid, and which to use depends on experience. An upgrade pre-testing system is excellent for getting the configuration to work; however, note that the historical use of the system and differences in user interaction can also affect the success of upgrades. We have seen cases where database migrations encountered a bug (later fixed!) because of slight table differences between fresh Grizzly installs and systems that migrated from Folsom to Grizzly. Because artificial scale testing can only go so far, once upgraded you also need to pay careful attention to the performance of your cloud.
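As a minimal sketch of watching the network during a test upgrade, you can capture message-queue traffic between components with tcpdump. The interface name (eth0) and port (5672, the default RabbitMQ/AMQP port) are assumptions; adjust both to match your environment and message broker.

# tcpdump -i eth0 -nn port 5672
# tcpdump -i eth0 -nn -c 10000 port 5672 -w amqp-sample.pcap

The first command watches the traffic live; the second saves a fixed-size sample for later analysis.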
Preparing for a Roll Back Like all major system upgrades, your upgrade could fail for one or more difficult-to-determine reasons. You should prepare for this situation by retaining the ability to roll back your environment to the previous release, including databases, configuration files, and packages. We provide an example process for rolling back your environment in the Rolling Back a Failed Upgrade section at the end of this chapter.
Upgrades The upgrade process generally follows these steps:

Perform some "cleaning" of the environment prior to starting the upgrade process to ensure a consistent state. For example, instances not fully purged from the system after deletion may cause indeterminate behavior.

Read the release notes and documentation.

Find incompatibilities between your versions.

Develop an upgrade procedure and assess it thoroughly using a test environment similar to your production environment.

Run the upgrade procedure on the production environment.

You can perform an upgrade with operational instances, but this strategy can be dangerous. You might consider using live migration to temporarily relocate instances to other compute nodes while performing upgrades (see the sketch after this section). However, you must ensure database consistency throughout the process; otherwise your environment may become unstable. Also, don't forget to provide sufficient notice to your users, including giving them plenty of time to perform their own backups.

The following order for service upgrades seems the most successful:

Upgrade the OpenStack Identity Service (keystone).
Upgrade the OpenStack Image Service (glance).
Upgrade OpenStack Compute (nova), including networking components.
Upgrade OpenStack Block Storage (cinder).
Upgrade the OpenStack dashboard.

The general upgrade process includes the following steps:

Create a backup of configuration files and databases.
Update the configuration files according to the release notes.
Upgrade the packages using your distribution's package manager.
Stop services, update database schemas, and restart services.
Verify proper operation of your environment.
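As a rough illustration of relocating instances before working on a compute node, you might use live migration, assuming your environment supports it (shared storage or block migration) and your admin credentials are loaded. INSTANCE_UUID and TARGET_HOST are placeholders for your own values.

# nova list --all-tenants
# nova live-migration INSTANCE_UUID TARGET_HOST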
How to Perform an Upgrade from Grizzly to Havana - Ubuntu For this section, we assume that you are starting with the architecture provided in the OpenStack Installation Guide and upgrading to the same architecture for Havana. All nodes should run Ubuntu 12.04 LTS. This section primarily addresses upgrading core OpenStack services such as the Identity Service (keystone), Image Service (glance), Compute (nova) including networking, Block Storage (cinder), and the dashboard.
Impact on Users The upgrade process will interrupt management of your environment including the dashboard. If you properly prepare for this upgrade, tenant instances will continue to operate normally.
Upgrade Considerations Always review the release notes before performing an upgrade to learn about newly available features that you may want to enable and deprecated features that you should disable.
Perform a Backup Save the configuration files on all nodes.

# for i in keystone glance nova cinder openstack-dashboard
> do mkdir $i-grizzly
> done
# for i in keystone glance nova cinder openstack-dashboard
> do cp -r /etc/$i/* $i-grizzly/
> done

You can modify this example script on each node to handle different services. Back up all databases on the controller.

# mysqldump -u root -p --opt --add-drop-database --all-databases > grizzly-db-backup.sql
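Before continuing, it can be worth confirming that the backups contain what you expect. A minimal check, assuming the commands above were run from your current working directory, lists the backup directories and the databases captured in the dump:

# ls -d *-grizzly
# grep "^CREATE DATABASE" grizzly-db-backup.sql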
Manage Repositories On all nodes, remove the repository for Grizzly packages and add the repository for Havana packages. # apt-add-repository -r cloud-archive:grizzly # apt-add-repository cloud-archive:havana Make sure any automatic updates are disabled.
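One way to confirm that automatic updates are disabled on Ubuntu, assuming the unattended-upgrades package is installed and the stock /etc/apt/apt.conf.d/20auto-upgrades file is in use, is to inspect the periodic setting and turn it off if needed:

# cat /etc/apt/apt.conf.d/20auto-upgrades
# sed -i 's/APT::Periodic::Unattended-Upgrade "1";/APT::Periodic::Unattended-Upgrade "0";/' /etc/apt/apt.conf.d/20auto-upgrades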
Update Configuration Files Update the Glance configuration on the controller node for compatibility with Havana. If not currently present and configured as follows, add or modify the following keys in /etc/glance/glance-api.conf and /etc/glance/glance-registry.conf.

[keystone_authtoken]
auth_uri = http://controller:5000
auth_host = controller
admin_tenant_name = service
admin_user = glance
admin_password = GLANCE_PASS

[paste_deploy]
flavor = keystone

If currently present, remove the following key from the [filter:authtoken] section in /etc/glance/glance-api-paste.ini and /etc/glance/glance-registry-paste.ini.

[filter:authtoken]
flavor = keystone

Update the Nova configuration on all nodes for compatibility with Havana (a scripted alternative using crudini is sketched at the end of this section). Add the new [database] section and associated key to /etc/nova/nova.conf.

[database]
connection = mysql://nova:NOVA_DBPASS@controller/nova

Remove the defunct configuration from the [DEFAULT] section of /etc/nova/nova.conf.

[DEFAULT]
sql_connection = mysql://nova:NOVA_DBPASS@controller/nova

If not already present and configured as follows, add or modify the following keys in /etc/nova/nova.conf.

[keystone_authtoken]
auth_uri = http://controller:5000/v2.0
auth_host = controller
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = NOVA_PASS

On all compute nodes, increase the DHCP lease time (measured in seconds) in /etc/nova/nova.conf to enable currently active instances to continue leasing their IP addresses during the upgrade process. Setting this value too high may cause more dynamic environments to run out of available IP addresses. Use an appropriate value for your environment.

[DEFAULT]
dhcp_lease_time = 86400

You must restart dnsmasq and the networking component of Compute to enable the new DHCP lease time.

# pkill -9 dnsmasq
# service nova-network restart

Update the Cinder configuration on the controller and storage nodes for compatibility with Havana. Add the new [database] section and associated key to /etc/cinder/cinder.conf.

[database]
connection = mysql://cinder:CINDER_DBPASS@controller/cinder

Remove the defunct configuration from the [DEFAULT] section of /etc/cinder/cinder.conf.

[DEFAULT]
sql_connection = mysql://cinder:CINDER_DBPASS@controller/cinder

If not currently present and configured as follows, add or modify the following key in /etc/cinder/cinder.conf.

[keystone_authtoken]
auth_uri = http://controller:5000

Update the dashboard configuration on the controller node for compatibility with Havana. The dashboard installation procedure and configuration file changed substantially between Grizzly and Havana. In particular, if you are running Django 1.5 or later, you must ensure that /etc/openstack-dashboard/local_settings contains a correctly configured ALLOWED_HOSTS setting, which lists the hostnames recognized by the dashboard. If users will access your dashboard using "http://dashboard.example.com", you would set:

ALLOWED_HOSTS=['dashboard.example.com']

If users will access your dashboard on the local system, you would set:

ALLOWED_HOSTS=['localhost']

If users will access your dashboard using an IP address in addition to a hostname, you would set:

ALLOWED_HOSTS=['dashboard.example.com', '192.168.122.200']
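If you prefer to script the nova.conf edits described above rather than make them by hand, a minimal sketch using the crudini utility follows (the Red Hat section later in this chapter uses the analogous openstack-config tool). This assumes crudini is installed; the passwords and lease time are placeholders for your own values.

# crudini --set /etc/nova/nova.conf database connection mysql://nova:NOVA_DBPASS@controller/nova
# crudini --del /etc/nova/nova.conf DEFAULT sql_connection
# crudini --set /etc/nova/nova.conf keystone_authtoken auth_uri http://controller:5000/v2.0
# crudini --set /etc/nova/nova.conf keystone_authtoken auth_host controller
# crudini --set /etc/nova/nova.conf keystone_authtoken auth_port 35357
# crudini --set /etc/nova/nova.conf keystone_authtoken auth_protocol http
# crudini --set /etc/nova/nova.conf keystone_authtoken admin_tenant_name service
# crudini --set /etc/nova/nova.conf keystone_authtoken admin_user nova
# crudini --set /etc/nova/nova.conf keystone_authtoken admin_password NOVA_PASS

On the compute nodes only, also raise the DHCP lease time for the duration of the upgrade:

# crudini --set /etc/nova/nova.conf DEFAULT dhcp_lease_time 86400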
Upgrade Packages on the Controller Node Upgrade packages on the controller node to Havana. Depending on your specific configuration, performing a dist-upgrade may restart services supplemental to your OpenStack environment. For example, if you use Open-iSCSI for Block Storage volumes and the upgrade includes a new open-iscsi package, the package manager will restart the Open-iSCSI services, which may cause disconnection of volumes for your users.

# apt-get update
# apt-get dist-upgrade

The package manager will ask you about updating various configuration files. We recommend denying these changes. The package manager will append .dpkg-dist to the end of newer versions of existing configuration files. You should consider adopting conventions associated with the newer configuration files and merging them with your existing configuration files after completing the upgrade process.
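To locate the .dpkg-dist files saved by the package manager and compare them with your existing configuration, you can use find and diff; the nova.conf path here is only an example:

# find /etc -name "*.dpkg-dist"
# diff /etc/nova/nova.conf /etc/nova/nova.conf.dpkg-dist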
Stop Services, Update Database Schemas, and Restart Services on the Controller Node Stop each service, run the database synchronization command if necessary to update the associated database schema, and restart each service to apply the new configuration. Some services require additional commands.

# service keystone stop
# keystone-manage token_flush
# keystone-manage db_sync
# service keystone start

# service glance-api stop
# service glance-registry stop
# glance-manage db_sync
# service glance-api start
# service glance-registry start

# service nova-api restart
# service nova-scheduler restart
# service nova-conductor restart
# service nova-cert restart
# service nova-consoleauth restart
# service nova-novncproxy restart

# service cinder-api stop
# service cinder-scheduler stop
# cinder-manage db sync
# service cinder-api start
# service cinder-scheduler start

The Compute services only need restarting because the package manager handles database synchronization. The controller node update is complete. Now you can upgrade the compute nodes.
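Before doing so, a quick smoke test of the upgraded controller services can catch obvious problems early. The following commands assume your admin credentials are loaded in the environment (for example, by sourcing an openrc file):

# keystone token-get
# glance image-list
# nova list
# cinder list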
Upgrade Packages and Restart Services on the Compute Nodes Upgrade packages on the compute nodes to Havana. Make sure you have removed the repository for Grizzly packages and added the repository for Havana packages. # apt-get update # apt-get dist-upgrade Due to a packaging issue, this command may fail with the following error: Errors were encountered while processing: /var/cache/apt/archives/qemu-utils_1.5.0+dfsg-3ubuntu5~cloud0_amd64.deb /var/cache/apt/archives/qemu-system-common_1.5.0+dfsg-3ubuntu5~cloud0_amd64.deb E: Sub-process /usr/bin/dpkg returned an error code (1) You can fix this issue using the following command. # apt-get -f install The packaging system will ask about updating the /etc/nova/api-paste.ini file. Like the controller upgrade, we recommend denying these changes and reviewing the .dpkg-dist file after completing the upgrade process. Restart Compute services. # service nova-compute restart # service nova-network restart # service nova-api-metadata restart
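From the controller node, you can confirm that the upgraded compute nodes have re-registered their services; a state of ":-)" indicates a healthy service, while "XXX" indicates a problem:

# nova-manage service list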
Upgrade Packages and Restart Services on the Block Storage Nodes Upgrade packages on the storage nodes to Havana. Make sure you have removed the repository for Grizzly packages and added the repository for Havana packages. # apt-get update # apt-get dist-upgrade The packaging system will ask about updating the /etc/cinder/api-paste.ini file. Like the controller upgrade, we recommend denying these changes and reviewing the .dpkg-dist file after completing the upgrade process. Restart Block Storage services. # service cinder-volume restart
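As a simple check that Block Storage is healthy after the upgrade, you might create and then delete a small test volume. This assumes your admin credentials are loaded and that 1 GB of space is available on the storage node:

# cinder create --display-name upgrade-test 1
# cinder list
# cinder delete upgrade-test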
How to Perform an Upgrade from Grizzly to Havana - Red Hat Enterprise Linux and Derivatives For this section, we assume that you are starting with the architecture provided in the OpenStack Installation Guide and upgrading to the same architecture for Havana. All nodes should run Red Hat Enterprise Linux 6.4 or compatible derivatives. Newer minor releases should also work. This section primarily addresses upgrading core OpenStack services such as the Identity Service (keystone), Image Service (glance), Compute (nova) including networking, Block Storage (cinder), and the dashboard.
Impact on Users The upgrade process will interrupt management of your environment including the dashboard. If you properly prepare for this upgrade, tenant instances will continue to operate normally.
Upgrade Considerations Always review the release notes before performing an upgrade to learn about newly available features that you may want to enable and deprecated features that you should disable.
Perform a Backup Save the configuration files on all nodes.

# for i in keystone glance nova cinder openstack-dashboard
> do mkdir $i-grizzly
> done
# for i in keystone glance nova cinder openstack-dashboard
> do cp -r /etc/$i/* $i-grizzly/
> done

You can modify this example script on each node to handle different services. Back up all databases on the controller.

# mysqldump -u root -p --opt --add-drop-database --all-databases > grizzly-db-backup.sql
Manage Repositories On all nodes, remove the repository for Grizzly packages and add the repository for Havana packages. # yum erase rdo-release-grizzly # yum install http://repos.fedorapeople.org/repos/openstack/openstack-havana/rdo-release-havana-7.noarch.rpm Make sure any automatic updates are disabled. Consider checking for newer versions of the Havana repository.
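On Red Hat Enterprise Linux and derivatives, automatic updates are commonly handled by the yum-cron service. If it is installed in your environment, one way to make sure it does not pull in packages mid-upgrade is:

# service yum-cron stop
# chkconfig yum-cron off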
Update Configuration Files Update the Glance configuration on the controller node for compatibility with Havana. If not currently present and configured as follows, add or modify the following keys in /etc/glance/glance-api.conf and /etc/glance/glance-registry.conf. # openstack-config --set /etc/glance/glance-api.conf keystone_authtoken \ auth_uri http://controller:5000 # openstack-config --set /etc/glance/glance-api.conf keystone_authtoken \ auth_host controller # openstack-config --set /etc/glance/glance-api.conf keystone_authtoken \ admin_tenant_name service # openstack-config --set /etc/glance/glance-api.conf keystone_authtoken \ admin_user glance # openstack-config --set /etc/glance/glance-api.conf keystone_authtoken \ admin_password GLANCE_PASS # openstack-config --set /etc/glance/glance-api.conf paste_deploy \ flavor keystone # openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken \ auth_uri http://controller:5000 # openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken \ auth_host controller # openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken \ admin_tenant_name service # openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken \ admin_user glance # openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken \ admin_password GLANCE_PASS # openstack-config --set /etc/glance/glance-registry.conf paste_deploy \ flavor keystone If currently present, remove the following key from the [filter:authtoken] section in /etc/glance/glance-api-paste.ini and /etc/glance/glance-registry-paste.ini. [filter:authtoken] flavor = keystone Update the Nova configuration on all nodes for compatibility with Havana. Add the new [database] section and associated key to /etc/nova/nova.conf. # openstack-config --set /etc/nova/nova.conf database \ connection mysql://nova:NOVA_DBPASS@controller/nova Remove defunct database configuration from /etc/nova/nova.conf. # openstack-config --del /etc/nova/nova.conf DEFAULT sql_connection If not already present and configured as follows, add or modify the following keys in /etc/nova/nova.conf. # openstack-config --set /etc/nova/nova.conf keystone_authtoken \ auth_uri http://controller:5000/v2.0 # openstack-config --set /etc/nova/nova.conf keystone_authtoken \ auth_host controller # openstack-config --set /etc/nova/nova.conf keystone_authtoken \ admin_tenant_name service # openstack-config --set /etc/nova/nova.conf keystone_authtoken \ admin_user nova # openstack-config --set /etc/nova/nova.conf keystone_authtoken \ admin_password NOVA_PASS On all compute nodes, increase the DHCP lease time (measured in seconds) in /etc/nova/nova.conf to enable currently active instances to continue leasing their IP addresses during the upgrade process. Setting this value too high may cause more dynamic environments to run out of available IP addresses. Use an appropriate value for your environment. # openstack-config --set /etc/nova/nova.conf DEFAULT \ dhcp_lease_time 86400 You must restart Dnsmasq and the Nova networking service to enable the new DHCP lease time. # pkill -9 dnsmasq # service openstack-nova-network restart Update the Cinder configuration on the controller and storage nodes for compatibility with Havana. Add the new [database] section and associated key to /etc/cinder/cinder.conf. # openstack-config --set /etc/cinder/cinder.conf database \ connection mysql://cinder:CINDER_DBPASS@controller/cinder Remove defunct database configuration from /etc/cinder/cinder.conf. 
# openstack-config --del /etc/cinder/cinder.conf DEFAULT sql_connection If not currently present and configured as follows, add or modify the following key in /etc/cinder/cinder.conf. # openstack-config --set /etc/cinder/cinder.conf keystone_authtoken \ auth_uri http://controller:5000 Update the dashboard configuration on the controller node for compatibility with Havana. The dashboard installation procedure and configuration file changed substantially between Grizzly and Havana. Particularly, if you are running Django 1.5 or later, you must ensure that /etc/openstack-dashboard/local_settings contains a correctly configured ALLOWED_HOSTS key which contains a list of hostnames recognized by the dashboard. If users will access your dashboard using "http://dashboard.example.com", you would set: ALLOWED_HOSTS=['dashboard.example.com'] If users will access your dashboard on the local system, you would set: ALLOWED_HOSTS=['localhost'] If users will access your dashboard using an IP address in addition to a hostname, you would set: ALLOWED_HOSTS=['dashboard.example.com', '192.168.122.200']
Upgrade Packages on the Controller Node Upgrade packages on the controller node to Havana. Some services may terminate with an error during the package upgrade process. If this may cause a problem with your environment, consider stopping all services before upgrading them to Havana. # yum upgrade The package manager will append .rpmnew to the end of newer versions of existing configuration files. You should consider adopting conventions associated with the newer configuration files and merging them with your existing configuration files after completing the upgrade process. Install the OpenStack SELinux package on the controller node. # yum install openstack-selinux
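To locate the .rpmnew files mentioned above and compare them with your existing configuration, you can use find and diff; the nova.conf path here is only an example:

# find /etc -name "*.rpmnew"
# diff /etc/nova/nova.conf /etc/nova/nova.conf.rpmnew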
Stop Services, Update Database Schemas, and Restart Services on the Controller Node Stop each service, run the database synchronization command if necessary to update the associated database schema, and restart each service to apply the new configuration. Some services require additional commands.

# service openstack-keystone stop
# keystone-manage token_flush
# keystone-manage db_sync
# service openstack-keystone start

# service openstack-glance-api stop
# service openstack-glance-registry stop
# glance-manage db_sync
# service openstack-glance-api start
# service openstack-glance-registry start

# service openstack-nova-api stop
# service openstack-nova-scheduler stop
# service openstack-nova-conductor stop
# service openstack-nova-cert stop
# service openstack-nova-consoleauth stop
# service openstack-nova-novncproxy stop
# nova-manage db sync
# service openstack-nova-api start
# service openstack-nova-scheduler start
# service openstack-nova-conductor start
# service openstack-nova-cert start
# service openstack-nova-consoleauth start
# service openstack-nova-novncproxy start

# service openstack-cinder-api stop
# service openstack-cinder-scheduler stop
# cinder-manage db sync
# service openstack-cinder-api start
# service openstack-cinder-scheduler start

The controller node update is complete. Now you can upgrade the compute nodes.
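Before doing so, if the openstack-utils package is installed, the openstack-status command gives a quick summary of service states after the restarts; alternatively, re-run the same keystone, glance, nova, and cinder client checks described in the Ubuntu section.

# openstack-status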
Upgrade Packages and Restart Services on the Compute Nodes Upgrade packages on the compute nodes to Havana. Make sure you have removed the repository for Grizzly packages and added the repository for Havana packages. # yum upgrade Install the OpenStack SELinux package on the compute nodes. # yum install openstack-selinux Restart Compute services. # service openstack-nova-compute restart # service openstack-nova-network restart # service openstack-nova-metadata-api restart
Upgrade Packages and Restart Services on the Block Storage Nodes Upgrade packages on the storage nodes to Havana. Make sure you have removed the repository for Grizzly packages and added the repository for Havana packages. # yum upgrade Install the OpenStack SELinux package on the storage nodes. # yum install openstack-selinux Restart Block Storage services. # service openstack-cinder-volume restart
Cleaning Up and Final Configuration File Updates On all distributions, you need to perform some final tasks to complete the upgrade process. Decrease the DHCP lease time in /etc/nova/nova.conf on the compute nodes back to the original value for your environment (see the sketch after this section). Update all of the .ini files to match passwords and pipelines as required for Havana in your environment. After a migration, your users will see different results from nova image-list and glance image-list unless you match up policies for access to private images. To do so, edit /etc/glance/policy.json and /etc/nova/policy.json to contain "context_is_admin": "role:admin", which limits access to private images for projects. Thoroughly test the environment, and then let your users know that their cloud is running normally again.
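A minimal sketch of these clean-up steps on a compute node follows. Removing the dhcp_lease_time override reverts it to the compiled-in default; if your environment used a custom value before the upgrade, set that value instead. On Red Hat Enterprise Linux and derivatives, the networking service is named openstack-nova-network.

# sed -i '/^dhcp_lease_time/d' /etc/nova/nova.conf
# pkill -9 dnsmasq
# service nova-network restart
# grep context_is_admin /etc/nova/policy.json /etc/glance/policy.json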
Rolling Back a Failed Upgrade While we do not wish this fate upon anyone, upgrades involve complex operations and can fail. This section provides guidance for rolling back to a previous release of OpenStack. Although only tested on Ubuntu, other distributions follow a similar procedure. In this section, we only consider the most immediate case: You have taken down production management services in preparation for an upgrade, completed part of the upgrade process, discovered one or more problems not encountered during testing, and need to roll back your environment to the original "known good" state. We specifically assume that you did not make any state changes after attempting the upgrade process: no new instances, networks, storage volumes, and so on. Within this scope, you need to accomplish three main steps to successfully roll back your environment:

Roll back configuration files.
Roll back databases.
Roll back packages.

The upgrade instructions provided in earlier sections ensure that you have proper backups of your databases and configuration files. You should read through this section carefully and verify that you have the requisite backups to restore. Rolling back upgrades is a tricky process because distributions tend to put much more effort into testing upgrades than downgrades. Broken downgrades often take significantly more effort to troubleshoot and resolve than broken upgrades. Only you can weigh the risks of trying to push a failed upgrade forward versus rolling it back. Generally, we consider rolling back the very last option. The steps described below for Ubuntu have worked on at least one production environment, but they may not work for all environments.

Perform the Roll Back from Havana to Grizzly

Stop all OpenStack services.

Copy the contents of the <service>-grizzly configuration backup directories that you created during the upgrade process back to /etc/<service>.

Restore databases from the grizzly-db-backup.sql backup file that you created with mysqldump during the upgrade process.

# mysql -u root -p < grizzly-db-backup.sql

If you created this backup using the --add-drop-database flag as instructed, you can proceed to the next step. If you omitted this flag, MySQL will revert all of the tables that existed in Grizzly, but will not drop any tables created during the database migration for Havana. In this case, you need to manually determine which tables should not exist and drop them to prevent issues with your next upgrade attempt.

Downgrade the OpenStack packages. We consider downgrading packages by far the most complicated step; it is highly dependent on the distribution, as well as the overall administration of the system.

Determine the OpenStack packages installed on your system. This is done using dpkg --get-selections, filtering for OpenStack packages, filtering again to omit packages explicitly marked in the deinstall state, and saving the final output to a file.
For example, the following command covers a controller node with keystone, glance, nova, neutron, and cinder: # dpkg --get-selections | grep -e keystone -e glance -e nova -e neutron -e cinder \ | grep -v deinstall | tee openstack-selections cinder-api install cinder-common install cinder-scheduler install cinder-volume install glance install glance-api install glance-common install glance-registry install neutron-common install neutron-dhcp-agent install neutron-l3-agent install neutron-lbaas-agent install neutron-metadata-agent install neutron-plugin-openvswitch install neutron-plugin-openvswitch-agent install neutron-server install nova-api install nova-cert install nova-common install nova-conductor install nova-consoleauth install nova-novncproxy install nova-objectstore install nova-scheduler install python-cinder install python-cinderclient install python-glance install python-glanceclient install python-keystone install python-keystoneclient install python-neutron install python-neutronclient install python-nova install python-novaclient install Depending on the type of server, the contents and order of your package list may vary from this example. You can determine the package versions available for reversion by using apt-cache policy. If you removed the Grizzly repositories, you must first reinstall them and run apt-get update. # apt-cache policy nova-common nova-common: Installed: 1:2013.2-0ubuntu1~cloud0 Candidate: 1:2013.2-0ubuntu1~cloud0 Version table: *** 1:2013.2-0ubuntu1~cloud0 0 500 http://ubuntu-cloud.archive.canonical.com/ubuntu/ precise-updates/havana/main amd64 Packages 100 /var/lib/dpkg/status 1:2013.1.4-0ubuntu1~cloud0 0 500 http://ubuntu-cloud.archive.canonical.com/ubuntu/ precise-updates/grizzly/main amd64 Packages 2012.1.3+stable-20130423-e52e6912-0ubuntu1.2 0 500 http://us.archive.ubuntu.com/ubuntu/ precise-updates/main amd64 Packages 500 http://security.ubuntu.com/ubuntu/ precise-security/main amd64 Packages 2012.1-0ubuntu2 0 500 http://us.archive.ubuntu.com/ubuntu/ precise/main amd64 Packages This tells us the currently installed version of the package, newest candidate version, and all versions along with the repository that contains each version. Look for the appropriate Grizzly version, in this case 1:2013.1.4-0ubuntu1~cloud0. The process of manually picking through this list of packages is rather tedious and prone to errors. 
You should consider using the following script to help with this process: # for i in `cut -f 1 openstack-selections | sed 's/neutron/quantum/;'`; \ do echo -n $i ;apt-cache policy $i | grep -B 1 grizzly | grep -v Packages \ | awk '{print "="$1}';done | tr '\n' ' ' | tee openstack-grizzly-versions cinder-api=1:2013.1.4-0ubuntu1~cloud0 cinder-common=1:2013.1.4-0ubuntu1~cloud0 cinder-scheduler=1:2013.1.4-0ubuntu1~cloud0 cinder-volume=1:2013.1.4-0ubuntu1~cloud0 glance=1:2013.1.4-0ubuntu1~cloud0 glance-api=1:2013.1.4-0ubuntu1~cloud0 glance-common=1:2013.1.4-0ubuntu1~cloud0 glance-registry=1:2013.1.4-0ubuntu1~cloud0 quantum-common=1:2013.1.4-0ubuntu1~cloud0 quantum-dhcp-agent=1:2013.1.4-0ubuntu1~cloud0 quantum-l3-agent=1:2013.1.4-0ubuntu1~cloud0 quantum-lbaas-agent=1:2013.1.4-0ubuntu1~cloud0 quantum-metadata-agent=1:2013.1.4-0ubuntu1~cloud0 quantum-plugin-openvswitch=1:2013.1.4-0ubuntu1~cloud0 quantum-plugin-openvswitch-agent=1:2013.1.4-0ubuntu1~cloud0 quantum-server=1:2013.1.4-0ubuntu1~cloud0 nova-api=1:2013.1.4-0ubuntu1~cloud0 nova-cert=1:2013.1.4-0ubuntu1~cloud0 nova-common=1:2013.1.4-0ubuntu1~cloud0 nova-conductor=1:2013.1.4-0ubuntu1~cloud0 nova-consoleauth=1:2013.1.4-0ubuntu1~cloud0 nova-novncproxy=1:2013.1.4-0ubuntu1~cloud0 nova-objectstore=1:2013.1.4-0ubuntu1~cloud0 nova-scheduler=1:2013.1.4-0ubuntu1~cloud0 python-cinder=1:2013.1.4-0ubuntu1~cloud0 python-cinderclient=1:1.0.3-0ubuntu1~cloud0 python-glance=1:2013.1.4-0ubuntu1~cloud0 python-glanceclient=1:0.9.0-0ubuntu1.2~cloud0 python-quantum=1:2013.1.4-0ubuntu1~cloud0 python-quantumclient=1:2.2.0-0ubuntu1~cloud0 python-nova=1:2013.1.4-0ubuntu1~cloud0 python-novaclient=1:2.13.0-0ubuntu1~cloud0 If you decide to continue this step manually, don't forget to change neutron to quantum where applicable. Use apt-get install to install specific versions of each package by specifying <package-name>=<version>. The script in the previous step conveniently created a list of package=version pairs for you. # apt-get install `cat openstack-grizzly-versions` This completes the roll back procedure. You should remove the Havana repository and run apt-get update to prevent accidental upgrades until you solve whatever issue caused you to roll back your environment.
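To guard against an accidental re-upgrade while you investigate the failure, one way to remove the Havana repository and refresh the package index on Ubuntu is:

# apt-add-repository -r cloud-archive:havana
# apt-get update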