remove project-specific content from admin guide
Change-Id: Ibd852c3b7909e09af7a7c733d471929c1eb93863
Depends-On: Ia750cb049c0f53a234ea70ce1f2bbbb7a2aa9454
Signed-off-by: Doug Hellmann <doug@doughellmann.com>
@ -1,93 +0,0 @@
.. _baremetal_multitenancy:

========================================
Use multitenancy with Bare Metal service
========================================

Multitenancy allows creating a dedicated project network that extends the
current Bare Metal (ironic) service capability of providing ``flat``
networks. Multitenancy works in conjunction with the Networking (neutron)
service to allow provisioning of a bare metal server onto the project network.
Therefore, multiple projects can get isolated instances after deployment.

The Bare Metal service provides the ``local_link_connection`` information to
the Networking service ML2 driver. The ML2 driver uses that information to
plug the specified port into the project network.

.. list-table:: ``local_link_connection`` fields
   :header-rows: 1

   * - Field
     - Description
   * - ``switch_id``
     - Required. Identifies a switch and can be an LLDP-based MAC address or
       an OpenFlow-based ``datapath_id``.
   * - ``port_id``
     - Required. Port ID on the switch, for example, Gig0/1.
   * - ``switch_info``
     - Optional. Used to distinguish different switch models or other
       vendor-specific identifiers.
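As an illustration, a populated ``local_link_connection`` might look like the
following (the values are hypothetical):

.. code-block:: json

   {
       "switch_id": "0a:1b:2c:3d:4e:5f",
       "port_id": "Gig0/1",
       "switch_info": "switch1"
   }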
Configure Networking service ML2 driver
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To enable the Networking service ML2 driver, edit the
``/etc/neutron/plugins/ml2/ml2_conf.ini`` file:

#. Add the name of your ML2 driver.

#. Add the vendor ML2 plugin configuration options.

.. code-block:: ini

   [ml2]
   # ...
   mechanism_drivers = my_mechanism_driver

   [my_vendor]
   param_1 = ...
   param_2 = ...
   param_3 = ...

For more details, see
`Networking service mechanism drivers <https://docs.openstack.org/ocata/networking-guide/config-ml2.html#mechanism-drivers>`__.

Configure Bare Metal service
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

After you configure the Networking service ML2 driver, configure the Bare
Metal service:

#. Edit ``/etc/ironic/ironic.conf`` for the ``ironic-conductor`` service.
   Set the ``network_interface`` node field to a valid network driver that is
   used to switch, clean, and provision networks.

   .. code-block:: ini

      [DEFAULT]
      # ...
      enabled_network_interfaces=flat,neutron

      [neutron]
      # ...
      cleaning_network_uuid=$UUID
      provisioning_network_uuid=$UUID

   .. warning:: The ``cleaning_network_uuid`` and ``provisioning_network_uuid``
      parameters are required for the ``neutron`` network interface. If they
      are not set, ``ironic-conductor`` fails to start.

#. Set the node to use the ``neutron`` network interface so that it uses the
   Networking service ML2 driver:

   .. code-block:: console

      $ ironic node-create -n $NAME --network-interface neutron --driver agent_ipmitool

#. Create a port with the appropriate ``local_link_connection`` information.
   Set the ``pxe_enabled`` port attribute to ``True`` to create network ports
   for the ``pxe_enabled`` ports only:

   .. code-block:: console

      $ ironic --ironic-api-version latest port-create -a $HW_MAC_ADDRESS \
        -n $NODE_UUID -l switch_id=$SWITCH_MAC_ADDRESS \
        -l switch_info=$SWITCH_HOSTNAME -l port_id=$SWITCH_PORT --pxe-enabled true
@ -1,161 +0,0 @@
.. _baremetal:

==========
Bare Metal
==========

The Bare Metal service provides physical hardware management features.

Introduction
~~~~~~~~~~~~

The Bare Metal service provides physical hardware as opposed to
virtual machines. It also provides several reference drivers, which
leverage common technologies like PXE and IPMI, to cover a wide range
of hardware. The pluggable driver architecture also allows
vendor-specific drivers to be added for improved performance or
functionality not provided by reference drivers. The Bare Metal
service makes physical servers as easy to provision as virtual
machines in a cloud, which in turn will open up new avenues for
enterprises and service providers.

System architecture
~~~~~~~~~~~~~~~~~~~

The Bare Metal service is composed of the following components:

#. An admin-only RESTful API service, by which privileged users, such
   as operators and other services within the cloud control
   plane, may interact with the managed bare-metal servers.

#. A conductor service, which conducts all activity related to
   bare-metal deployments. Functionality is exposed via the API
   service. The Bare Metal service conductor and API service
   communicate via RPC.

#. Various drivers that support heterogeneous hardware, which enable
   features specific to unique hardware platforms and leverage
   divergent capabilities via a common API.

#. A message queue, which is a central hub for passing messages, such
   as RabbitMQ. It should use the same implementation as that of the
   Compute service.

#. A database for storing information about the resources. Among other
   things, this includes the state of the conductors, nodes (physical
   servers), and drivers.

When a user requests to boot an instance, the request is passed to the
Compute service via the Compute service API and scheduler. The Compute
service hands over this request to the Bare Metal service, where the
request passes from the Bare Metal service API to the conductor, which
invokes a driver to provision a physical server for the user.

Bare Metal deployment
~~~~~~~~~~~~~~~~~~~~~

#. PXE deploy process

#. Agent deploy process

.. TODO Add the detail about the process of Bare Metal deployment.

Use Bare Metal
~~~~~~~~~~~~~~

#. Install the Bare Metal service.

#. Set up the Bare Metal driver in the compute node's ``nova.conf`` file
   (see the sketch after this list).

#. Set up the TFTP folder and prepare the PXE boot loader file.

#. Prepare the bare metal flavor.

#. Register the nodes with the correct drivers.

#. Configure the driver information.

#. Register the ports information.

#. Use the :command:`openstack server create` command to
   kick off the bare metal provisioning.

#. Check the nodes' provision state and power state.

.. TODO Add the detail command line later on.
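A minimal sketch of the ``nova.conf`` change mentioned in step 2, assuming the
ironic virt driver (the exact option values may differ across releases, so
treat this as illustrative rather than authoritative):

.. code-block:: ini

   [DEFAULT]
   # Tell the Compute service to manage bare metal nodes via ironic
   compute_driver = ironic.IronicDriver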
Use multitenancy with Bare Metal service
----------------------------------------

.. toctree::

   baremetal-multitenancy.rst

.. TODO Add guides for other features.

Troubleshooting
~~~~~~~~~~~~~~~

No valid host found error
-------------------------

Problem
-------

Sometimes ``/var/log/nova/nova-conductor.log`` contains the following error:

.. code-block:: console

   NoValidHost: No valid host was found. There are not enough hosts available.

The message ``No valid host was found`` means that the Compute service
scheduler could not find a bare metal node suitable for booting the new
instance.

This usually indicates a mismatch between the resources that the Compute
service expects to find and the resources that the Bare Metal service
advertised to the Compute service.

Solution
--------

If you get this message, check the following:

#. Introspection should have succeeded for you before, or you should have
   entered the required bare-metal node properties manually.
   For each node listed by the :command:`ironic node-list` command, use:

   .. code-block:: console

      $ ironic node-show <IRONIC-NODE-UUID>

   and make sure that the ``properties`` JSON field has valid values for
   the keys ``cpus``, ``cpu_arch``, ``memory_mb`` and ``local_gb``
   (see the example after this list).

#. Make sure that the flavor in the Compute service that you are using does
   not exceed the bare-metal node properties above for the required number
   of nodes. Use:

   .. code-block:: console

      $ openstack flavor show FLAVOR

#. Make sure that enough nodes are in the ``available`` state according to
   the :command:`ironic node-list` command. Nodes in the ``manageable``
   state usually have failed introspection.

#. Make sure the nodes you are going to deploy to are not in maintenance
   mode. Use the :command:`ironic node-list` command to check. A node
   automatically going to maintenance mode usually means incorrect
   credentials for this node. Check them and then remove maintenance mode:

   .. code-block:: console

      $ ironic node-set-maintenance <IRONIC-NODE-UUID> off

#. It takes some time for node information to propagate from the Bare Metal
   service to the Compute service after introspection. Our tooling usually
   accounts for it, but if you did some steps manually there may be a period
   of time when nodes are not available to the Compute service yet. Check
   that the :command:`openstack hypervisor stats show` command correctly
   shows the total amount of resources in your system.
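As referenced in step 1 above, a valid ``properties`` field might look like
the following (the values are illustrative):

.. code-block:: json

   {
       "cpus": "4",
       "cpu_arch": "x86_64",
       "memory_mb": "8192",
       "local_gb": "100"
   }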
@ -1,34 +0,0 @@
=============================================
Increase Block Storage API service throughput
=============================================

By default, the Block Storage API service runs in one process. This
limits the number of API requests that the Block Storage service can
process at any given time. In a production environment, you should
increase the Block Storage API throughput by allowing the Block Storage
API service to run in as many processes as the machine capacity allows.

.. note::

   The Block Storage API service is named ``openstack-cinder-api`` on
   the following distributions: CentOS, Fedora, openSUSE, Red Hat
   Enterprise Linux, and SUSE Linux Enterprise. In Ubuntu and Debian
   distributions, the Block Storage API service is named ``cinder-api``.

To do so, use the Block Storage API service option ``osapi_volume_workers``.
This option allows you to specify the number of API service workers
(or OS processes) to launch for the Block Storage API service.

To configure this option, open the ``/etc/cinder/cinder.conf``
configuration file and set the ``osapi_volume_workers`` configuration
key to the number of CPU cores/threads on the machine.
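For example, on an 8-core machine the setting would look like this (a sketch;
choose the value to match your own hardware):

.. code-block:: ini

   [DEFAULT]
   # Run one Block Storage API worker per CPU core
   osapi_volume_workers = 8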
On distributions that include ``openstack-config``, you can configure
this by running the following command instead:

.. code-block:: console

   # openstack-config --set /etc/cinder/cinder.conf \
     DEFAULT osapi_volume_workers CORES

Replace ``CORES`` with the number of CPU cores/threads on the machine.
@ -1,266 +0,0 @@
===================================
Back up Block Storage service disks
===================================

While you can use LVM snapshots to create snapshots, you can also use
them to back up your volumes. By using LVM snapshots, you reduce the size
of the backup; only existing data is backed up instead of the entire
volume.

To back up a volume, you must create a snapshot of it. An LVM snapshot
is the exact copy of a logical volume, which contains data in a frozen
state. This prevents data corruption because data cannot be manipulated
during the volume creation process. Remember that the volumes created
through an :command:`openstack volume create` command exist in an LVM
logical volume.

You must also make sure that the operating system is not using the
volume and that all data has been flushed on the guest file systems.
This usually means that those file systems have to be unmounted during
the snapshot creation. They can be mounted again as soon as the logical
volume snapshot has been created.

Before you create the snapshot you must have enough space to save it.
As a precaution, you should have at least twice as much space as the
potential snapshot size. If insufficient space is available, the snapshot
might become corrupted.

For this example, assume that a 100 GB volume named ``volume-00000001``
was created for an instance while only 4 GB are used. This example uses
these commands to back up only those 4 GB:

* :command:`lvm2` command. Directly manipulates the volumes.

* :command:`kpartx` command. Discovers the partition table created inside the
  instance.

* :command:`tar` command. Creates a minimum-sized backup.

* :command:`sha1sum` command. Calculates the backup checksum to check its
  consistency.

You can apply this process to volumes of any size.

**To back up Block Storage service disks**

#. Create a snapshot of a used volume

   * Use this command to list all volumes:

     .. code-block:: console

        # lvdisplay

   * Create the snapshot; you can do this while the volume is attached
     to an instance:

     .. code-block:: console

        # lvcreate --size 10G --snapshot --name volume-00000001-snapshot \
          /dev/cinder-volumes/volume-00000001

     Use the ``--snapshot`` configuration option to tell LVM that you want a
     snapshot of an already existing volume. The command includes the size
     of the space reserved for the snapshot volume, the name of the snapshot,
     and the path of an already existing volume. Generally, this path
     is ``/dev/cinder-volumes/VOLUME_NAME``.

     The size does not have to be the same as the volume of the snapshot.
     The ``--size`` parameter defines the space that LVM reserves
     for the snapshot volume. As a precaution, the size should be the same
     as that of the original volume, even if the whole space is not
     currently used by the snapshot.

   * Run the :command:`lvdisplay` command again to verify the snapshot:

     .. code-block:: console

        --- Logical volume ---
        LV Name                /dev/cinder-volumes/volume-00000001
        VG Name                cinder-volumes
        LV UUID                gI8hta-p21U-IW2q-hRN1-nTzN-UC2G-dKbdKr
        LV Write Access        read/write
        LV snapshot status     source of
                               /dev/cinder-volumes/volume-00000001-snapshot [active]
        LV Status              available
        # open                 1
        LV Size                15,00 GiB
        Current LE             3840
        Segments               1
        Allocation             inherit
        Read ahead sectors     auto
        - currently set to     256
        Block device           251:13

        --- Logical volume ---
        LV Name                /dev/cinder-volumes/volume-00000001-snapshot
        VG Name                cinder-volumes
        LV UUID                HlW3Ep-g5I8-KGQb-IRvi-IRYU-lIKe-wE9zYr
        LV Write Access        read/write
        LV snapshot status     active destination for /dev/cinder-volumes/volume-00000001
        LV Status              available
        # open                 0
        LV Size                15,00 GiB
        Current LE             3840
        COW-table size         10,00 GiB
        COW-table LE           2560
        Allocated to snapshot  0,00%
        Snapshot chunk size    4,00 KiB
        Segments               1
        Allocation             inherit
        Read ahead sectors     auto
        - currently set to     256
        Block device           251:14

#. Partition table discovery

   * To exploit the snapshot with the :command:`tar` command, mount
     your partition on the Block Storage service server.

     The :command:`kpartx` utility discovers and maps table partitions.
     You can use it to view partitions that are created inside the
     instance. Without using the partitions created inside instances,
     you cannot see their content and create efficient backups.

     .. code-block:: console

        # kpartx -av /dev/cinder-volumes/volume-00000001-snapshot

     .. note::

        On a Debian-based distribution, you can use the
        :command:`apt-get install kpartx` command to install
        :command:`kpartx`.

     If the tools successfully find and map the partition table,
     no errors are returned.

   * To check the partition table map, run this command:

     .. code-block:: console

        $ ls /dev/mapper/cinder*

     You can see the ``cinder--volumes-volume--00000001--snapshot1``
     partition.

     If you created more than one partition on that volume, you see
     several partitions; for example:
     ``cinder--volumes-volume--00000001--snapshot2``,
     ``cinder--volumes-volume--00000001--snapshot3``, and so on.

   * Mount your partition:

     .. code-block:: console

        # mount /dev/mapper/cinder--volumes-volume--00000001--snapshot1 /mnt

     If the partition mounts successfully, no errors are returned.

     You can directly access the data inside the instance. If a message
     prompts you for a partition or you cannot mount it, determine whether
     enough space was allocated for the snapshot or whether the
     :command:`kpartx` command failed to discover the partition table.

     Allocate more space to the snapshot and try the process again.

#. Use the :command:`tar` command to create archives

   Create a backup of the volume:

   .. code-block:: console

      $ tar --exclude="lost+found" --exclude="some/data/to/exclude" -czf \
        /backup/destination/volume-00000001.tar.gz -C /mnt/ .

   This command creates a ``tar.gz`` file that contains the data,
   *and data only*. This ensures that you do not waste space by backing
   up empty sectors.

#. Checksum calculation

   You should always have the checksum for your backup files. When you
   transfer the same file over the network, you can run a checksum
   calculation to ensure that your file was not corrupted during its
   transfer. The checksum is a unique ID for a file. If the checksums are
   different, the file is corrupted.

   Run this command to run a checksum for your file and save the result
   to a file (a verification example follows this procedure):

   .. code-block:: console

      $ sha1sum volume-00000001.tar.gz > volume-00000001.checksum

   .. note::

      Use the :command:`sha1sum` command carefully because the time it
      takes to complete the calculation is directly proportional to the
      size of the file.

      Depending on your CPU, the process might take a long time for
      files larger than around 4 to 6 GB.

#. Clean up after the backup

   Now that you have an efficient and consistent backup, use these commands
   to clean up the file system:

   * Unmount the volume:

     .. code-block:: console

        # umount /mnt

   * Delete the partition table:

     .. code-block:: console

        # kpartx -dv /dev/cinder-volumes/volume-00000001-snapshot

   * Remove the snapshot:

     .. code-block:: console

        # lvremove -f /dev/cinder-volumes/volume-00000001-snapshot

   Repeat these steps for all your volumes.

#. Automate your backups

   Because more and more volumes might be allocated to your Block Storage
   service, you might want to automate your backups.
   The `SCR_5005_V01_NUAC-OPENSTACK-EBS-volumes-backup.sh`_ script assists
   you with this task. The script performs the operations from the previous
   example, but also provides a mail report and runs the backup based on
   the ``backups_retention_days`` setting.

   Launch this script from the server that runs the Block Storage service.

   This example shows a mail report:

   .. code-block:: console

      Backup Start Time - 07/10 at 01:00:01
      Current retention - 7 days

      The backup volume is mounted. Proceed...
      Removing old backups... : /BACKUPS/EBS-VOL/volume-00000019/volume-00000019_28_09_2011.tar.gz
      /BACKUPS/EBS-VOL/volume-00000019 - 0 h 1 m and 21 seconds. Size - 3,5G

      The backup volume is mounted. Proceed...
      Removing old backups... : /BACKUPS/EBS-VOL/volume-0000001a/volume-0000001a_28_09_2011.tar.gz
      /BACKUPS/EBS-VOL/volume-0000001a - 0 h 4 m and 15 seconds. Size - 6,9G
      ---------------------------------------
      Total backups size - 267G - Used space : 35%
      Total execution time - 1 h 75 m and 35 seconds

   The script also enables you to SSH to your instances and run a
   :command:`mysqldump` command in them. To make this work, enable
   the connection to the Compute project keys. If you do not want to
   run the :command:`mysqldump` command, you can add
   ``enable_mysql_dump=0`` to the script to turn off this functionality.
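As mentioned in the checksum step above, you can later verify the archive
against the stored checksum with the ``-c`` flag of :command:`sha1sum` (a
sketch, assuming both files are in the current directory):

.. code-block:: console

   $ sha1sum -c volume-00000001.checksum
   volume-00000001.tar.gz: OK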
.. Links
.. _`SCR_5005_V01_NUAC-OPENSTACK-EBS-volumes-backup.sh`: https://github.com/Razique/BashStuff/blob/master/SYSTEMS/OpenStack/SCR_5005_V01_NUAC-OPENSTACK-EBS-volumes-backup.sh
@ -1,10 +0,0 @@
================
Boot from volume
================

In some cases, you can store and run instances from inside volumes.
For information, see the `Launch an instance from a volume`_ section
in the `OpenStack End User Guide`_.

.. _`Launch an instance from a volume`: https://docs.openstack.org/user-guide/cli-nova-launch-instance-from-volume.html
.. _`OpenStack End User Guide`: https://docs.openstack.org/user-guide/
@ -1,355 +0,0 @@
==================
Consistency groups
==================

Consistency group support is available in OpenStack Block Storage,
including support for creating snapshots of consistency groups. This
feature leverages storage-level consistency technology. It allows
snapshots of multiple volumes in the same consistency group to be taken
at the same point in time to ensure data consistency. The consistency
group operations can be performed using the Block Storage command line.

.. note::

   Only the Block Storage V2 API supports consistency groups. You can
   specify ``--os-volume-api-version 2`` when using the Block Storage
   command line for consistency group operations.

Before using consistency groups, make sure the Block Storage driver that
you are running has consistency group support by reading the Block
Storage manual or consulting the driver maintainer. There are a small
number of drivers that have implemented this feature. The default LVM
driver does not support consistency groups yet because the consistency
technology is not available at the storage level.

Before using consistency groups, you must change policies for the
consistency group APIs in the ``/etc/cinder/policy.json`` file.
By default, the consistency group APIs are disabled.
Enable them before running consistency group operations.

Here are the existing policy entries for consistency groups:

.. code-block:: json

   {
       "consistencygroup:create": "group:nobody",
       "consistencygroup:delete": "group:nobody",
       "consistencygroup:update": "group:nobody",
       "consistencygroup:get": "group:nobody",
       "consistencygroup:get_all": "group:nobody",
       "consistencygroup:create_cgsnapshot": "group:nobody",
       "consistencygroup:delete_cgsnapshot": "group:nobody",
       "consistencygroup:get_cgsnapshot": "group:nobody",
       "consistencygroup:get_all_cgsnapshots": "group:nobody"
   }

Remove ``group:nobody`` to enable these APIs:

.. code-block:: json

   {
       "consistencygroup:create": "",
       "consistencygroup:delete": "",
       "consistencygroup:update": "",
       "consistencygroup:get": "",
       "consistencygroup:get_all": "",
       "consistencygroup:create_cgsnapshot": "",
       "consistencygroup:delete_cgsnapshot": "",
       "consistencygroup:get_cgsnapshot": "",
       "consistencygroup:get_all_cgsnapshots": ""
   }

Restart the Block Storage API service after changing policies.
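For example, on a distribution where the service is named
``openstack-cinder-api`` (a sketch; as noted elsewhere in this guide, the
service is named ``cinder-api`` on Ubuntu and Debian, and the restart
command depends on your init system):

.. code-block:: console

   # systemctl restart openstack-cinder-api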
The following consistency group operations are supported:

- Create a consistency group, given volume types.

  .. note::

     A consistency group can support more than one volume type. The
     scheduler is responsible for finding a back end that can support
     all given volume types.

     A consistency group can only contain volumes hosted by the same
     back end.

     A consistency group is empty upon its creation. Volumes need to
     be created and added to it later.

- Show a consistency group.

- List consistency groups.

- Create a volume and add it to a consistency group, given volume type
  and consistency group id.

- Create a snapshot for a consistency group.

- Show a snapshot of a consistency group.

- List consistency group snapshots.

- Delete a snapshot of a consistency group.

- Delete a consistency group.

- Modify a consistency group.

- Create a consistency group from the snapshot of another consistency
  group.

- Create a consistency group from a source consistency group.

The following operations are not allowed if a volume is in a consistency
group:

- Volume migration.

- Volume retype.

- Volume deletion.

  .. note::

     A consistency group has to be deleted as a whole with all the
     volumes.

The following operations are not allowed if a volume snapshot is in a
consistency group snapshot:

- Volume snapshot deletion.

  .. note::

     A consistency group snapshot has to be deleted as a whole with
     all the volume snapshots.

The details of consistency group operations are shown below.

.. note::

   Currently, no OpenStack client command is available to run in
   place of the cinder consistency group creation commands. Use the
   cinder commands detailed in the following examples.

**Create a consistency group**:

.. code-block:: console

   cinder consisgroup-create
       [--name name]
       [--description description]
       [--availability-zone availability-zone]
       volume-types

.. note::

   The parameter ``volume-types`` is required. It can be a list of
   names or UUIDs of volume types separated by commas without spaces in
   between. For example, ``volumetype1,volumetype2,volumetype3``.

.. code-block:: console

   $ cinder consisgroup-create --name bronzeCG2 volume_type_1

   +-------------------+--------------------------------------+
   | Property          | Value                                |
   +-------------------+--------------------------------------+
   | availability_zone | nova                                 |
   | created_at        | 2014-12-29T12:59:08.000000           |
   | description       | None                                 |
   | id                | 1de80c27-3b2f-47a6-91a7-e867cbe36462 |
   | name              | bronzeCG2                            |
   | status            | creating                             |
   +-------------------+--------------------------------------+

**Show a consistency group**:

.. code-block:: console

   $ cinder consisgroup-show 1de80c27-3b2f-47a6-91a7-e867cbe36462

   +-------------------+--------------------------------------+
   | Property          | Value                                |
   +-------------------+--------------------------------------+
   | availability_zone | nova                                 |
   | created_at        | 2014-12-29T12:59:08.000000           |
   | description       | None                                 |
   | id                | 1de80c27-3b2f-47a6-91a7-e867cbe36462 |
   | name              | bronzeCG2                            |
   | status            | available                            |
   | volume_types      | volume_type_1                        |
   +-------------------+--------------------------------------+

**List consistency groups**:

.. code-block:: console

   $ cinder consisgroup-list

   +--------------------------------------+-----------+-----------+
   | ID                                   | Status    | Name      |
   +--------------------------------------+-----------+-----------+
   | 1de80c27-3b2f-47a6-91a7-e867cbe36462 | available | bronzeCG2 |
   | 3a2b3c42-b612-479a-91eb-1ed45b7f2ad5 | error     | bronzeCG  |
   +--------------------------------------+-----------+-----------+

**Create a volume and add it to a consistency group**:

.. note::

   When creating a volume and adding it to a consistency group, a
   volume type and a consistency group id must be provided. This is
   because a consistency group can support more than one volume type.

.. code-block:: console

   $ openstack volume create --type volume_type_1 --consistency-group \
     1de80c27-3b2f-47a6-91a7-e867cbe36462 --size 1 cgBronzeVol

   +---------------------------------------+--------------------------------------+
   | Field                                 | Value                                |
   +---------------------------------------+--------------------------------------+
   | attachments                           | []                                   |
   | availability_zone                     | nova                                 |
   | bootable                              | false                                |
   | consistencygroup_id                   | 1de80c27-3b2f-47a6-91a7-e867cbe36462 |
   | created_at                            | 2014-12-29T13:16:47.000000           |
   | description                           | None                                 |
   | encrypted                             | False                                |
   | id                                    | 5e6d1386-4592-489f-a56b-9394a81145fe |
   | metadata                              | {}                                   |
   | name                                  | cgBronzeVol                          |
   | os-vol-host-attr:host                 | server-1@backend-1#pool-1            |
   | os-vol-mig-status-attr:migstat        | None                                 |
   | os-vol-mig-status-attr:name_id        | None                                 |
   | os-vol-tenant-attr:tenant_id          | 1349b21da2a046d8aa5379f0ed447bed     |
   | os-volume-replication:driver_data     | None                                 |
   | os-volume-replication:extended_status | None                                 |
   | replication_status                    | disabled                             |
   | size                                  | 1                                    |
   | snapshot_id                           | None                                 |
   | source_volid                          | None                                 |
   | status                                | creating                             |
   | user_id                               | 93bdea12d3e04c4b86f9a9f172359859     |
   | volume_type                           | volume_type_1                        |
   +---------------------------------------+--------------------------------------+

**Create a snapshot for a consistency group**:

.. code-block:: console

   $ cinder cgsnapshot-create 1de80c27-3b2f-47a6-91a7-e867cbe36462

   +---------------------+--------------------------------------+
   | Property            | Value                                |
   +---------------------+--------------------------------------+
   | consistencygroup_id | 1de80c27-3b2f-47a6-91a7-e867cbe36462 |
   | created_at          | 2014-12-29T13:19:44.000000           |
   | description         | None                                 |
   | id                  | d4aff465-f50c-40b3-b088-83feb9b349e9 |
   | name                | None                                 |
   | status              | creating                             |
   +---------------------+--------------------------------------+

**Show a snapshot of a consistency group**:

.. code-block:: console

   $ cinder cgsnapshot-show d4aff465-f50c-40b3-b088-83feb9b349e9

**List consistency group snapshots**:

.. code-block:: console

   $ cinder cgsnapshot-list

   +--------------------------------------+-----------+------+
   | ID                                   | Status    | Name |
   +--------------------------------------+-----------+------+
   | 6d9dfb7d-079a-471e-b75a-6e9185ba0c38 | available | None |
   | aa129f4d-d37c-4b97-9e2d-7efffda29de0 | available | None |
   | bb5b5d82-f380-4a32-b469-3ba2e299712c | available | None |
   | d4aff465-f50c-40b3-b088-83feb9b349e9 | available | None |
   +--------------------------------------+-----------+------+

**Delete a snapshot of a consistency group**:

.. code-block:: console

   $ cinder cgsnapshot-delete d4aff465-f50c-40b3-b088-83feb9b349e9

**Delete a consistency group**:

.. note::

   The ``--force`` flag is needed when there are volumes in the consistency
   group:

.. code-block:: console

   $ cinder consisgroup-delete --force 1de80c27-3b2f-47a6-91a7-e867cbe36462

**Modify a consistency group**:

.. code-block:: console

   cinder consisgroup-update
       [--name NAME]
       [--description DESCRIPTION]
       [--add-volumes UUID1,UUID2,...]
       [--remove-volumes UUID3,UUID4,...]
       CG

The parameter ``CG`` is required. It can be a name or UUID of a consistency
group. UUID1,UUID2,... are UUIDs of one or more volumes to be added
to the consistency group, separated by commas. Default is None.
UUID3,UUID4,... are UUIDs of one or more volumes to be removed from
the consistency group, separated by commas. Default is None.

.. code-block:: console

   $ cinder consisgroup-update --name 'new name' \
     --description 'new description' \
     --add-volumes 0b3923f5-95a4-4596-a536-914c2c84e2db,1c02528b-3781-4e32-929c-618d81f52cf3 \
     --remove-volumes 8c0f6ae4-efb1-458f-a8fc-9da2afcc5fb1,a245423f-bb99-4f94-8c8c-02806f9246d8 \
     1de80c27-3b2f-47a6-91a7-e867cbe36462

**Create a consistency group from the snapshot of another consistency
group**:

.. code-block:: console

   cinder consisgroup-create-from-src
       [--cgsnapshot CGSNAPSHOT]
       [--name NAME]
       [--description DESCRIPTION]

The parameter ``CGSNAPSHOT`` is a name or UUID of a snapshot of a
consistency group:

.. code-block:: console

   $ cinder consisgroup-create-from-src \
     --cgsnapshot 6d9dfb7d-079a-471e-b75a-6e9185ba0c38 \
     --name 'new cg' --description 'new cg from cgsnapshot'

**Create a consistency group from a source consistency group**:

.. code-block:: console

   cinder consisgroup-create-from-src
       [--source-cg SOURCECG]
       [--name NAME]
       [--description DESCRIPTION]

The parameter ``SOURCECG`` is a name or UUID of a source
consistency group:

.. code-block:: console

   $ cinder consisgroup-create-from-src \
     --source-cg 6d9dfb7d-079a-471e-b75a-6e9185ba0c38 \
     --name 'new cg' --description 'new cloned cg'
@ -1,373 +0,0 @@
.. _filter_weigh_scheduler:

==========================================================
Configure and use driver filter and weighing for scheduler
==========================================================

OpenStack Block Storage enables you to choose a volume back end based on
back-end specific properties by using the DriverFilter and
GoodnessWeigher for the scheduler. The driver filter and weigher
scheduling can help ensure that the scheduler chooses the best back end
based on requested volume properties as well as various back-end
specific properties.

What is driver filter and weigher and when to use it
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The driver filter and weigher give you the ability to more finely
control how the OpenStack Block Storage scheduler chooses the best back
end to use when handling a volume request. One example scenario where
the driver filter and weigher are useful is a back end that utilizes
thin-provisioning. The default filters use the ``free capacity``
property to determine the best back end, but that is not always perfect.
If a back end has the ability to provide a more accurate back-end
specific value, you can use that as part of the weighing. Another example
is a back end with a hard limit of 1000 volumes, a maximum volume size
of 500 GB, and performance that degrades once 75% of the total space is
occupied: the driver filter and weigher provide a way for these limits
to be checked.

Enable driver filter and weighing
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To enable the driver filter, set the ``scheduler_default_filters`` option in
the ``cinder.conf`` file to ``DriverFilter`` or add it to the list if
other filters are already present.

To enable the goodness filter as a weigher, set the
``scheduler_default_weighers`` option in the ``cinder.conf`` file to
``GoodnessWeigher`` or add it to the list if other weighers are already
present.

You can choose to use the ``DriverFilter`` without the
``GoodnessWeigher`` or vice versa. The filter and weigher working
together, however, create the most benefit when helping the scheduler
choose an ideal back end.

.. important::

   The support for the ``DriverFilter`` and ``GoodnessWeigher`` is
   optional for back ends. If you are using a back end that does not
   support the filter and weigher functionality you may not get the
   full benefit.

Example ``cinder.conf`` configuration file:

.. code-block:: ini

   scheduler_default_filters = DriverFilter
   scheduler_default_weighers = GoodnessWeigher

.. note::

   It is useful to use the other filters and weighers available in
   OpenStack in combination with these custom ones. For example, the
   ``CapacityFilter`` and ``CapacityWeigher`` can be combined with
   these.

Defining your own filter and goodness functions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

You can define your own filter and goodness functions through the use of
various properties that OpenStack Block Storage has exposed. Properties
exposed include information about the volume request being made,
``volume_type`` settings, and back-end specific information about drivers.
All of these allow for a lot of control over how the ideal back end for
a volume request will be decided.

The ``filter_function`` option is a string defining an equation that
will determine whether a back end should be considered as a potential
candidate in the scheduler.

The ``goodness_function`` option is a string defining an equation that
will rate the quality of the potential host (0 to 100, 0 lowest, 100
highest).

.. important::

   The driver filter and weigher will use default values for filter and
   goodness functions for each back end if you do not define them
   yourself. If complete control is desired then a filter and goodness
   function should be defined for each of the back ends in
   the ``cinder.conf`` file.

Supported operations in filter and goodness functions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Below is a table of all the operations currently usable in custom filter
and goodness functions created by you:

+--------------------------------+-------------------------+
| Operations                     | Type                    |
+================================+=========================+
| +, -, \*, /, ^                 | standard math           |
+--------------------------------+-------------------------+
| not, and, or, &, \|, !         | logic                   |
+--------------------------------+-------------------------+
| >, >=, <, <=, ==, <>, !=       | equality                |
+--------------------------------+-------------------------+
| +, -                           | sign                    |
+--------------------------------+-------------------------+
| x ? a : b                      | ternary                 |
+--------------------------------+-------------------------+
| abs(x), max(x, y), min(x, y)   | math helper functions   |
+--------------------------------+-------------------------+

.. caution::

   Syntax errors in the filter or goodness strings you define are raised
   at volume request time.
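As an illustration of combining these operations, a goodness function might
mix the ternary operator with a math helper (a hypothetical sketch using the
``volume.size`` property described below):

.. code-block:: ini

   goodness_function = "(volume.size <= 100) ? max(25, 100 - volume.size) : 0"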
Available properties when creating custom functions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

There are various properties that can be used in either the
``filter_function`` or the ``goodness_function`` strings. The properties allow
access to volume info, qos settings, extra specs, and so on.

The following properties and their sub-properties are currently
available for use:

Host stats for a back end
-------------------------

host
    The host's name

volume\_backend\_name
    The volume back end name

vendor\_name
    The vendor name

driver\_version
    The driver version

storage\_protocol
    The storage protocol

QoS\_support
    Boolean signifying whether QoS is supported

total\_capacity\_gb
    The total capacity in GB

allocated\_capacity\_gb
    The allocated capacity in GB

reserved\_percentage
    The reserved storage percentage

Capabilities specific to a back end
-----------------------------------

These properties are determined by the specific back end
you are creating filter and goodness functions for. Some back ends
may not have any properties available here.

Requested volume properties
---------------------------

status
    Status for the requested volume

volume\_type\_id
    The volume type ID

display\_name
    The display name of the volume

volume\_metadata
    Any metadata the volume has

reservations
    Any reservations the volume has

user\_id
    The volume's user ID

attach\_status
    The attach status for the volume

display\_description
    The volume's display description

id
    The volume's ID

replication\_status
    The volume's replication status

snapshot\_id
    The volume's snapshot ID

encryption\_key\_id
    The volume's encryption key ID

source\_volid
    The source volume ID

volume\_admin\_metadata
    Any admin metadata for this volume

source\_replicaid
    The source replication ID

consistencygroup\_id
    The consistency group ID

size
    The size of the volume in GB

metadata
    General metadata

The property most used from here will most likely be the ``size``
sub-property.

Extra specs for the requested volume type
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

View the available properties for volume types by running:

.. code-block:: console

   $ cinder extra-specs-list

Current QoS specs for the requested volume type
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

View the available QoS specs for volume types by running:

.. code-block:: console

   $ openstack volume qos list

In order to access these properties in a custom string, use the following
format:

``<property>.<sub_property>``

For example, ``volume.size`` or ``capabilities.total_volumes``, both of
which appear in the examples below.

Driver filter and weigher usage examples
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Below are examples for using the filter and weigher separately,
together, and using driver-specific properties.

Example ``cinder.conf`` file configuration for customizing the filter
function:

.. code-block:: ini

   [default]
   scheduler_default_filters = DriverFilter
   enabled_backends = lvm-1, lvm-2

   [lvm-1]
   volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
   volume_backend_name = sample_LVM01
   filter_function = "volume.size < 10"

   [lvm-2]
   volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
   volume_backend_name = sample_LVM02
   filter_function = "volume.size >= 10"

The above example will filter volumes to different back ends depending
on the size of the requested volume. Default OpenStack Block Storage
scheduler weighing is done. Volumes with a size less than 10 GB are sent
to lvm-1 and volumes with a size greater than or equal to 10 GB are sent
to lvm-2.

Example ``cinder.conf`` file configuration for customizing the goodness
function:

.. code-block:: ini

   [default]
   scheduler_default_weighers = GoodnessWeigher
   enabled_backends = lvm-1, lvm-2

   [lvm-1]
   volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
   volume_backend_name = sample_LVM01
   goodness_function = "(volume.size < 5) ? 100 : 50"

   [lvm-2]
   volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
   volume_backend_name = sample_LVM02
   goodness_function = "(volume.size >= 5) ? 100 : 25"

The above example will determine the goodness rating of a back end based
on the requested volume's size. Default OpenStack Block Storage
scheduler filtering is done. The example shows how the ternary if
statement can be used in a filter or goodness function. If a requested
volume is of size 10 GB then lvm-1 is rated as 50 and lvm-2 is rated as
100. In this case lvm-2 wins. If a requested volume is of size 3 GB then
lvm-1 is rated 100 and lvm-2 is rated 25. In this case lvm-1 would win.

Example ``cinder.conf`` file configuration for customizing both the
filter and goodness functions:

.. code-block:: ini

   [default]
   scheduler_default_filters = DriverFilter
   scheduler_default_weighers = GoodnessWeigher
   enabled_backends = lvm-1, lvm-2

   [lvm-1]
   volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
   volume_backend_name = sample_LVM01
   filter_function = "stats.total_capacity_gb < 500"
   goodness_function = "(volume.size < 25) ? 100 : 50"

   [lvm-2]
   volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
   volume_backend_name = sample_LVM02
   filter_function = "stats.total_capacity_gb >= 500"
   goodness_function = "(volume.size >= 25) ? 100 : 75"

The above example combines the techniques from the first two examples.
The best back end is now decided based on the total capacity of the
back end and the requested volume's size.

Example ``cinder.conf`` file configuration for accessing driver specific
properties:

.. code-block:: ini

   [default]
   scheduler_default_filters = DriverFilter
   scheduler_default_weighers = GoodnessWeigher
   enabled_backends = lvm-1,lvm-2,lvm-3

   [lvm-1]
   volume_group = stack-volumes-lvmdriver-1
   volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
   volume_backend_name = lvmdriver-1
   filter_function = "volume.size < 5"
   goodness_function = "(capabilities.total_volumes < 3) ? 100 : 50"

   [lvm-2]
   volume_group = stack-volumes-lvmdriver-2
   volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
   volume_backend_name = lvmdriver-2
   filter_function = "volume.size < 5"
   goodness_function = "(capabilities.total_volumes < 8) ? 100 : 50"

   [lvm-3]
   volume_group = stack-volumes-lvmdriver-3
   volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
   volume_backend_name = lvmdriver-3
   goodness_function = "55"

The above is an example of how back-end specific properties can be used
in the filter and goodness functions. In this example the LVM driver's
``total_volumes`` capability is being used to determine which host gets
used during a volume request. In the above example, lvm-1 and lvm-2 will
handle volume requests for all volumes with a size less than 5 GB. The
lvm-1 host will have priority until it contains three or more volumes.
After that, lvm-2 will have priority until it contains eight or more
volumes. lvm-3 will collect all volumes with a size of 5 GB or more, as
well as all volumes once lvm-1 and lvm-2 lose priority.
@ -1,294 +0,0 @@
.. _get_capabilities:

================
Get capabilities
================

When an administrator configures ``volume type`` and ``extra specs`` of
storage on the back end, the administrator has to read the right
documentation that corresponds to the version of the storage back end.
Deep knowledge of storage is also required.

OpenStack Block Storage enables administrators to configure ``volume type``
and ``extra specs`` without specific knowledge of the storage back end.

.. note::

   * ``Volume Type``: A group of volume policies.
   * ``Extra Specs``: The definition of a volume type. This is a group of
     policies, for example, provision type and QoS, that will be used to
     define a volume at creation time.
   * ``Capabilities``: What the current deployed back end in Cinder is able
     to do. These correspond to extra specs.

Usage of cinder client
~~~~~~~~~~~~~~~~~~~~~~

When an administrator wants to define new volume types for their
OpenStack cloud, the administrator would fetch a list of ``capabilities``
for a particular back end using the cinder client.

First, get a list of the services:

.. code-block:: console

   $ openstack volume service list
   +------------------+-------------------+------+---------+-------+----------------------------+
   | Binary           | Host              | Zone | Status  | State | Updated At                 |
   +------------------+-------------------+------+---------+-------+----------------------------+
   | cinder-scheduler | controller        | nova | enabled | up    | 2016-10-24T13:53:35.000000 |
   | cinder-volume    | block1@ABC-driver | nova | enabled | up    | 2016-10-24T13:53:35.000000 |
   +------------------+-------------------+------+---------+-------+----------------------------+

With one of the listed hosts, pass that to ``get-capabilities``. The
administrator can then obtain volume stats and back end ``capabilities``
as listed below.

.. code-block:: console

   $ cinder get-capabilities block1@ABC-driver
   +---------------------+----------------------------------------------+
   | Volume stats        | Value                                        |
   +---------------------+----------------------------------------------+
   | description         | None                                         |
   | display_name        | Capabilities of Cinder Vendor ABC driver     |
   | driver_version      | 2.0.0                                        |
   | namespace           | OS::Storage::Capabilities::block1@ABC-driver |
   | pool_name           | None                                         |
   | replication_targets | []                                           |
   | storage_protocol    | iSCSI                                        |
   | vendor_name         | Vendor ABC                                   |
   | visibility          | pool                                         |
   | volume_backend_name | ABC-driver                                   |
   +---------------------+----------------------------------------------+
   +----------------------+-----------------------------------------------------+
   | Backend properties   | Value                                               |
   +----------------------+-----------------------------------------------------+
   | compression          | {u'type':u'boolean', u'title':u'Compression', ...}  |
   | ABC:compression_type | {u'enum':u'['lossy', 'lossless', 'special']', ...}  |
   | qos                  | {u'type':u'boolean', u'title':u'QoS', ...}          |
   | replication          | {u'type':u'boolean', u'title':u'Replication', ...}  |
   | thin_provisioning    | {u'type':u'boolean', u'title':u'Thin Provisioning'} |
   | ABC:minIOPS          | {u'type':u'integer', u'title':u'Minimum IOPS QoS',} |
   | ABC:maxIOPS          | {u'type':u'integer', u'title':u'Maximum IOPS QoS',} |
   | ABC:burstIOPS        | {u'type':u'integer', u'title':u'Burst IOPS QoS',..} |
   +----------------------+-----------------------------------------------------+

Disable a service
~~~~~~~~~~~~~~~~~

When an administrator wants to disable a service, identify the binary
and the host of the service. Use the :command:`openstack volume service set`
command combined with the binary and host to disable the service:

#. Determine the binary and host of the service you want to disable.

   .. code-block:: console

      $ openstack volume service list
      +------------------+----------------------+------+---------+-------+----------------------------+
      | Binary           | Host                 | Zone | Status  | State | Updated At                 |
      +------------------+----------------------+------+---------+-------+----------------------------+
      | cinder-scheduler | devstack             | nova | enabled | up    | 2016-10-24T13:53:35.000000 |
      | cinder-volume    | devstack@lvmdriver-1 | nova | enabled | up    | 2016-10-24T13:53:35.000000 |
      +------------------+----------------------+------+---------+-------+----------------------------+

#. Disable the service using the binary and host name, placing the host
   before the binary name (see the example after this list).

   .. code-block:: console

      $ openstack volume service set --disable HOST_NAME BINARY_NAME

#. Remove the service from the database.

   .. code-block:: console

      $ cinder-manage service remove BINARY_NAME HOST_NAME
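For example, to disable the ``cinder-volume`` service from the listing above
(a sketch; substitute your own host and binary):

.. code-block:: console

   $ openstack volume service set --disable devstack@lvmdriver-1 cinder-volume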
Usage of REST API
~~~~~~~~~~~~~~~~~

A new endpoint to get the ``capabilities`` list for a specific storage back
end is also available. For more details, refer to the Block Storage API
reference.

API request:

.. code-block:: console

   GET /v2/{tenant_id}/capabilities/{hostname}
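For example, with an authentication token in hand you could issue the request
directly (a sketch; the endpoint host and port are assumptions based on a
default Block Storage API deployment):

.. code-block:: console

   $ curl -s -H "X-Auth-Token: $TOKEN" \
     http://controller:8776/v2/$PROJECT_ID/capabilities/block1@ABC-driver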
|
||||
Example of return value:
|
||||
|
||||
.. code-block:: json
|
||||
|
||||
{
|
||||
"namespace": "OS::Storage::Capabilities::block1@ABC-driver",
|
||||
"volume_backend_name": "ABC-driver",
|
||||
"pool_name": "pool",
|
||||
"driver_version": "2.0.0",
|
||||
"storage_protocol": "iSCSI",
|
||||
"display_name": "Capabilities of Cinder Vendor ABC driver",
|
||||
"description": "None",
|
||||
"visibility": "public",
|
||||
"properties": {
|
||||
"thin_provisioning": {
|
||||
"title": "Thin Provisioning",
|
||||
"description": "Sets thin provisioning.",
|
||||
"type": "boolean"
|
||||
},
|
||||
"compression": {
|
||||
"title": "Compression",
|
||||
"description": "Enables compression.",
|
||||
"type": "boolean"
|
||||
},
|
||||
"ABC:compression_type": {
|
||||
"title": "Compression type",
|
||||
"description": "Specifies compression type.",
|
||||
"type": "string",
|
||||
"enum": [
|
||||
"lossy", "lossless", "special"
|
||||
]
|
||||
},
|
||||
"replication": {
|
||||
"title": "Replication",
|
||||
"description": "Enables replication.",
|
||||
"type": "boolean"
|
||||
},
|
||||
"qos": {
|
||||
"title": "QoS",
|
||||
"description": "Enables QoS.",
|
||||
"type": "boolean"
|
||||
},
|
||||
"ABC:minIOPS": {
|
||||
"title": "Minimum IOPS QoS",
|
||||
"description": "Sets minimum IOPS if QoS is enabled.",
|
||||
"type": "integer"
|
||||
},
|
||||
"ABC:maxIOPS": {
|
||||
"title": "Maximum IOPS QoS",
|
||||
"description": "Sets maximum IOPS if QoS is enabled.",
|
||||
"type": "integer"
|
||||
},
|
||||
"ABC:burstIOPS": {
|
||||
"title": "Burst IOPS QoS",
|
||||
"description": "Sets burst IOPS if QoS is enabled.",
|
||||
"type": "integer"
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
Usage of volume type access extension
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
Some volume types should be restricted. Examples include test volume types
used to evaluate a new technology, or ultra-high-performance volumes
(for special cases) that most users should not be able to select.
An administrator or operator can define private volume types for these
cases using the cinder client.

|
||||
Volume type access extension adds the ability to manage volume type access.
|
||||
Volume types are public by default. Private volume types can be created by
|
||||
setting the ``--private`` parameter at creation time. Access to a
|
||||
private volume type can be controlled by adding or removing a project from it.
|
||||
Private volume types without projects are visible only to users with the
admin role or context.
|
||||
|
||||
Create a public volume type by setting the ``--public`` parameter:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack volume type create vol_Type1 --description test1 --public
|
||||
+-------------+--------------------------------------+
|
||||
| Field | Value |
|
||||
+-------------+--------------------------------------+
|
||||
| description | test1 |
|
||||
| id | b7dbed9e-de78-49f8-a840-651ae7308592 |
|
||||
| is_public | True |
|
||||
| name | vol_Type1 |
|
||||
+-------------+--------------------------------------+
|
||||
|
||||
Create a private volume type by setting the ``--private`` parameter:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack volume type create vol_Type2 --description test2 --private
|
||||
+-------------+--------------------------------------+
|
||||
| Field | Value |
|
||||
+-------------+--------------------------------------+
|
||||
| description | test2 |
|
||||
| id | 154baa73-d2c4-462f-8258-a2df251b0d39 |
|
||||
| is_public | False |
|
||||
| name | vol_Type2 |
|
||||
+-------------+--------------------------------------+
|
||||
|
||||
Get a list of the volume types:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack volume type list
|
||||
+--------------------------------------+-------------+
|
||||
| ID | Name |
|
||||
+--------------------------------------+-------------+
|
||||
| 0a948c84-bad5-4fba-88a2-c062006e4f6b | vol_Type1 |
|
||||
| 87e5be6f-9491-4ea5-9906-9ac56494bb91 | lvmdriver-1 |
|
||||
| fd508846-213f-4a07-aaf2-40518fb9a23f | vol_Type2 |
|
||||
+--------------------------------------+-------------+
|
||||
|
||||
Get a list of the projects:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack project list
|
||||
+----------------------------------+--------------------+
|
||||
| ID | Name |
|
||||
+----------------------------------+--------------------+
|
||||
| 4105ead90a854100ab6b121266707f2b | alt_demo |
|
||||
| 4a22a545cedd4fcfa9836eb75e558277 | admin |
|
||||
| 71f9cdb1a3ab4b8e8d07d347a2e146bb | service |
|
||||
| c4860af62ffe465e99ed1bc08ef6082e | demo |
|
||||
| e4b648ba5108415cb9e75bff65fa8068 | invisible_to_admin |
|
||||
+----------------------------------+--------------------+
|
||||
|
||||
Add volume type access for the given demo project, using its project ID:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack volume type set --project c4860af62ffe465e99ed1bc08ef6082e \
|
||||
vol_Type2
|
||||
|
||||
List the access information about the given volume type:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack volume type show vol_Type2
|
||||
+--------------------+--------------------------------------+
|
||||
| Field | Value |
|
||||
+--------------------+--------------------------------------+
|
||||
| access_project_ids | c4860af62ffe465e99ed1bc08ef6082e |
|
||||
| description | |
|
||||
| id | fd508846-213f-4a07-aaf2-40518fb9a23f |
|
||||
| is_public | False |
|
||||
| name | vol_Type2 |
|
||||
| properties | |
|
||||
| qos_specs_id | None |
|
||||
+--------------------+--------------------------------------+
|
||||
|
||||
Remove volume type access for the given project:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack volume type unset --project c4860af62ffe465e99ed1bc08ef6082e \
|
||||
vol_Type2
|
||||
$ openstack volume type show vol_Type2
|
||||
+--------------------+--------------------------------------+
|
||||
| Field | Value |
|
||||
+--------------------+--------------------------------------+
|
||||
| access_project_ids | |
|
||||
| description | |
|
||||
| id | fd508846-213f-4a07-aaf2-40518fb9a23f |
|
||||
| is_public | False |
|
||||
| name | vol_Type2 |
|
||||
| properties | |
|
||||
| qos_specs_id | None |
|
||||
+--------------------+--------------------------------------+
|
|
||||
==============================
|
||||
Configure a GlusterFS back end
|
||||
==============================
|
||||
|
||||
This section explains how to configure OpenStack Block Storage to use
|
||||
GlusterFS as a back end. You must be able to access the GlusterFS shares
|
||||
from the server that hosts the ``cinder`` volume service.
|
||||
|
||||
.. note::
|
||||
|
||||
The GlusterFS volume driver, which was deprecated in the Newton release,
|
||||
has been removed in the Ocata release.
|
||||
|
||||
.. note::
|
||||
|
||||
The cinder volume service is named ``openstack-cinder-volume`` on the
|
||||
following distributions:
|
||||
|
||||
* CentOS
|
||||
|
||||
* Fedora
|
||||
|
||||
* openSUSE
|
||||
|
||||
* Red Hat Enterprise Linux
|
||||
|
||||
* SUSE Linux Enterprise
|
||||
|
||||
In Ubuntu and Debian distributions, the ``cinder`` volume service is
|
||||
named ``cinder-volume``.
|
||||
|
||||
Mounting GlusterFS volumes requires utilities and libraries from the
|
||||
``glusterfs-fuse`` package. This package must be installed on all systems
|
||||
that will access volumes backed by GlusterFS.
|
||||
|
||||
.. note::
|
||||
|
||||
The utilities and libraries required for mounting GlusterFS volumes on
|
||||
Ubuntu and Debian distributions are available from the ``glusterfs-client``
|
||||
package instead.
|
||||
|
||||
For information on how to install and configure GlusterFS, refer to the
|
||||
`GlusterFS Documentation`_ page.
|
||||
|
||||
**Configure GlusterFS for OpenStack Block Storage**
|
||||
|
||||
The GlusterFS server must also be configured accordingly in order to allow
|
||||
OpenStack Block Storage to use GlusterFS shares:
|
||||
|
||||
#. Log in as ``root`` to the GlusterFS server.
|
||||
|
||||
#. Set each Gluster volume to use the same UID and GID as the ``cinder`` user:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
# gluster volume set VOL_NAME storage.owner-uid CINDER_UID
|
||||
# gluster volume set VOL_NAME storage.owner-gid CINDER_GID
|
||||
|
||||
|
||||
Where:
|
||||
|
||||
* VOL_NAME is the Gluster volume name.
|
||||
|
||||
* CINDER_UID is the UID of the ``cinder`` user.
|
||||
|
||||
* CINDER_GID is the GID of the ``cinder`` user.
|
||||
|
||||
.. note::
|
||||
|
||||
   The default UID and GID of the ``cinder`` user are 165 on
   most distributions.
|
||||
|
||||
#. Configure each Gluster volume to accept ``libgfapi`` connections.
|
||||
To do this, set each Gluster volume to allow insecure ports:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
# gluster volume set VOL_NAME server.allow-insecure on
|
||||
|
||||
#. Enable client connections from unprivileged ports. To do this,
|
||||
add the following line to ``/etc/glusterfs/glusterd.vol``:
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
option rpc-auth-allow-insecure on
|
||||
|
||||
#. Restart the ``glusterd`` service:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
# service glusterd restart
|
||||
|
||||
|
||||
**Configure Block Storage to use a GlusterFS back end**
|
||||
|
||||
After you configure the GlusterFS service, complete these steps:
|
||||
|
||||
#. Log in as ``root`` to the system hosting the Block Storage service.
|
||||
|
||||
#. Create a text file named ``glusterfs`` in the ``/etc/cinder/`` directory.
|
||||
|
||||
#. Add an entry to ``/etc/cinder/glusterfs`` for each GlusterFS
|
||||
share that OpenStack Block Storage should use for back end storage.
|
||||
Each entry should be a separate line, and should use the following
|
||||
format:
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
HOST:/VOL_NAME
|
||||
|
||||
|
||||
Where:
|
||||
|
||||
* HOST is the IP address or host name of the GlusterFS server.
|
||||
|
||||
* VOL_NAME is the name of an existing and accessible volume on the
|
||||
GlusterFS server.
|
||||
|
||||
|
|
||||
|
||||
Optionally, if your environment requires additional mount options for
|
||||
a share, you can add them to the share's entry:
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
HOST:/VOL_NAME -o OPTIONS
|
||||
|
||||
Replace OPTIONS with a comma-separated list of mount options.
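
   A complete ``/etc/cinder/glusterfs`` file might look like the following;
   the addresses, volume names, and mount options are illustrative:

   .. code-block:: none

      192.0.2.10:/cinder-volumes
      192.0.2.11:/cinder-volumes -o backupvolfile-server=192.0.2.12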
|
||||
|
||||
#. Set ``/etc/cinder/glusterfs`` to be owned by the root user
|
||||
and the ``cinder`` group:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
# chown root:cinder /etc/cinder/glusterfs
|
||||
|
||||
#. Set ``/etc/cinder/glusterfs`` to be readable by members of
|
||||
the ``cinder`` group:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
# chmod 0640 /etc/cinder/glusterfs
|
||||
|
||||
#. Configure OpenStack Block Storage to use the ``/etc/cinder/glusterfs``
|
||||
file created earlier. To do so, open the ``/etc/cinder/cinder.conf``
|
||||
configuration file and set the ``glusterfs_shares_config`` configuration
|
||||
key to ``/etc/cinder/glusterfs``.
|
||||
|
||||
On distributions that include openstack-config, you can configure this
|
||||
by running the following command instead:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
# openstack-config --set /etc/cinder/cinder.conf \
|
||||
DEFAULT glusterfs_shares_config /etc/cinder/glusterfs
|
||||
|
||||
The following distributions include ``openstack-config``:
|
||||
|
||||
* CentOS
|
||||
|
||||
* Fedora
|
||||
|
||||
* openSUSE
|
||||
|
||||
* Red Hat Enterprise Linux
|
||||
|
||||
* SUSE Linux Enterprise
|
||||
|
||||
|
|
||||
|
||||
#. Configure OpenStack Block Storage to use the correct volume driver,
|
||||
namely ``cinder.volume.drivers.glusterfs.GlusterfsDriver``. To do so,
|
||||
open the ``/etc/cinder/cinder.conf`` configuration file and set
|
||||
the ``volume_driver`` configuration key to
|
||||
``cinder.volume.drivers.glusterfs.GlusterfsDriver``.
|
||||
|
||||
On distributions that include ``openstack-config``, you can configure
|
||||
this by running the following command instead:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
# openstack-config --set /etc/cinder/cinder.conf \
|
||||
DEFAULT volume_driver cinder.volume.drivers.glusterfs.GlusterfsDriver
|
||||
|
||||
#. You can now restart the service to apply the configuration.
|
||||
|
||||
|
||||
OpenStack Block Storage is now configured to use a GlusterFS back end.
|
||||
|
||||
.. warning::
|
||||
|
||||
If a client host has SELinux enabled, the ``virt_use_fusefs`` boolean
|
||||
should also be enabled if the host requires access to GlusterFS volumes
|
||||
on an instance. To enable this Boolean, run the following command as
|
||||
the ``root`` user:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
# setsebool -P virt_use_fusefs on
|
||||
|
||||
This command also makes the Boolean persistent across reboots. Run
|
||||
this command on all client hosts that require access to GlusterFS
|
||||
volumes on an instance. This includes all compute nodes.
|
||||
|
||||
.. Links
|
||||
.. _`GlusterFS Documentation`: https://gluster.readthedocs.io/en/latest/
|
|
||||
.. _glusterfs_removal:
|
||||
|
||||
===============================================
|
||||
Gracefully remove a GlusterFS volume from usage
|
||||
===============================================
|
||||
|
||||
Configuring the ``cinder`` volume service to use GlusterFS involves creating a
|
||||
shares file (for example, ``/etc/cinder/glusterfs``). This shares file
|
||||
lists each GlusterFS volume (with its corresponding storage server) that
|
||||
the ``cinder`` volume service can use for back end storage.
|
||||
|
||||
To remove a GlusterFS volume from usage as a back end, delete the volume's
|
||||
corresponding entry from the shares file. After doing so, restart the Block
|
||||
Storage services.
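
For example, assuming the share ``192.0.2.10:/myvol`` is listed in
``/etc/cinder/glusterfs`` (the address and volume name are illustrative, and
the service name varies by distribution):

.. code-block:: console

   # sed -i '\|192.0.2.10:/myvol|d' /etc/cinder/glusterfs
   # service cinder-volume restart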
|
||||
|
||||
Restarting the Block Storage services will prevent the ``cinder`` volume
|
||||
service from exporting the deleted GlusterFS volume. This will prevent any
|
||||
instances from mounting the volume from that point onwards.
|
||||
|
||||
However, the removed GlusterFS volume might still be mounted on an instance
|
||||
at this point. Typically, this is the case when the volume was already
|
||||
mounted while its entry was deleted from the shares file.
|
||||
Whenever this occurs, you will have to unmount the volume as normal after
|
||||
the Block Storage services are restarted.
|
|
||||
=====================
|
||||
Generic volume groups
|
||||
=====================
|
||||
|
||||
Generic volume group support is available in OpenStack Block Storage (cinder)
|
||||
since the Newton release. The support is added for creating group types and
|
||||
group specs, creating groups of volumes, and creating snapshots of groups.
|
||||
The group operations can be performed using the Block Storage command line.
|
||||
|
||||
A group type is a type for a group just like a volume type for a volume.
|
||||
A group type can also have associated group specs similar to extra specs
|
||||
for a volume type.
|
||||
|
||||
In cinder, there is a group construct called `consistency group`. Consistency
|
||||
groups only support consistent group snapshots and only a small number of
|
||||
drivers can support it. The following is a list of drivers that support
|
||||
consistency groups and the release when the support was added:
|
||||
|
||||
- Juno: EMC VNX
|
||||
|
||||
- Kilo: EMC VMAX, IBM (GPFS, Storwize, SVC, and XIV), ProphetStor, Pure
|
||||
|
||||
- Liberty: Dell Storage Center, EMC XtremIO, HPE 3Par and LeftHand
|
||||
|
||||
- Mitaka: EMC ScaleIO, NetApp Data ONTAP and E-Series, SolidFire
|
||||
|
||||
- Newton: CoprHD, FalconStor, Huawei
|
||||
|
||||
The consistency group construct cannot be extended easily to serve other
purposes. A tenant may want to put volumes used in the same application
together in a group so that it is easier to manage them together, and this
group of volumes may or may not support consistent group snapshots.
The generic volume group is introduced to solve this problem.
|
||||
|
||||
There is a plan to migrate existing consistency group operations to use
|
||||
generic volume group operations in future releases. More information can be
|
||||
found in `Cinder specs <https://github.com/openstack/cinder-specs/blob/master/specs/newton/group-snapshots.rst>`_.
|
||||
|
||||
.. note::
|
||||
|
||||
Only Block Storage V3 API supports groups. You can
|
||||
specify ``--os-volume-api-version 3.x`` when using the `cinder`
|
||||
command line for group operations where `3.x` contains a microversion value
|
||||
for that command. The generic volume group feature was completed in several
|
||||
patches. As a result, the minimum required microversion is different for
|
||||
group types, groups, and group snapshots APIs.
|
||||
|
||||
The following group type operations are supported:
|
||||
|
||||
- Create a group type.
|
||||
|
||||
- Delete a group type.
|
||||
|
||||
- Set group spec for a group type.
|
||||
|
||||
- Unset group spec for a group type.
|
||||
|
||||
- List group types.
|
||||
|
||||
- Show group type details.
|
||||
|
||||
- Update a group type.
|
||||
|
||||
- List group types and group specs.
|
||||
|
||||
The following group and group snapshot operations are supported:
|
||||
|
||||
- Create a group, given group type and volume types.
|
||||
|
||||
.. note::
|
||||
|
||||
A group must have one group type. A group can support more than one
|
||||
volume type. The scheduler is responsible for finding a back end that
|
||||
can support the given group type and volume types.
|
||||
|
||||
A group can only contain volumes hosted by the same back end.
|
||||
|
||||
A group is empty upon its creation. Volumes need to be created and added
|
||||
to it later.
|
||||
|
||||
- Show a group.
|
||||
|
||||
- List groups.
|
||||
|
||||
- Delete a group.
|
||||
|
||||
- Modify a group.
|
||||
|
||||
- Create a volume and add it to a group.
|
||||
|
||||
- Create a snapshot for a group.
|
||||
|
||||
- Show a group snapshot.
|
||||
|
||||
- List group snapshots.
|
||||
|
||||
- Delete a group snapshot.
|
||||
|
||||
- Create a group from a group snapshot.
|
||||
|
||||
- Create a group from a source group.
|
||||
|
||||
The following operations are not allowed if a volume is in a group:
|
||||
|
||||
- Volume migration.
|
||||
|
||||
- Volume retype.
|
||||
|
||||
- Volume deletion.
|
||||
|
||||
.. note::
|
||||
|
||||
A group has to be deleted as a whole with all the volumes.
|
||||
|
||||
The following operations are not allowed if a volume snapshot is in a
|
||||
group snapshot:
|
||||
|
||||
- Volume snapshot deletion.
|
||||
|
||||
.. note::
|
||||
|
||||
A group snapshot has to be deleted as a whole with all the volume
|
||||
snapshots.
|
||||
|
||||
The details of group type operations are shown below. The minimum
microversion to support group types and group specs is 3.11:
|
||||
|
||||
**Create a group type**:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
cinder --os-volume-api-version 3.11 group-type-create
|
||||
[--description DESCRIPTION]
|
||||
[--is-public IS_PUBLIC]
|
||||
NAME
|
||||
|
||||
.. note::
|
||||
|
||||
The parameter ``NAME`` is required. The
|
||||
``--is-public IS_PUBLIC`` determines whether the group type is
|
||||
accessible to the public. It is ``True`` by default. By default, the
|
||||
policy on privileges for creating a group type is admin-only.
|
||||
|
||||
**Show a group type**:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
cinder --os-volume-api-version 3.11 group-type-show
|
||||
GROUP_TYPE
|
||||
|
||||
.. note::
|
||||
|
||||
The parameter ``GROUP_TYPE`` is the name or UUID of a group type.
|
||||
|
||||
**List group types**:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
cinder --os-volume-api-version 3.11 group-type-list
|
||||
|
||||
.. note::
|
||||
|
||||
Only admin can see private group types.
|
||||
|
||||
**Update a group type**:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
cinder --os-volume-api-version 3.11 group-type-update
|
||||
[--name NAME]
|
||||
[--description DESCRIPTION]
|
||||
[--is-public IS_PUBLIC]
|
||||
GROUP_TYPE_ID
|
||||
|
||||
.. note::
|
||||
|
||||
The parameter ``GROUP_TYPE_ID`` is the UUID of a group type. By default,
|
||||
the policy on privileges for updating a group type is admin-only.
|
||||
|
||||
**Delete group type or types**:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
cinder --os-volume-api-version 3.11 group-type-delete
|
||||
GROUP_TYPE [GROUP_TYPE ...]
|
||||
|
||||
.. note::
|
||||
|
||||
The parameter ``GROUP_TYPE`` is name or UUID of the group type or
|
||||
group types to be deleted. By default, the policy on privileges for
|
||||
deleting a group type is admin-only.
|
||||
|
||||
**Set or unset group spec for a group type**:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
cinder --os-volume-api-version 3.11 group-type-key
|
||||
GROUP_TYPE ACTION KEY=VALUE [KEY=VALUE ...]
|
||||
|
||||
.. note::
|
||||
|
||||
The parameter ``GROUP_TYPE`` is the name or UUID of a group type. Valid
|
||||
values for the parameter ``ACTION`` are ``set`` or ``unset``.
|
||||
``KEY=VALUE`` is the group specs key and value pair to set or unset.
|
||||
For unset, specify only the key. By default, the policy on privileges
|
||||
for setting or unsetting group specs key is admin-only.
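
For example, a group spec commonly used to mark a group type as supporting
consistent group snapshots (the group type name is illustrative):

.. code-block:: console

   $ cinder --os-volume-api-version 3.11 group-type-key my_group_type \
     set consistent_group_snapshot_enabled="<is> True"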
|
||||
|
||||
**List group types and group specs**:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
cinder --os-volume-api-version 3.11 group-specs-list
|
||||
|
||||
.. note::
|
||||
|
||||
By default, the policy on privileges for seeing group specs is admin-only.
|
||||
|
||||
The details of group operations are shown below. The minimum
microversion to support group operations is 3.13.
|
||||
|
||||
**Create a group**:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
cinder --os-volume-api-version 3.13 group-create
|
||||
[--name NAME]
|
||||
[--description DESCRIPTION]
|
||||
[--availability-zone AVAILABILITY_ZONE]
|
||||
GROUP_TYPE VOLUME_TYPES
|
||||
|
||||
.. note::
|
||||
|
||||
The parameters ``GROUP_TYPE`` and ``VOLUME_TYPES`` are required.
|
||||
``GROUP_TYPE`` is the name or UUID of a group type. ``VOLUME_TYPES``
|
||||
can be a list of names or UUIDs of volume types separated by commas
|
||||
without spaces in between. For example,
|
||||
   ``volumetype1,volumetype2,volumetype3``.
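
For example, to create a group from a group type and two volume types
(all names are illustrative):

.. code-block:: console

   $ cinder --os-volume-api-version 3.13 group-create \
     --name my_group my_group_type volumetype1,volumetype2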
|
||||
|
||||
**Show a group**:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
cinder --os-volume-api-version 3.13 group-show
|
||||
GROUP
|
||||
|
||||
.. note::
|
||||
|
||||
The parameter ``GROUP`` is the name or UUID of a group.
|
||||
|
||||
**List groups**:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
cinder --os-volume-api-version 3.13 group-list
|
||||
[--all-tenants [<0|1>]]
|
||||
|
||||
.. note::
|
||||
|
||||
``--all-tenants`` specifies whether to list groups for all tenants.
|
||||
Only admin can use this option.
|
||||
|
||||
**Create a volume and add it to a group**:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
cinder --os-volume-api-version 3.13 create
|
||||
--volume-type VOLUME_TYPE
|
||||
--group-id GROUP_ID SIZE
|
||||
|
||||
.. note::
|
||||
|
||||
When creating a volume and adding it to a group, the parameters
|
||||
``VOLUME_TYPE`` and ``GROUP_ID`` must be provided. This is because a group
|
||||
can support more than one volume type.
|
||||
|
||||
**Delete a group**:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
cinder --os-volume-api-version 3.13 group-delete
|
||||
[--delete-volumes]
|
||||
GROUP [GROUP ...]
|
||||
|
||||
.. note::
|
||||
|
||||
``--delete-volumes`` allows or disallows groups to be deleted
|
||||
if they are not empty. If the group is empty, it can be deleted without
|
||||
``--delete-volumes``. If the group is not empty, the flag is
|
||||
required for it to be deleted. When the flag is specified, the group
|
||||
and all volumes in the group will be deleted.
|
||||
|
||||
**Modify a group**:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
cinder --os-volume-api-version 3.13 group-update
|
||||
[--name NAME]
|
||||
[--description DESCRIPTION]
|
||||
[--add-volumes UUID1,UUID2,......]
|
||||
[--remove-volumes UUID3,UUID4,......]
|
||||
GROUP
|
||||
|
||||
.. note::
|
||||
|
||||
The parameter ``UUID1,UUID2,......`` is the UUID of one or more volumes
|
||||
to be added to the group, separated by commas. Similarly the parameter
|
||||
``UUID3,UUID4,......`` is the UUID of one or more volumes to be removed
|
||||
from the group, separated by commas.
|
||||
|
||||
The details of group snapshot operations are shown below. The minimum
microversion to support group snapshot operations is 3.14.
|
||||
|
||||
**Create a snapshot for a group**:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
cinder --os-volume-api-version 3.14 group-snapshot-create
|
||||
[--name NAME]
|
||||
[--description DESCRIPTION]
|
||||
GROUP
|
||||
|
||||
.. note::
|
||||
|
||||
The parameter ``GROUP`` is the name or UUID of a group.
|
||||
|
||||
**Show a group snapshot**:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
cinder --os-volume-api-version 3.14 group-snapshot-show
|
||||
GROUP_SNAPSHOT
|
||||
|
||||
.. note::
|
||||
|
||||
The parameter ``GROUP_SNAPSHOT`` is the name or UUID of a group snapshot.
|
||||
|
||||
**List group snapshots**:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
cinder --os-volume-api-version 3.14 group-snapshot-list
|
||||
[--all-tenants [<0|1>]]
|
||||
[--status STATUS]
|
||||
[--group-id GROUP_ID]
|
||||
|
||||
.. note::
|
||||
|
||||
``--all-tenants`` specifies whether to list group snapshots for
|
||||
all tenants. Only admin can use this option. ``--status STATUS``
|
||||
filters results by a status. ``--group-id GROUP_ID`` filters
|
||||
results by a group id.
|
||||
|
||||
**Delete group snapshot**:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
cinder --os-volume-api-version 3.14 group-snapshot-delete
|
||||
GROUP_SNAPSHOT [GROUP_SNAPSHOT ...]
|
||||
|
||||
.. note::
|
||||
|
||||
The parameter ``GROUP_SNAPSHOT`` specifies the name or UUID of one or more
|
||||
group snapshots to be deleted.
|
||||
|
||||
**Create a group from a group snapshot or a source group**:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ cinder --os-volume-api-version 3.14 group-create-from-src
|
||||
[--group-snapshot GROUP_SNAPSHOT]
|
||||
[--source-group SOURCE_GROUP]
|
||||
[--name NAME]
|
||||
[--description DESCRIPTION]
|
||||
|
||||
.. note::
|
||||
|
||||
The parameter ``GROUP_SNAPSHOT`` is a name or UUID of a group snapshot.
|
||||
The parameter ``SOURCE_GROUP`` is a name or UUID of a source group.
|
||||
Either ``GROUP_SNAPSHOT`` or ``SOURCE_GROUP`` must be specified, but not
|
||||
both.
|
|
||||
.. _image_volume_cache:
|
||||
|
||||
|
||||
==================
|
||||
Image-Volume cache
|
||||
==================
|
||||
|
||||
OpenStack Block Storage has an optional Image cache which can dramatically
|
||||
improve the performance of creating a volume from an image. The improvement
|
||||
depends on many factors, primarily how quickly the configured back end can
|
||||
clone a volume.
|
||||
|
||||
When a volume is first created from an image, a new cached image-volume
|
||||
will be created that is owned by the Block Storage Internal Tenant. Subsequent
|
||||
requests to create volumes from that image will clone the cached version
|
||||
instead of downloading the image contents and copying data to the volume.
|
||||
|
||||
The cache itself is configurable per back end and will contain the most
|
||||
recently used images.
|
||||
|
||||
Configure the Internal Tenant
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
The Image-Volume cache requires that the Internal Tenant be configured for
the Block Storage services. This project will own the cached image-volumes,
so they can be managed like any normal user's volumes, including with tools
such as volume quotas. This protects normal users from having to see the
cached image-volumes, but does not make them globally hidden.
|
||||
|
||||
To enable the Block Storage services to have access to an Internal Tenant, set
|
||||
the following options in the ``cinder.conf`` file:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
cinder_internal_tenant_project_id = PROJECT_ID
|
||||
cinder_internal_tenant_user_id = USER_ID
|
||||
|
||||
An example ``cinder.conf`` configuration file:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
cinder_internal_tenant_project_id = b7455b8974bb4064ad247c8f375eae6c
|
||||
cinder_internal_tenant_user_id = f46924c112a14c80ab0a24a613d95eef
|
||||
|
||||
.. note::
|
||||
|
||||
The actual user and project that are configured for the Internal Tenant do
|
||||
not require any special privileges. They can be the Block Storage service
|
||||
project or can be any normal project and user.
|
||||
|
||||
Configure the Image-Volume cache
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
To enable the Image-Volume cache, set the following configuration option in
|
||||
the ``cinder.conf`` file:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
image_volume_cache_enabled = True
|
||||
|
||||
.. note::
|
||||
|
||||
If you use Ceph as a back end, set the following configuration option in
|
||||
the ``cinder.conf`` file:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[ceph]
|
||||
image_volume_cache_enabled = True
|
||||
|
||||
This can be scoped per back end definition or in the default options.
|
||||
|
||||
There are optional configuration settings that can limit the size of the cache.
|
||||
These can also be scoped per back end or in the default options in
|
||||
the ``cinder.conf`` file:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
image_volume_cache_max_size_gb = SIZE_GB
|
||||
image_volume_cache_max_count = MAX_COUNT
|
||||
|
||||
By default they will be set to 0, which means unlimited.
|
||||
|
||||
For example, a configuration which would limit the max size to 200 GB and 50
|
||||
cache entries will be configured as:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
image_volume_cache_max_size_gb = 200
|
||||
image_volume_cache_max_count = 50
|
||||
|
||||
Notifications
|
||||
~~~~~~~~~~~~~
|
||||
|
||||
Cache actions will trigger Telemetry messages. There are several that will be
|
||||
sent.
|
||||
|
||||
- ``image_volume_cache.miss`` - A volume is being created from an image which
|
||||
was not found in the cache. Typically this will mean a new cache entry would
|
||||
be created for it.
|
||||
|
||||
- ``image_volume_cache.hit`` - A volume is being created from an image which
|
||||
was found in the cache and the fast path can be taken.
|
||||
|
||||
- ``image_volume_cache.evict`` - A cached image-volume has been deleted from
|
||||
the cache.
|
||||
|
||||
|
||||
Managing cached Image-Volumes
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
In normal usage there should be no need for manual intervention with the cache.
|
||||
The entries and their backing Image-Volumes are managed automatically.
|
||||
|
||||
If needed, you can delete these volumes manually to clear the cache.
|
||||
By using the standard volume deletion APIs, the Block Storage service will
|
||||
clean up correctly.
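
For example, cached Image-Volumes are typically named after the image they
were created from, so an administrator can locate and delete them with the
usual commands (``IMAGE_UUID`` below is a placeholder):

.. code-block:: console

   $ openstack volume list --all-projects
   $ openstack volume delete image-IMAGE_UUID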
|
|
||||
=====================
|
||||
Use LIO iSCSI support
|
||||
=====================
|
||||
|
||||
The default mode for the ``iscsi_helper`` tool is ``tgtadm``.
|
||||
To use LIO iSCSI, install the ``python-rtslib`` package, and set
|
||||
``iscsi_helper=lioadm`` in the ``cinder.conf`` file.
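
For example, in the ``cinder.conf`` file:

.. code-block:: ini

   [DEFAULT]
   iscsi_helper = lioadm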
|
||||
|
||||
Once configured, you can use the :command:`cinder-rtstool` command to
|
||||
manage the volumes. This command enables you to create, delete, and
|
||||
verify volumes and determine targets and add iSCSI initiators to the
|
||||
system.
|
|
||||
==============
|
||||
Manage volumes
|
||||
==============
|
||||
|
||||
The default OpenStack Block Storage service implementation is an
|
||||
iSCSI solution that uses :term:`Logical Volume Manager (LVM)` for Linux.
|
||||
|
||||
.. note::
|
||||
|
||||
   The OpenStack Block Storage service is not a shared storage
   solution like Network Attached Storage (NAS) with NFS volumes,
   where you can attach a volume to multiple servers. With the
   OpenStack Block Storage service, you can attach a volume to only
   one instance at a time.
|
||||
|
||||
The OpenStack Block Storage service also provides drivers that
|
||||
enable you to use several vendors' back-end storage devices in
|
||||
addition to the base LVM implementation. These storage devices can
|
||||
also be used instead of the base LVM installation.
|
||||
|
||||
This high-level procedure shows you how to create and attach a volume
|
||||
to a server instance.
|
||||
|
||||
**To create and attach a volume to an instance**
|
||||
|
||||
#. Configure the OpenStack Compute and the OpenStack Block Storage
|
||||
services through the ``/etc/cinder/cinder.conf`` file.
|
||||
#. Use the :command:`openstack volume create` command to create a volume.
   This command creates a logical volume (LV) in the volume group (VG)
   ``cinder-volumes``.
|
||||
#. Use the :command:`openstack server add volume` command to attach the
|
||||
volume to an instance. This command creates a unique :term:`IQN <iSCSI
|
||||
Qualified Name (IQN)>` that is exposed to the compute node.
|
||||
|
||||
* The compute node, which runs the instance, now has an active
|
||||
iSCSI session and new local storage (usually a ``/dev/sdX``
|
||||
disk).
|
||||
* Libvirt uses that local storage as storage for the instance. The
|
||||
instance gets a new disk (usually a ``/dev/vdX`` disk).
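
A minimal sequence of the commands above might look like this; the volume
name, size, and instance name are illustrative:

.. code-block:: console

   $ openstack volume create --size 1 my-volume
   $ openstack server add volume my-instance my-volume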
|
||||
|
||||
For this particular walkthrough, one cloud controller runs
|
||||
``nova-api``, ``nova-scheduler``, ``nova-objectstore``,
|
||||
``nova-network`` and ``cinder-*`` services. Two additional compute
|
||||
nodes run ``nova-compute``. The walkthrough uses a custom
|
||||
partitioning scheme that carves out 60 GB of space and labels it as
|
||||
LVM. The network uses the ``FlatManager`` and ``NetworkManager``
|
||||
settings for OpenStack Compute.
|
||||
|
||||
The network mode does not interfere with OpenStack Block Storage
|
||||
operations, but you must set up networking for Block Storage to work.
|
||||
For details, see :ref:`networking`.
|
||||
|
||||
To set up Compute to use volumes, ensure that Block Storage is
|
||||
installed along with ``lvm2``. This guide describes how to
|
||||
troubleshoot your installation and back up your Compute volumes.
|
||||
|
||||
.. toctree::
|
||||
|
||||
blockstorage-boot-from-volume.rst
|
||||
blockstorage-nfs-backend.rst
|
||||
blockstorage-glusterfs-backend.rst
|
||||
blockstorage-multi-backend.rst
|
||||
blockstorage-backup-disks.rst
|
||||
blockstorage-volume-migration.rst
|
||||
blockstorage-glusterfs-removal.rst
|
||||
blockstorage-volume-backups.rst
|
||||
blockstorage-volume-backups-export-import.rst
|
||||
blockstorage-lio-iscsi-support.rst
|
||||
blockstorage-volume-number-weigher.rst
|
||||
blockstorage-consistency-groups.rst
|
||||
blockstorage-driver-filter-weighing.rst
|
||||
blockstorage-ratelimit-volume-copy-bandwidth.rst
|
||||
blockstorage-over-subscription.rst
|
||||
blockstorage-image-volume-cache.rst
|
||||
blockstorage-volume-backed-image.rst
|
||||
blockstorage-get-capabilities.rst
|
||||
blockstorage-groups.rst
|
||||
|
||||
.. note::
|
||||
|
||||
To enable the use of encrypted volumes, see the setup instructions in
|
||||
`Create an encrypted volume type
|
||||
<https://docs.openstack.org/admin-guide/dashboard-manage-volumes.html#create-an-encrypted-volume-type>`_.
|
|
||||
.. _multi_backend:
|
||||
|
||||
====================================
|
||||
Configure multiple-storage back ends
|
||||
====================================
|
||||
|
||||
When you configure multiple-storage back ends, you can create several
|
||||
back-end storage solutions that serve the same OpenStack Compute
|
||||
configuration and one ``cinder-volume`` is launched for each back-end
|
||||
storage or back-end storage pool.
|
||||
|
||||
In a multiple-storage back-end configuration, each back end has a name
|
||||
(``volume_backend_name``). Several back ends can have the same name.
|
||||
In that case, the scheduler properly decides which back end the volume
|
||||
has to be created in.
|
||||
|
||||
The name of the back end is declared as an extra-specification of a
|
||||
volume type (such as, ``volume_backend_name=LVM``). When a volume
|
||||
is created, the scheduler chooses an appropriate back end to handle the
|
||||
request, according to the volume type specified by the user.
|
||||
|
||||
Enable multiple-storage back ends
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
To enable multiple-storage back ends, you must set the
``enabled_backends`` flag in the ``cinder.conf`` file.
|
||||
This flag defines the names (separated by a comma) of the configuration
|
||||
groups for the different back ends: one name is associated to one
|
||||
configuration group for a back end (such as, ``[lvmdriver-1]``).
|
||||
|
||||
.. note::
|
||||
|
||||
The configuration group name is not related to the ``volume_backend_name``.
|
||||
|
||||
.. note::
|
||||
|
||||
After setting the ``enabled_backends`` flag on an existing cinder
|
||||
service, and restarting the Block Storage services, the original ``host``
|
||||
service is replaced with a new host service. The new service appears
|
||||
with a name like ``host@backend``. Use:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ cinder-manage volume update_host --currenthost CURRENTHOST --newhost CURRENTHOST@BACKEND
|
||||
|
||||
to convert current block devices to the new host name.
|
||||
|
||||
The options for a configuration group must be defined in the group
|
||||
(or default options are used). All the standard Block Storage
|
||||
configuration options (``volume_group``, ``volume_driver``, and so on)
|
||||
might be used in a configuration group. Configuration values in
|
||||
the ``[DEFAULT]`` configuration group are not used.
|
||||
|
||||
These examples show three back ends:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
enabled_backends=lvmdriver-1,lvmdriver-2,lvmdriver-3
|
||||
[lvmdriver-1]
|
||||
volume_group=cinder-volumes-1
|
||||
volume_driver=cinder.volume.drivers.lvm.LVMVolumeDriver
|
||||
volume_backend_name=LVM
|
||||
[lvmdriver-2]
|
||||
volume_group=cinder-volumes-2
|
||||
volume_driver=cinder.volume.drivers.lvm.LVMVolumeDriver
|
||||
volume_backend_name=LVM
|
||||
[lvmdriver-3]
|
||||
volume_group=cinder-volumes-3
|
||||
volume_driver=cinder.volume.drivers.lvm.LVMVolumeDriver
|
||||
volume_backend_name=LVM_b
|
||||
|
||||
In this configuration, ``lvmdriver-1`` and ``lvmdriver-2`` have the same
|
||||
``volume_backend_name``. If a volume creation requests the ``LVM``
|
||||
back end name, the scheduler uses the capacity filter scheduler to choose
|
||||
the most suitable driver, which is either ``lvmdriver-1`` or ``lvmdriver-2``.
|
||||
The capacity filter scheduler is enabled by default. The next section
|
||||
provides more information. In addition, this example presents a
|
||||
``lvmdriver-3`` back end.
|
||||
|
||||
.. note::
|
||||
|
||||
   For Fibre Channel drivers that support multipath, the configuration group
   requires the ``use_multipath_for_image_xfer=true`` option. In
   the example below, you can see details for HPE 3PAR and EMC Fibre
   Channel drivers.
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[3par]
|
||||
use_multipath_for_image_xfer = true
|
||||
volume_driver = cinder.volume.drivers.hpe.hpe_3par_fc.HPE3PARFCDriver
|
||||
volume_backend_name = 3parfc
|
||||
|
||||
[emc]
|
||||
use_multipath_for_image_xfer = true
|
||||
volume_driver = cinder.volume.drivers.emc.emc_smis_fc.EMCSMISFCDriver
|
||||
volume_backend_name = emcfc
|
||||
|
||||
Configure Block Storage scheduler multi back end
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
You must enable the ``filter_scheduler`` option to use
multiple-storage back ends. The filter scheduler:
|
||||
|
||||
#. Filters the available back ends. By default, ``AvailabilityZoneFilter``,
|
||||
``CapacityFilter`` and ``CapabilitiesFilter`` are enabled.
|
||||
|
||||
#. Weights the previously filtered back ends. By default, the
   ``CapacityWeigher`` option is enabled. When this option is
|
||||
enabled, the filter scheduler assigns the highest weight to back
|
||||
ends with the most available capacity.
|
||||
|
||||
The scheduler uses filters and weights to pick the best back end to
|
||||
handle the request. The scheduler uses volume types to explicitly create
|
||||
volumes on specific back ends. For more information about filter and weighing,
|
||||
see :ref:`filter_weigh_scheduler`.
|
||||
|
||||
|
||||
Volume type
|
||||
~~~~~~~~~~~
|
||||
|
||||
Before it can be used, a volume type has to be declared to Block Storage.
This can be done with the following command:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack --os-username admin --os-tenant-name admin volume type create lvm
|
||||
|
||||
Then, an extra-specification has to be created to link the volume
|
||||
type to a back end name. Run this command:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack --os-username admin --os-tenant-name admin volume type set lvm \
|
||||
--property volume_backend_name=LVM_iSCSI
|
||||
|
||||
This example creates a ``lvm`` volume type with
|
||||
``volume_backend_name=LVM_iSCSI`` as extra-specifications.
|
||||
|
||||
Create another volume type:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack --os-username admin --os-tenant-name admin volume type create lvm_gold
|
||||
|
||||
$ openstack --os-username admin --os-tenant-name admin volume type set lvm_gold \
|
||||
--property volume_backend_name=LVM_iSCSI_b
|
||||
|
||||
This second volume type is named ``lvm_gold`` and has ``LVM_iSCSI_b`` as
|
||||
back end name.
|
||||
|
||||
.. note::
|
||||
|
||||
To list the extra-specifications, use this command:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack --os-username admin --os-tenant-name admin volume type list --long
|
||||
|
||||
.. note::
|
||||
|
||||
If a volume type points to a ``volume_backend_name`` that does not
|
||||
exist in the Block Storage configuration, the ``filter_scheduler``
|
||||
returns an error that it cannot find a valid host with the suitable
|
||||
back end.
|
||||
|
||||
Usage
|
||||
~~~~~
|
||||
|
||||
When you create a volume, you must specify the volume type.
|
||||
The extra-specifications of the volume type are used to determine which
|
||||
back end has to be used.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack volume create --size 1 --type lvm test_multi_backend
|
||||
|
||||
Considering the ``cinder.conf`` described previously, the scheduler
|
||||
creates this volume on ``lvmdriver-1`` or ``lvmdriver-2``.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack volume create --size 1 --type lvm_gold test_multi_backend
|
||||
|
||||
This second volume is created on ``lvmdriver-3``.
|
|
||||
=================================
|
||||
Configure an NFS storage back end
|
||||
=================================
|
||||
|
||||
This section explains how to configure OpenStack Block Storage to use
|
||||
NFS storage. You must be able to access the NFS shares from the server
|
||||
that hosts the ``cinder`` volume service.
|
||||
|
||||
.. note::
|
||||
|
||||
The ``cinder`` volume service is named ``openstack-cinder-volume``
|
||||
on the following distributions:
|
||||
|
||||
* CentOS
|
||||
|
||||
* Fedora
|
||||
|
||||
* openSUSE
|
||||
|
||||
* Red Hat Enterprise Linux
|
||||
|
||||
* SUSE Linux Enterprise
|
||||
|
||||
In Ubuntu and Debian distributions, the ``cinder`` volume service is
|
||||
named ``cinder-volume``.
|
||||
|
||||
**Configure Block Storage to use an NFS storage back end**
|
||||
|
||||
#. Log in as ``root`` to the system hosting the ``cinder`` volume
|
||||
service.
|
||||
|
||||
#. Create a text file named ``nfsshares`` in the ``/etc/cinder/`` directory.
|
||||
|
||||
#. Add an entry to ``/etc/cinder/nfsshares`` for each NFS share
|
||||
that the ``cinder`` volume service should use for back end storage.
|
||||
Each entry should be a separate line, and should use the following
|
||||
format:
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
HOST:SHARE
|
||||
|
||||
|
||||
Where:
|
||||
|
||||
* HOST is the IP address or host name of the NFS server.
|
||||
|
||||
* SHARE is the absolute path to an existing and accessible NFS share.
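
   For example, a file describing two shares might contain the following;
   the addresses and export paths are illustrative:

   .. code-block:: none

      192.0.2.20:/srv/nfs/cinder
      192.0.2.21:/export/cinder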
|
||||
|
||||
|
|
||||
|
||||
#. Set ``/etc/cinder/nfsshares`` to be owned by the ``root`` user and
|
||||
the ``cinder`` group:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
# chown root:cinder /etc/cinder/nfsshares
|
||||
|
||||
#. Set ``/etc/cinder/nfsshares`` to be readable by members of the
|
||||
cinder group:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
# chmod 0640 /etc/cinder/nfsshares
|
||||
|
||||
#. Configure the ``cinder`` volume service to use the
|
||||
``/etc/cinder/nfsshares`` file created earlier. To do so, open
|
||||
the ``/etc/cinder/cinder.conf`` configuration file and set
|
||||
the ``nfs_shares_config`` configuration key
|
||||
to ``/etc/cinder/nfsshares``.
|
||||
|
||||
On distributions that include ``openstack-config``, you can configure
|
||||
this by running the following command instead:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
# openstack-config --set /etc/cinder/cinder.conf \
|
||||
DEFAULT nfs_shares_config /etc/cinder/nfsshares
|
||||
|
||||
The following distributions include openstack-config:
|
||||
|
||||
* CentOS
|
||||
|
||||
* Fedora
|
||||
|
||||
* openSUSE
|
||||
|
||||
* Red Hat Enterprise Linux
|
||||
|
||||
* SUSE Linux Enterprise
|
||||
|
||||
|
||||
#. Optionally, provide any additional NFS mount options required in
|
||||
your environment in the ``nfs_mount_options`` configuration key
|
||||
of ``/etc/cinder/cinder.conf``. If your NFS shares do not
|
||||
require any additional mount options (or if you are unsure),
|
||||
skip this step.
|
||||
|
||||
On distributions that include ``openstack-config``, you can
|
||||
configure this by running the following command instead:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
# openstack-config --set /etc/cinder/cinder.conf \
|
||||
DEFAULT nfs_mount_options OPTIONS
|
||||
|
||||
Replace OPTIONS with the mount options to be used when accessing
|
||||
NFS shares. See the manual page for NFS for more information on
|
||||
available mount options (:command:`man nfs`).
|
||||
|
||||
#. Configure the ``cinder`` volume service to use the correct volume
|
||||
driver, namely ``cinder.volume.drivers.nfs.NfsDriver``. To do so,
|
||||
open the ``/etc/cinder/cinder.conf`` configuration file and
|
||||
set the volume_driver configuration key
|
||||
to ``cinder.volume.drivers.nfs.NfsDriver``.
|
||||
|
||||
On distributions that include ``openstack-config``, you can configure
|
||||
this by running the following command instead:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
# openstack-config --set /etc/cinder/cinder.conf \
|
||||
DEFAULT volume_driver cinder.volume.drivers.nfs.NfsDriver
|
||||
|
||||
#. You can now restart the service to apply the configuration.
|
||||
|
||||
.. note::
|
||||
|
||||
The ``nfs_sparsed_volumes`` configuration key determines whether
|
||||
volumes are created as sparse files and grown as needed or fully
|
||||
allocated up front. The default and recommended value is ``true``,
|
||||
which ensures volumes are initially created as sparse files.
|
||||
|
||||
Setting ``nfs_sparsed_volumes`` to ``false`` will result in
|
||||
volumes being fully allocated at the time of creation. This leads
|
||||
to increased delays in volume creation.
|
||||
|
||||
However, should you choose to set ``nfs_sparsed_volumes`` to
|
||||
``false``, you can do so directly in ``/etc/cinder/cinder.conf``.
|
||||
|
||||
On distributions that include ``openstack-config``, you can
|
||||
configure this by running the following command instead:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
# openstack-config --set /etc/cinder/cinder.conf \
|
||||
DEFAULT nfs_sparsed_volumes false
|
||||
|
||||
.. warning::
|
||||
|
||||
If a client host has SELinux enabled, the ``virt_use_nfs``
|
||||
boolean should also be enabled if the host requires access to
|
||||
NFS volumes on an instance. To enable this boolean, run the
|
||||
following command as the ``root`` user:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
# setsebool -P virt_use_nfs on
|
||||
|
||||
This command also makes the boolean persistent across reboots.
|
||||
Run this command on all client hosts that require access to NFS
|
||||
volumes on an instance. This includes all compute nodes.
|
|
||||
.. _over_subscription:
|
||||
|
||||
=====================================
|
||||
Oversubscription in thin provisioning
|
||||
=====================================
|
||||
|
||||
OpenStack Block Storage enables you to choose a volume back end based on
|
||||
virtual capacities for thin provisioning using the oversubscription ratio.
|
||||
|
||||
A reference implementation is provided for the default LVM driver. The
|
||||
illustration below uses the LVM driver as an example.
|
||||
|
||||
Configure oversubscription settings
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
To support oversubscription in thin provisioning, a flag
``max_over_subscription_ratio`` is introduced into ``cinder.conf``.
This is a float representation of the oversubscription ratio when thin
provisioning is involved. The default ratio is 20.0, meaning that
provisioned capacity can be 20 times the total physical capacity. A ratio
of 10.5 means provisioned capacity can be 10.5 times the total physical
capacity. A ratio of 1.0 means provisioned capacity cannot exceed the
total physical capacity. A ratio lower than 1.0 is ignored and the default
value is used instead.
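
For example, to cap provisioned capacity at 10.5 times the physical capacity
of a hypothetical LVM back end:

.. code-block:: ini

   [lvmdriver-1]
   max_over_subscription_ratio = 10.5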
|
||||
|
||||
.. note::
|
||||
|
||||
``max_over_subscription_ratio`` can be configured for each back end when
|
||||
multiple-storage back ends are enabled. It is provided as a reference
|
||||
implementation and is used by the LVM driver. However, it is not a
|
||||
requirement for a driver to use this option from ``cinder.conf``.
|
||||
|
||||
``max_over_subscription_ratio`` is for configuring a back end. For a
|
||||
driver that supports multiple pools per back end, it can report this
|
||||
ratio for each pool. The LVM driver does not support multiple pools.
|
||||
|
||||
The existing ``reserved_percentage`` flag is used to prevent over provisioning.
|
||||
This flag represents the percentage of the back-end capacity that is reserved.
|
||||
|
||||
.. note::
|
||||
|
||||
   There is a change in how ``reserved_percentage`` is used. In the past, it
   was measured against the free capacity. Now it is measured against the
   total capacity.
|
||||
|
||||
Capabilities
|
||||
~~~~~~~~~~~~
|
||||
|
||||
Drivers can report the following capabilities for a back end or a pool:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
thin_provisioning_support = True(or False)
|
||||
thick_provisioning_support = True(or False)
|
||||
provisioned_capacity_gb = PROVISIONED_CAPACITY
|
||||
max_over_subscription_ratio = MAX_RATIO
|
||||
|
||||
Where ``PROVISIONED_CAPACITY`` is the apparent allocated space indicating
|
||||
how much capacity has been provisioned and ``MAX_RATIO`` is the maximum
|
||||
oversubscription ratio. For the LVM driver, it is
|
||||
``max_over_subscription_ratio`` in ``cinder.conf``.
|
||||
|
||||
Two capabilities are added here to allow a back end or pool to claim support
|
||||
for thin provisioning, or thick provisioning, or both.
|
||||
|
||||
The LVM driver reports ``thin_provisioning_support=True`` and
|
||||
``thick_provisioning_support=False`` if the ``lvm_type`` flag in
|
||||
``cinder.conf`` is ``thin``. Otherwise it reports
|
||||
``thin_provisioning_support=False`` and ``thick_provisioning_support=True``.
|
||||
|
||||
Volume type extra specs
|
||||
~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
If volume type is provided as part of the volume creation request, it can
|
||||
have the following extra specs defined:
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
'capabilities:thin_provisioning_support': '<is> True' or '<is> False'
|
||||
'capabilities:thick_provisioning_support': '<is> True' or '<is> False'
|
||||
|
||||
.. note::
|
||||
|
||||
``capabilities`` scope key before ``thin_provisioning_support`` and
|
||||
``thick_provisioning_support`` is not required. So the following works too:
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
'thin_provisioning_support': '<is> True' or '<is> False'
|
||||
'thick_provisioning_support': '<is> True' or '<is> False'
|
||||
|
||||
The above extra specs are used by the scheduler to find a back end that
|
||||
supports thin provisioning, thick provisioning, or both to match the needs
|
||||
of a specific volume type.
|
||||
|
||||
Volume replication extra specs
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
OpenStack Block Storage has the ability to create volume replicas.
|
||||
Administrators can define a storage policy that includes
|
||||
replication by adjusting the cinder volume driver. Volume replication
|
||||
for OpenStack Block Storage helps safeguard OpenStack environments from
|
||||
data loss during disaster recovery.
|
||||
|
||||
To enable replication when creating volume types, configure the cinder
|
||||
volume with ``capabilities:replication="<is> True"``.
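
For example, one way to attach this capability to a volume type (the type
name is illustrative):

.. code-block:: console

   $ openstack volume type create replicated
   $ openstack volume type set replicated \
     --property capabilities:replication='<is> True'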
|
||||
|
||||
Each volume created with the replication capability set to ``True``
|
||||
generates a copy of the volume on a storage back end.
|
||||
|
||||
One use case for replication involves an OpenStack cloud environment
|
||||
installed across two data centers located nearby each other. The
|
||||
distance between the two data centers in this use case is the length of
|
||||
a city.
|
||||
|
||||
At each data center, a cinder host supports the Block Storage service.
|
||||
Both data centers include storage back ends.
|
||||
|
||||
Depending on the storage requirements, there can be one or two cinder
|
||||
hosts. The administrator accesses the
|
||||
``/etc/cinder/cinder.conf`` configuration file and sets
|
||||
``capabilities:replication="<is> True"``.
|
||||
|
||||
If one data center experiences a service failure, administrators
|
||||
can redeploy the VM. The VM will run using a replicated, backed up
|
||||
volume on a host in the second data center.
|
||||
|
||||
Capacity filter
|
||||
~~~~~~~~~~~~~~~
|
||||
|
||||
In the capacity filter, ``max_over_subscription_ratio`` is used when
|
||||
choosing a back end if ``thin_provisioning_support`` is True and
|
||||
``max_over_subscription_ratio`` is greater than 1.0.
|
||||
|
||||
Capacity weigher
|
||||
~~~~~~~~~~~~~~~~
|
||||
|
||||
In the capacity weigher, virtual free capacity is used for ranking if
|
||||
``thin_provisioning_support`` is True. Otherwise, real free capacity
|
||||
will be used as before.
|
|
||||
.. _ratelimit_volume_copy_bandwidth:
|
||||
|
||||
================================
|
||||
Rate-limit volume copy bandwidth
|
||||
================================
|
||||
|
||||
When you create a new volume from an image or an existing volume, or
|
||||
when you upload a volume image to the Image service, large data copy
|
||||
may stress disk and network bandwidth. To mitigate slow down of data
|
||||
access from the instances, OpenStack Block Storage supports rate-limiting
|
||||
of volume data copy bandwidth.
|
||||
|
||||
Configure volume copy bandwidth limit
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
To configure the volume copy bandwidth limit, set the
``volume_copy_bps_limit`` option in the configuration groups for each
back end in the ``cinder.conf`` file. This option takes an integer: the
maximum bandwidth allowed for volume data copy, in bytes per second. If
this option is set to ``0``, the rate limit is disabled.
|
||||
|
||||
While multiple volume data copy operations are running in the same back
end, the specified bandwidth is divided among the copies.
|
||||
|
||||
Example ``cinder.conf`` configuration file to limit volume copy bandwidth
|
||||
of ``lvmdriver-1`` up to 100 MiB/s:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[lvmdriver-1]
|
||||
volume_group=cinder-volumes-1
|
||||
volume_driver=cinder.volume.drivers.lvm.LVMVolumeDriver
|
||||
volume_backend_name=LVM
|
||||
volume_copy_bps_limit=104857600
|
||||
|
||||
.. note::
|
||||
|
||||
This feature requires libcgroup to set up blkio cgroup for disk I/O
|
||||
bandwidth limit. The libcgroup is provided by the cgroup-bin package
|
||||
in Debian and Ubuntu, or by the libcgroup-tools package in Fedora,
|
||||
Red Hat Enterprise Linux, CentOS, openSUSE, and SUSE Linux Enterprise.
|
||||
|
||||
.. note::
|
||||
|
||||
Some back ends which use remote file systems such as NFS are not
|
||||
supported by this feature.
|

==============================
Troubleshoot your installation
==============================

This section provides useful tips to help you troubleshoot your Block
Storage installation.

.. toctree::
   :maxdepth: 1

   ts-cinder-config.rst
   ts-multipath-warn.rst
   ts-eql-volume-size.rst
   ts-vol-attach-miss-sg-scan.rst
   ts-HTTP-bad-req-in-cinder-vol-log.rst
   ts-duplicate-3par-host.rst
   ts-failed-attach-vol-after-detach.rst
   ts-failed-attach-vol-no-sysfsutils.rst
   ts-failed-connect-vol-FC-SAN.rst
   ts-no-emulator-x86-64.rst
   ts-non-existent-host.rst
   ts-non-existent-vlun.rst
@ -1,90 +0,0 @@

.. _volume_backed_image:

===================
Volume-backed image
===================

OpenStack Block Storage can quickly create a volume from an image that refers
to a volume storing image data (Image-Volume). Compared to the other stores
such as file and swift, creating a volume from a Volume-backed image performs
better when the block storage driver supports efficient volume cloning.

If the image is set to public in the Image service, the volume data can be
shared among projects.

Configure the Volume-backed image
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The Volume-backed image feature requires location information from the cinder
store of the Image service. To enable the Image service to use the cinder
store, add ``cinder`` to the ``stores`` option in the ``glance_store`` section
of the ``glance-api.conf`` file:

.. code-block:: ini

   stores = file, http, swift, cinder

To expose location information, set the following option in the ``DEFAULT``
section of the ``glance-api.conf`` file:

.. code-block:: ini

   show_multiple_locations = True

To enable the Block Storage services to create a new volume by cloning an
Image-Volume, set the following options in the ``DEFAULT`` section of the
``cinder.conf`` file. For example:

.. code-block:: ini

   glance_api_version = 2
   allowed_direct_url_schemes = cinder

To enable the :command:`openstack image create --volume <volume>` command to
create an image that refers to an ``Image-Volume``, set the following option
in each back-end section of the ``cinder.conf`` file:

.. code-block:: ini

   image_upload_use_cinder_backend = True

By default, the :command:`openstack image create --volume <volume>` command
creates the Image-Volume in the current project. To store the Image-Volume in
the internal project, set the following option in each back-end section of
the ``cinder.conf`` file:

.. code-block:: ini

   image_upload_use_internal_tenant = True

To make the Image-Volume in the internal project accessible from the Image
service, set the following options in the ``glance_store`` section of
the ``glance-api.conf`` file:

- ``cinder_store_auth_address``
- ``cinder_store_user_name``
- ``cinder_store_password``
- ``cinder_store_project_name``

Creating a Volume-backed image
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To register an existing volume as a new Volume-backed image, use the following
commands:

.. code-block:: console

   $ openstack image create --disk-format raw --container-format bare IMAGE_NAME

   $ glance location-add <image-uuid> --url cinder://<volume-uuid>

If the ``image_upload_use_cinder_backend`` option is enabled, the following
command creates a new Image-Volume by cloning the specified volume and then
registers its location to a new image. The disk format and the container
format must be raw and bare (default). Otherwise, the image is uploaded to
the default store of the Image service.

.. code-block:: console

   $ openstack image create --volume SOURCE_VOLUME IMAGE_NAME
@ -1,58 +0,0 @@

.. _volume_backups_export_import:

=================================
Export and import backup metadata
=================================

A volume backup can only be restored on the same Block Storage service. This
is because restoring a volume from a backup requires metadata available on
the database used by the Block Storage service.

.. note::

   For information about how to back up and restore a volume, see
   the section called :ref:`volume_backups`.

You can, however, export the metadata of a volume backup. To do so, run
this command as an OpenStack ``admin`` user (presumably, after creating
a volume backup):

.. code-block:: console

   $ cinder backup-export BACKUP_ID

Where ``BACKUP_ID`` is the volume backup's ID. This command should return the
backup's corresponding database information as encoded string metadata.

Exporting and storing this encoded string metadata allows you to completely
restore the backup, even in the event of a catastrophic database failure.
This precludes the need to back up the entire Block Storage database,
particularly if you only need to keep complete backups of a small subset
of volumes.

If you have placed encryption on your volumes, the encryption will still be
in place when you restore the volume if a UUID encryption key was specified
when creating the volumes. Using backup metadata support, UUID keys set up
for a volume (or volumes) will remain valid when you restore a backed-up
volume. The restored volume will remain encrypted, and will be accessible
with your credentials.

In addition, having a volume backup and its backup metadata also provides
volume portability. Specifically, backing up a volume and exporting its
metadata will allow you to restore the volume on a completely different Block
Storage database, or even on a different cloud service. To do so, first
import the backup metadata to the Block Storage database and then restore
the backup.

To import backup metadata, run the following command as an OpenStack
``admin``:

.. code-block:: console

   $ cinder backup-import METADATA

Where ``METADATA`` is the backup metadata exported earlier.

Once you have imported the backup metadata into a Block Storage database,
restore the volume (see the section called :ref:`volume_backups`).
@ -1,175 +0,0 @@

.. _volume_backups:

=========================================
Back up and restore volumes and snapshots
=========================================

The ``openstack`` command-line interface provides the tools for creating a
volume backup. You can restore a volume from a backup as long as the
backup's associated database information (or backup metadata) is intact
in the Block Storage database.

Run this command to create a backup of a volume:

.. code-block:: console

   $ openstack volume backup create [--incremental] [--force] VOLUME

Where ``VOLUME`` is the name or ID of the volume, ``incremental`` is
a flag that indicates whether an incremental backup should be performed,
and ``force`` is a flag that allows or disallows backup of a volume
when the volume is attached to an instance.

Without the ``incremental`` flag, a full backup is created by default.
With the ``incremental`` flag, an incremental backup is created.

Without the ``force`` flag, the volume is backed up only if its
status is ``available``. With the ``force`` flag, the volume is
backed up whether its status is ``available`` or ``in-use``. A volume
is ``in-use`` when it is attached to an instance. The backup of an
``in-use`` volume means your data is crash consistent. The ``force``
flag is False by default.

.. note::

   The ``incremental`` and ``force`` flags are only available for Block
   Storage API v2. You have to specify ``--os-volume-api-version 2`` in
   the ``cinder`` command-line interface to use these parameters.

.. note::

   The ``force`` flag is new in OpenStack Liberty.

The incremental backup is based on a parent backup, which is the existing
backup with the latest timestamp. The parent backup can be a full backup
or an incremental backup depending on the timestamp.

.. note::

   The first backup of a volume has to be a full backup. Attempting to do
   an incremental backup without any existing backups will fail.
   There is an ``is_incremental`` flag that indicates whether a backup is
   incremental when showing details on the backup.
   Another flag, ``has_dependent_backups``, returned when showing backup
   details, indicates whether the backup has dependent backups.
   If it is ``true``, attempting to delete this backup will fail.

The ``backup_swift_block_size`` configuration option in ``cinder.conf``
applies to the default Swift backup driver. This is the size in bytes at
which changes are tracked for incremental backups. The existing
``backup_swift_object_size`` option, the size in bytes of Swift backup
objects, has to be a multiple of ``backup_swift_block_size``. The default
is 32768 for ``backup_swift_block_size``, and the default is 52428800 for
``backup_swift_object_size``.

The configuration option ``backup_swift_enable_progress_timer`` in
``cinder.conf`` is used when backing up the volume to the Object Storage
back end. This option enables or disables the timer. It is enabled by
default to send periodic progress notifications to the Telemetry service.
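
A minimal sketch of these options in the ``[DEFAULT]`` section of
``cinder.conf``, using the default values quoted above (adjust them for
your deployment):

.. code-block:: ini

   [DEFAULT]
   backup_swift_block_size = 32768
   backup_swift_object_size = 52428800
   backup_swift_enable_progress_timer = True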

This command also returns a backup ID. Use this backup ID when restoring
the volume:

.. code-block:: console

   $ openstack volume backup restore BACKUP_ID VOLUME_ID

When restoring from a full backup, it is a full restore.

When restoring from an incremental backup, a list of backups is built based
on the IDs of the parent backups. A full restore is performed based on the
full backup first, then each incremental backup is restored on top of it,
in order.

You can view a backup list with the :command:`openstack volume backup list`
command. Optional arguments let you filter the list: use ``--name``,
``--status``, and ``--volume`` to filter backups by the specified name,
status, or volume ID. Use ``--all-projects`` for details of the
projects associated with the listed backups.

Because volume backups are dependent on the Block Storage database, you must
also back up your Block Storage database regularly to ensure data recovery.

.. note::

   Alternatively, you can export and save the metadata of selected volume
   backups. Doing so precludes the need to back up the entire Block Storage
   database. This is useful if you need only a small subset of volumes to
   survive a catastrophic database failure.

   If you specify a UUID encryption key when setting up the volume
   specifications, the backup metadata ensures that the key will remain
   valid when you back up and restore the volume.

   For more information about how to export and import volume backup
   metadata, see the section called :ref:`volume_backups_export_import`.

By default, the swift object store is used for the backup repository.

If instead you want to use an NFS export as the backup repository, add the
following configuration options to the ``[DEFAULT]`` section of the
``cinder.conf`` file and restart the Block Storage services:

.. code-block:: ini

   backup_driver = cinder.backup.drivers.nfs
   backup_share = HOST:EXPORT_PATH

For the ``backup_share`` option, replace ``HOST`` with the DNS resolvable
host name or the IP address of the storage server for the NFS share, and
``EXPORT_PATH`` with the path to that share. If your environment requires
that non-default mount options be specified for the share, set these as
follows:

.. code-block:: ini

   backup_mount_options = MOUNT_OPTIONS

``MOUNT_OPTIONS`` is a comma-separated string of NFS mount options as
detailed in the NFS man page.

There are several other options whose default values may be overridden as
appropriate for your environment:

.. code-block:: ini

   backup_compression_algorithm = zlib
   backup_sha_block_size_bytes = 32768
   backup_file_size = 1999994880

The option ``backup_compression_algorithm`` can be set to ``bz2`` or
``None``. The latter can be a useful setting when the server providing the
share for the backup repository itself performs deduplication or compression
on the backup data.

The option ``backup_file_size`` must be a multiple of
``backup_sha_block_size_bytes``. It is effectively the maximum file size to
be used, given your environment, to hold backup data. Volumes larger than
this will be stored in multiple files in the backup repository. The
``backup_sha_block_size_bytes`` option determines the size of blocks from
the cinder volume being backed up on which digital signatures are calculated
in order to enable incremental backup capability.

You also have the option of resetting the state of a backup. When creating
or restoring a backup, it may sometimes get stuck in the creating or
restoring state due to problems like the database or rabbitmq being down.
In situations like these, resetting the state of the backup can return it
to a functional status.

Run this command to reset the state of a backup:

.. code-block:: console

   $ cinder backup-reset-state [--state STATE] BACKUP_ID-1 BACKUP_ID-2 ...

Run this command to create a backup of a snapshot:

.. code-block:: console

   $ openstack volume backup create [--incremental] [--force] \
     [--snapshot SNAPSHOT_ID] VOLUME

Where ``VOLUME`` is the name or ID of the volume, and ``SNAPSHOT_ID`` is the
ID of the volume's snapshot.
@ -1,208 +0,0 @@

.. _volume_migration.rst:

===============
Migrate volumes
===============

OpenStack has the ability to migrate volumes between back ends that support
its volume type. Migrating a volume transparently moves its data from the
current back end for the volume to a new one. This is an administrator
function, and can be used for tasks including storage evacuation (for
maintenance or decommissioning), or manual optimizations (for example,
performance, reliability, or cost).

These workflows are possible for a migration:

#. If the storage can migrate the volume on its own, it is given the
   opportunity to do so. This allows the Block Storage driver to enable
   optimizations that the storage might be able to perform. If the back end
   is not able to perform the migration, the Block Storage service uses one
   of two generic flows, as follows.

#. If the volume is not attached, the Block Storage service creates a volume
   and copies the data from the original to the new volume.

   .. note::

      While most back ends support this function, not all do. See the
      `driver documentation <https://docs.openstack.org/ocata/config-reference/block-storage/volume-drivers.html>`__
      in the OpenStack Configuration Reference for more details.

#. If the volume is attached to a VM instance, the Block Storage service
   creates a volume, and calls Compute to copy the data from the original to
   the new volume. Currently this is supported only by the Compute libvirt
   driver.

As an example, this scenario shows two LVM back ends and migrates an attached
volume from one to the other. This scenario uses the third migration flow.

First, list the available back ends:

.. code-block:: console

   # cinder get-pools
   +----------+----------------------------------------------------+
   | Property | Value                                              |
   +----------+----------------------------------------------------+
   | name     | server1@lvmstorage-1#lvmstorage-1                  |
   +----------+----------------------------------------------------+
   +----------+----------------------------------------------------+
   | Property | Value                                              |
   +----------+----------------------------------------------------+
   | name     | server2@lvmstorage-2#lvmstorage-2                  |
   +----------+----------------------------------------------------+

.. note::

   Only the Block Storage V2 API supports :command:`cinder get-pools`.

You can also list the available back ends as follows:

.. code-block:: console

   # cinder-manage host list
   server1@lvmstorage-1    zone1
   server2@lvmstorage-2    zone1

Note that you must append the pool name to the host. For example,
``server1@lvmstorage-1#zone1``.

Next, as the admin user, you can see the current status of the volume
(replace the example ID with your own):

.. code-block:: console

   $ openstack volume show 6088f80a-f116-4331-ad48-9afb0dfb196c

   +--------------------------------+--------------------------------------+
   | Field                          | Value                                |
   +--------------------------------+--------------------------------------+
   | attachments                    | []                                   |
   | availability_zone              | zone1                                |
   | bootable                       | false                                |
   | consistencygroup_id            | None                                 |
   | created_at                     | 2013-09-01T14:53:22.000000           |
   | description                    | test                                 |
   | encrypted                      | False                                |
   | id                             | 6088f80a-f116-4331-ad48-9afb0dfb196c |
   | migration_status               | None                                 |
   | multiattach                    | False                                |
   | name                           | test                                 |
   | os-vol-host-attr:host          | server1@lvmstorage-1#lvmstorage-1    |
   | os-vol-mig-status-attr:migstat | None                                 |
   | os-vol-mig-status-attr:name_id | None                                 |
   | os-vol-tenant-attr:tenant_id   | d88310717a8e4ebcae84ed075f82c51e     |
   | properties                     | readonly='False'                     |
   | replication_status             | disabled                             |
   | size                           | 1                                    |
   | snapshot_id                    | None                                 |
   | source_volid                   | None                                 |
   | status                         | in-use                               |
   | type                           | None                                 |
   | updated_at                     | 2016-07-31T07:22:19.000000           |
   | user_id                        | d8e5e5727f3a4ce1886ac8ecec058e83     |
   +--------------------------------+--------------------------------------+

Note these attributes:

* ``os-vol-host-attr:host`` - the volume's current back end.
* ``os-vol-mig-status-attr:migstat`` - the status of this volume's migration
  (None means that a migration is not currently in progress).
* ``os-vol-mig-status-attr:name_id`` - the volume ID that this volume's name
  on the back end is based on. Before a volume is ever migrated, its name on
  the back end storage may be based on the volume's ID (see the
  ``volume_name_template`` configuration parameter). For example, if
  ``volume_name_template`` is kept as the default value (``volume-%s``), your
  first LVM back end has a logical volume named
  ``volume-6088f80a-f116-4331-ad48-9afb0dfb196c``. During the course of a
  migration, if you create a volume and copy over the data, the volume gets
  the new name but keeps its original ID. This is exposed by the ``name_id``
  attribute.

.. note::

   If you plan to decommission a block storage node, you must stop the
   ``cinder`` volume service on the node after performing the migration.

   On nodes that run CentOS, Fedora, openSUSE, Red Hat Enterprise Linux,
   or SUSE Linux Enterprise, run:

   .. code-block:: console

      # service openstack-cinder-volume stop
      # chkconfig openstack-cinder-volume off

   On nodes that run Ubuntu or Debian, run:

   .. code-block:: console

      # service cinder-volume stop
      # chkconfig cinder-volume off

   Stopping the cinder volume service will prevent volumes from being
   allocated to the node.

Migrate this volume to the second LVM back end:

.. code-block:: console

   $ cinder migrate 6088f80a-f116-4331-ad48-9afb0dfb196c \
     server2@lvmstorage-2#lvmstorage-2
   Request to migrate volume 6088f80a-f116-4331-ad48-9afb0dfb196c has been
   accepted.

You can use the :command:`openstack volume show` command to see the status
of the migration. While migrating, the ``migstat`` attribute shows states
such as ``migrating`` or ``completing``. On error, ``migstat`` is set to
None and the host attribute shows the original ``host``. On success, in
this example, the output looks like:

.. code-block:: console

   $ openstack volume show 6088f80a-f116-4331-ad48-9afb0dfb196c

   +--------------------------------+--------------------------------------+
   | Field                          | Value                                |
   +--------------------------------+--------------------------------------+
   | attachments                    | []                                   |
   | availability_zone              | zone1                                |
   | bootable                       | false                                |
   | consistencygroup_id            | None                                 |
   | created_at                     | 2013-09-01T14:53:22.000000           |
   | description                    | test                                 |
   | encrypted                      | False                                |
   | id                             | 6088f80a-f116-4331-ad48-9afb0dfb196c |
   | migration_status               | None                                 |
   | multiattach                    | False                                |
   | name                           | test                                 |
   | os-vol-host-attr:host          | server2@lvmstorage-2#lvmstorage-2    |
   | os-vol-mig-status-attr:migstat | completing                           |
   | os-vol-mig-status-attr:name_id | None                                 |
   | os-vol-tenant-attr:tenant_id   | d88310717a8e4ebcae84ed075f82c51e     |
   | properties                     | readonly='False'                     |
   | replication_status             | disabled                             |
   | size                           | 1                                    |
   | snapshot_id                    | None                                 |
   | source_volid                   | None                                 |
   | status                         | in-use                               |
   | type                           | None                                 |
   | updated_at                     | 2017-02-22T02:35:03.000000           |
   | user_id                        | d8e5e5727f3a4ce1886ac8ecec058e83     |
   +--------------------------------+--------------------------------------+

On completion, ``migstat`` is None, host is the new host, and ``name_id``
holds the ID of the volume created by the migration. If you look at the
second LVM back end, you find the logical volume
``volume-133d1f56-9ffc-4f57-8798-d5217d851862``.

.. note::

   The migration is not visible to non-admin users (for example, through
   the volume ``status``). However, some operations are not allowed while a
   migration is taking place, such as attaching/detaching a volume and
   deleting a volume. If a user performs such an action during a migration,
   an error is returned.

.. note::

   Migrating volumes that have snapshots is currently not allowed.
@ -1,88 +0,0 @@

.. _volume_number_weigher:

=======================================
Configure and use volume number weigher
=======================================

OpenStack Block Storage enables you to choose a volume back end according
to ``free_capacity`` and ``allocated_capacity``. The volume number weigher
feature lets the scheduler choose a volume back end based on its volume
number in the volume back end. This can provide another means to improve
the volume back ends' I/O balance and the volumes' I/O performance.

Enable volume number weigher
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To enable the volume number weigher, set the
``scheduler_default_weighers`` option to ``VolumeNumberWeigher`` in the
``cinder.conf`` file to select it as the weigher.

Configure multiple-storage back ends
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To configure ``VolumeNumberWeigher``, use ``LVMVolumeDriver``
as the volume driver.

This configuration defines two LVM volume groups: ``stack-volumes`` with
10 GB capacity and ``stack-volumes-1`` with 60 GB capacity.
This example configuration defines two back ends:

.. code-block:: ini

   scheduler_default_weighers=VolumeNumberWeigher
   enabled_backends=lvmdriver-1,lvmdriver-2

   [lvmdriver-1]
   volume_group=stack-volumes
   volume_driver=cinder.volume.drivers.lvm.LVMVolumeDriver
   volume_backend_name=LVM

   [lvmdriver-2]
   volume_group=stack-volumes-1
   volume_driver=cinder.volume.drivers.lvm.LVMVolumeDriver
   volume_backend_name=LVM

Volume type
~~~~~~~~~~~

Define a volume type in Block Storage:

.. code-block:: console

   $ openstack volume type create lvm

Create an extra specification that links the volume type to a back-end name:

.. code-block:: console

   $ openstack volume type set lvm --property volume_backend_name=LVM

This example creates an ``lvm`` volume type with
``volume_backend_name=LVM`` as an extra specification.

Usage
~~~~~

To create six 1-GB volumes, run the
:command:`openstack volume create --size 1 --type lvm volume1` command
six times:

.. code-block:: console

   $ openstack volume create --size 1 --type lvm volume1

This command creates three volumes in ``stack-volumes`` and
three volumes in ``stack-volumes-1``.

List the available volumes:

.. code-block:: console

   # lvs
   LV                                          VG              Attr      LSize Pool Origin Data% Move Log Copy% Convert
   volume-3814f055-5294-4796-b5e6-1b7816806e5d stack-volumes   -wi-a---- 1.00g
   volume-72cf5e79-99d2-4d23-b84e-1c35d3a293be stack-volumes   -wi-a---- 1.00g
   volume-96832554-0273-4e9d-902b-ad421dfb39d1 stack-volumes   -wi-a---- 1.00g
   volume-169386ef-3d3e-4a90-8439-58ceb46889d9 stack-volumes-1 -wi-a---- 1.00g
   volume-460b0bbb-d8a0-4bc3-9882-a129a5fe8652 stack-volumes-1 -wi-a---- 1.00g
   volume-9a08413b-0dbc-47c9-afb8-41032ab05a41 stack-volumes-1 -wi-a---- 1.00g
@ -1,32 +0,0 @@

.. _block_storage:

=============
Block Storage
=============

The OpenStack Block Storage service works through the interaction of
a series of daemon processes named ``cinder-*`` that reside
persistently on the host machine or machines. You can run all the
binaries from a single node, or spread them across multiple nodes. You can
also run them on the same node as other OpenStack services.

To administer the OpenStack Block Storage service, it is helpful to
understand a number of concepts. You must make certain choices when
you configure the Block Storage service in OpenStack. The bulk of the
options come down to two choices: a single-node or multi-node install.
You can read a longer discussion about `Storage Decisions`_ in the
`OpenStack Operations Guide`_.

OpenStack Block Storage enables you to add extra block-level storage
to your OpenStack Compute instances. This service is similar to the
Amazon EC2 Elastic Block Storage (EBS) offering.

.. toctree::
   :maxdepth: 1

   blockstorage-api-throughput.rst
   blockstorage-manage-volumes.rst
   blockstorage-troubleshoot.rst

.. _`Storage Decisions`: https://docs.openstack.org/ops-guide/arch-storage.html
.. _`OpenStack Operations Guide`: https://docs.openstack.org/ops-guide/
@ -1,16 +0,0 @@

================================
Manage the OpenStack environment
================================

This section includes tasks specific to the OpenStack environment.

.. toctree::
   :maxdepth: 2

   cli-nova-specify-host.rst
   cli-nova-numa-libvirt.rst
   cli-nova-evacuate.rst
   cli-os-migrate.rst
   cli-os-migrate-cfg-ssh.rst
   cli-admin-manage-ip-addresses.rst
   cli-admin-manage-stacks.rst
@ -1,89 +0,0 @@

===================
Manage IP addresses
===================

Each instance has a private, fixed IP address that is assigned when
the instance is launched. In addition, an instance can have a public
or floating IP address. Private IP addresses are used for
communication between instances, and public IP addresses are used
for communication with networks outside the cloud, including the
Internet.

.. note::

   When creating and updating a floating IP, only IPv4 addresses are
   considered on both the floating IP port and the internal port the
   floating IP is associated with. Additionally, creating floating IPs
   on networks without any IPv4 subnets is disallowed, since these
   floating IPs could not be allocated an IPv4 address.

- By default, both administrative and end users can associate floating IP
  addresses with projects and instances. You can change user permissions for
  managing IP addresses by updating the ``/etc/nova/policy.json``
  file (see the sketch after this list). For basic floating-IP procedures,
  refer to the `Allocate a floating address to an instance
  <https://docs.openstack.org/user-guide/configure-access-and-security-for-instances.html#allocate-a-floating-ip-address-to-an-instance>`_
  section in the OpenStack End User Guide.

- For details on creating public networks using OpenStack Networking
  (``neutron``), refer to :ref:`networking-adv-features`.
  No floating IP addresses are created by default in OpenStack Networking.
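
As a minimal sketch of such a policy override, assuming the
Newton/Ocata-era policy key ``os_compute_api:os-floating-ips`` (verify the
exact rule name against your Nova release):

.. code-block:: json

   {
       "os_compute_api:os-floating-ips": "rule:admin_or_owner"
   }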

List addresses for all projects
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To list all floating IP addresses for all projects, run:

.. code-block:: console

   $ openstack floating ip list
   +--------------------------------------+---------------------+------------------+------+
   | ID                                   | Floating IP Address | Fixed IP Address | Port |
   +--------------------------------------+---------------------+------------------+------+
   | 89532684-13e1-4af3-bd79-f434c9920cc3 | 172.24.4.235        | None             | None |
   | c70ad74b-2f64-4e60-965e-f24fc12b3194 | 172.24.4.236        | None             | None |
   | ea3ebc6d-a146-47cd-aaa8-35f06e1e8c3d | 172.24.4.229        | None             | None |
   +--------------------------------------+---------------------+------------------+------+

Create floating IP addresses
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To create a floating IP address, run:

.. code-block:: console

   $ openstack floating ip create --fixed-ip-address <fixed-ip-address> <network>

For example:

.. code-block:: console

   $ openstack floating ip create --fixed-ip-address 192.168.1.56 NETWORK

.. note::

   You should use a free IP address that is valid for your network.
   If you are not sure, at least try to avoid the DHCP address range:

   - Pick a small range (/29 gives an 8 address range, 6 of
     which will be usable).

   - Use :command:`nmap` to check a range's availability. For example,
     192.168.1.56/29 represents a small range of addresses
     (192.168.1.56-63, with 57-62 usable), and you could run the
     command :command:`nmap -sn 192.168.1.56/29` to check whether the
     entire range is currently unused.

Delete floating IP addresses
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To delete a floating IP address, run:

.. code-block:: console

   $ openstack floating ip delete FLOATING_IP

For example:

.. code-block:: console

   $ openstack floating ip delete 192.168.1.56
@ -1,41 +0,0 @@

======================================
Launch and manage stacks using the CLI
======================================

The Orchestration service provides a template-based
orchestration engine. Administrators can use the orchestration engine
to create and manage OpenStack cloud infrastructure resources. For
example, an administrator can define storage, networking, instances,
and applications to use as a repeatable running environment.

Templates are used to create stacks, which are collections
of resources. For example, a stack might include instances,
floating IPs, volumes, security groups, or users.
The Orchestration service offers access to all OpenStack
core services through a single modular template, with additional
orchestration capabilities such as auto-scaling and basic
high availability.
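
As a concrete illustration, here is a minimal sketch of a HOT template
that defines a stack with a single instance. The image, flavor, and
network names are placeholders for values from your own cloud:

.. code-block:: yaml

   heat_template_version: 2016-10-14

   description: Minimal example stack with one instance

   resources:
     example_server:
       type: OS::Nova::Server
       properties:
         image: cirros          # assumed image name
         flavor: m1.tiny        # assumed flavor name
         networks:
           - network: private   # assumed network name

Saved as ``example.yaml``, such a template could be launched with the
:command:`openstack stack create -t example.yaml STACK` command.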

For information about:

- basic creation and deletion of Orchestration stacks, refer
  to the `OpenStack End User Guide
  <https://docs.openstack.org/user-guide/dashboard-stacks.html>`_

- **openstack** CLI, see the `OpenStackClient documentation
  <https://docs.openstack.org/developer/python-openstackclient/>`_

.. note::

   The ``heat`` CLI is deprecated in favor of ``python-openstackclient``.
   For a Python library, continue using ``python-heatclient``.

As an administrator, you can also carry out stack functions
on behalf of your users. For example, to resume, suspend,
or delete a stack, run:

.. code-block:: console

   $ openstack stack resume STACK
   $ openstack stack suspend STACK
   $ openstack stack delete STACK
@ -1,210 +0,0 @@

=================
Analyze log files
=================

Use the swift command-line client for Object Storage to analyze log files.

The swift client is simple to use, scalable, and flexible.

Use the swift client ``-o`` or ``--output`` option to get
short answers to questions about logs.

You can use the ``-o`` or ``--output`` option with a single object
download to redirect the command output to a specific file or to STDOUT
(``-``). The ability to redirect the output to STDOUT enables you to
pipe (``|``) data without saving it to disk first.

Upload and analyze log files
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

#. This example assumes that the ``logtest`` directory contains the
   following log files.

   .. code-block:: console

      2010-11-16-21_access.log
      2010-11-16-22_access.log
      2010-11-15-21_access.log
      2010-11-15-22_access.log

   Each file uses the following line format.

   .. code-block:: console

      Nov 15 21:53:52 lucid64 proxy-server - 127.0.0.1 15/Nov/2010/22/53/52 DELETE /v1/AUTH_cd4f57824deb4248a533f2c28bf156d3/2eefc05599d44df38a7f18b0b42ffedd HTTP/1.0 204 - \
      - test%3Atester%2CAUTH_tkcdab3c6296e249d7b7e2454ee57266ff - - - txaba5984c-aac7-460e-b04b-afc43f0c6571 - 0.0432

#. Change into the ``logtest`` directory:

   .. code-block:: console

      $ cd logtest

#. Upload the log files into the ``logtest`` container:

   .. code-block:: console

      $ swift -A http://swift-auth.com:11000/v1.0 -U test:tester -K testing upload logtest *.log

   .. code-block:: console

      2010-11-16-21_access.log
      2010-11-16-22_access.log
      2010-11-15-21_access.log
      2010-11-15-22_access.log

#. Get statistics for the account:

   .. code-block:: console

      $ swift -A http://swift-auth.com:11000/v1.0 -U test:tester -K testing \
        -q stat

   .. code-block:: console

         Account: AUTH_cd4f57824deb4248a533f2c28bf156d3
      Containers: 1
         Objects: 4
           Bytes: 5888268

#. Get statistics for the ``logtest`` container:

   .. code-block:: console

      $ swift -A http://swift-auth.com:11000/v1.0 -U test:tester -K testing \
        stat logtest

   .. code-block:: console

        Account: AUTH_cd4f57824deb4248a533f2c28bf156d3
      Container: logtest
        Objects: 4
          Bytes: 5864468
       Read ACL:
      Write ACL:

#. List all objects in the logtest container:

   .. code-block:: console

      $ swift -A http://swift-auth.com:11000/v1.0 -U test:tester -K testing \
        list logtest

   .. code-block:: console

      2010-11-15-21_access.log
      2010-11-15-22_access.log
      2010-11-16-21_access.log
      2010-11-16-22_access.log

Download and analyze an object
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This example uses the ``-o`` option and a hyphen (``-``) to get
information about an object.

Use the :command:`swift download` command to download the object. In this
command, stream the output to ``awk`` to break down requests by return
code for the hour 2200 on November 16, 2010.

Using the log line format, find the request type in column 9 and the
return code in column 12.

After ``awk`` processes the output, it pipes it to ``sort`` and ``uniq
-c`` to sum up the number of occurrences for each request type and
return code combination.

#. Download an object:

   .. code-block:: console

      $ swift -A http://swift-auth.com:11000/v1.0 -U test:tester -K testing \
        download -o - logtest 2010-11-16-22_access.log | \
        awk '{ print $9"-"$12}' | sort | uniq -c

   .. code-block:: console

      805 DELETE-204
      12 DELETE-404
      2 DELETE-409
      723 GET-200
      142 GET-204
      74 GET-206
      80 GET-304
      34 GET-401
      5 GET-403
      18 GET-404
      166 GET-412
      2 GET-416
      50 HEAD-200
      17 HEAD-204
      20 HEAD-401
      8 HEAD-404
      30 POST-202
      25 POST-204
      22 POST-400
      6 POST-404
      842 PUT-201
      2 PUT-202
      32 PUT-400
      4 PUT-403
      4 PUT-404
      2 PUT-411
      6 PUT-412
      6 PUT-413
      2 PUT-422
      8 PUT-499

#. Discover how many PUT requests are in each log file.

   Use a bash for loop with awk and swift with the ``-o`` or
   ``--output`` option and a hyphen (``-``) to discover how many
   PUT requests are in each log file.

   Run the :command:`swift list` command to list objects in the logtest
   container. Then, for each item in the list, run the
   :command:`swift download -o -` command. Pipe the output into grep to
   filter the PUT requests. Finally, pipe into ``wc -l`` to count the lines.

   .. code-block:: console

      $ for f in `swift -A http://swift-auth.com:11000/v1.0 -U test:tester \
        -K testing list logtest` ; \
        do echo -ne "$f - PUTS - " ; swift -A \
        http://swift-auth.com:11000/v1.0 -U test:tester \
        -K testing download -o - logtest $f | grep PUT | wc -l ; \
        done

   .. code-block:: console

      2010-11-15-21_access.log - PUTS - 402
      2010-11-15-22_access.log - PUTS - 1091
      2010-11-16-21_access.log - PUTS - 892
      2010-11-16-22_access.log - PUTS - 910

#. List the object names that begin with a specified string.

   #. Run the :command:`swift list -p 2010-11-15` command to list objects
      in the logtest container that begin with the ``2010-11-15`` string.

   #. For each item in the list, run the :command:`swift download -o -`
      command.

   #. Pipe the output to :command:`grep` and :command:`wc`.
      Use the :command:`echo` command to display the object name.

   .. code-block:: console

      $ for f in `swift -A http://swift-auth.com:11000/v1.0 -U test:tester \
        -K testing list -p 2010-11-15 logtest` ; \
        do echo -ne "$f - PUTS - " ; swift -A \
        http://127.0.0.1:11000/v1.0 -U test:tester \
        -K testing download -o - logtest $f | grep PUT | wc -l ; \
        done

   .. code-block:: console

      2010-11-15-21_access.log - PUTS - 402
      2010-11-15-22_access.log - PUTS - 910
@ -1,232 +0,0 @@

===================================
Manage Block Storage service quotas
===================================

As an administrative user, you can update the OpenStack Block
Storage service quotas for a project. You can also update the quota
defaults for a new project.

**Block Storage quotas**

===================  =============================================
Property name        Defines the number of
===================  =============================================
gigabytes            Volume gigabytes allowed for each project.
snapshots            Volume snapshots allowed for each project.
volumes              Volumes allowed for each project.
===================  =============================================

View Block Storage quotas
~~~~~~~~~~~~~~~~~~~~~~~~~

Administrative users can view Block Storage service quotas.

#. Obtain the project ID:

   .. code-block:: console

      $ project_id=$(openstack project show -f value -c id PROJECT_NAME)

#. List the default quotas for a project:

   .. code-block:: console

      $ openstack quota show --default $OS_TENANT_ID
      +-----------------------+-------+
      | Field                 | Value |
      +-----------------------+-------+
      | backup-gigabytes      | 1000  |
      | backups               | 10    |
      | cores                 | 20    |
      | fixed-ips             | -1    |
      | floating-ips          | 50    |
      | gigabytes             | 1000  |
      | gigabytes_lvmdriver-1 | -1    |
      | health_monitors       | None  |
      | injected-file-size    | 10240 |
      | injected-files        | 5     |
      | injected-path-size    | 255   |
      | instances             | 10    |
      | key-pairs             | 100   |
      | l7_policies           | None  |
      | listeners             | None  |
      | load_balancers        | None  |
      | location              | None  |
      | name                  | None  |
      | networks              | 10    |
      | per-volume-gigabytes  | -1    |
      | pools                 | None  |
      | ports                 | 50    |
      | project               | None  |
      | project_id            | None  |
      | properties            | 128   |
      | ram                   | 51200 |
      | rbac_policies         | 10    |
      | routers               | 10    |
      | secgroup-rules        | 100   |
      | secgroups             | 10    |
      | server-group-members  | 10    |
      | server-groups         | 10    |
      | snapshots             | 10    |
      | snapshots_lvmdriver-1 | -1    |
      | subnet_pools          | -1    |
      | subnets               | 10    |
      | volumes               | 10    |
      | volumes_lvmdriver-1   | -1    |
      +-----------------------+-------+

   .. note::

      Listing default quotas with the OpenStack command line client will
      provide all quotas for storage and network services. Previously, the
      :command:`cinder quota-defaults` command would list only storage
      quotas. You can use ``PROJECT_ID`` or ``$OS_TENANT_NAME`` arguments
      to show Block Storage service quotas. If the ``PROJECT_ID`` argument
      returns errors in locating resources, use ``$OS_TENANT_NAME``.

#. View Block Storage service quotas for a project:

   .. code-block:: console

      $ openstack quota show $OS_TENANT_ID
      +-----------------------+-------+
      | Field                 | Value |
      +-----------------------+-------+
      | backup-gigabytes      | 1000  |
      | backups               | 10    |
      | cores                 | 20    |
      | fixed-ips             | -1    |
      | floating-ips          | 50    |
      | gigabytes             | 1000  |
      | gigabytes_lvmdriver-1 | -1    |
      | health_monitors       | None  |
      | injected-file-size    | 10240 |
      | injected-files        | 5     |
      | injected-path-size    | 255   |
      | instances             | 10    |
      | key-pairs             | 100   |
      | l7_policies           | None  |
      | listeners             | None  |
      | load_balancers        | None  |
      | location              | None  |
      | name                  | None  |
      | networks              | 10    |
      | per-volume-gigabytes  | -1    |
      | pools                 | None  |
      | ports                 | 50    |
      | project               | None  |
      | project_id            | None  |
      | properties            | 128   |
      | ram                   | 51200 |
      | rbac_policies         | 10    |
      | routers               | 10    |
      | secgroup-rules        | 100   |
      | secgroups             | 10    |
      | server-group-members  | 10    |
      | server-groups         | 10    |
      | snapshots             | 10    |
      | snapshots_lvmdriver-1 | -1    |
      | subnet_pools          | -1    |
      | subnets               | 10    |
      | volumes               | 10    |
      | volumes_lvmdriver-1   | -1    |
      +-----------------------+-------+

#. Show the current usage of a per-project quota:

   .. code-block:: console

      $ cinder quota-usage $project_id
      +-----------------------+--------+----------+-------+
      | Type                  | In_use | Reserved | Limit |
      +-----------------------+--------+----------+-------+
      | backup_gigabytes      | 0      | 0        | 1000  |
      | backups               | 0      | 0        | 10    |
      | gigabytes             | 0      | 0        | 1000  |
      | gigabytes_lvmdriver-1 | 0      | 0        | -1    |
      | per_volume_gigabytes  | 0      | 0        | -1    |
      | snapshots             | 0      | 0        | 10    |
      | snapshots_lvmdriver-1 | 0      | 0        | -1    |
      | volumes               | 0      | 0        | 10    |
      | volumes_lvmdriver-1   | 0      | 0        | -1    |
      +-----------------------+--------+----------+-------+

Edit and update Block Storage service quotas
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Administrative users can edit and update Block Storage
service quotas.

#. To update a default value for a new project,
   update the corresponding ``quota_*`` property in the
   ``[DEFAULT]`` section of the ``/etc/cinder/cinder.conf`` file.
   For more information, see the `Block Storage service
   <https://docs.openstack.org/ocata/config-reference/block-storage.html>`_
   in OpenStack Configuration Reference.
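
   A minimal sketch of these defaults, assuming the standard ``quota_*``
   option names:

   .. code-block:: ini

      [DEFAULT]
      quota_volumes = 10
      quota_snapshots = 10
      quota_gigabytes = 1000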

#. To update Block Storage service quotas for an existing project, run:

   .. code-block:: console

      $ openstack quota set --QUOTA_NAME QUOTA_VALUE PROJECT_ID

   Replace ``QUOTA_NAME`` with the quota that is to be updated,
   ``QUOTA_VALUE`` with the required new value, and ``PROJECT_ID`` with
   the required project ID. Verify the result with the
   :command:`openstack quota show` command.

   For example:

   .. code-block:: console

      $ openstack quota set --volumes 15 $project_id
      $ openstack quota show $project_id
      +-----------------------+----------------------------------+
      | Field                 | Value                            |
      +-----------------------+----------------------------------+
      | backup-gigabytes      | 1000                             |
      | backups               | 10                               |
      | cores                 | 20                               |
      | fixed-ips             | -1                               |
      | floating-ips          | 29                               |
      | gigabytes             | 1000                             |
      | gigabytes_lvmdriver-1 | -1                               |
      | health_monitors       | None                             |
      | injected-file-size    | 10240                            |
      | injected-files        | 5                                |
      | injected-path-size    | 255                              |
      | instances             | 10                               |
      | key-pairs             | 100                              |
      | l7_policies           | None                             |
      | listeners             | None                             |
      | load_balancers        | None                             |
      | location              | None                             |
      | name                  | None                             |
      | networks              | 10                               |
      | per-volume-gigabytes  | -1                               |
      | pools                 | None                             |
      | ports                 | 50                               |
      | project               | e436339c7f9c476cb3120cf3b9667377 |
      | project_id            | None                             |
      | properties            | 128                              |
      | ram                   | 51200                            |
      | rbac_policies         | 10                               |
      | routers               | 10                               |
      | secgroup-rules        | 100                              |
      | secgroups             | 10                               |
      | server-group-members  | 10                               |
      | server-groups         | 10                               |
      | snapshots             | 10                               |
      | snapshots_lvmdriver-1 | -1                               |
      | subnet_pools          | -1                               |
      | subnets               | 10                               |
      | volumes               | 15                               |
      | volumes_lvmdriver-1   | -1                               |
      +-----------------------+----------------------------------+

#. To clear per-project quota limits:

   .. code-block:: console

      $ cinder quota-delete PROJECT_ID
@ -1,58 +0,0 @@

===============================
Manage Block Storage scheduling
===============================

As an administrative user, you have some control over which volume
back end your volumes reside on. You can specify affinity or
anti-affinity between two volumes. Affinity between volumes means
that they are stored on the same back end, whereas anti-affinity
means that they are stored on different back ends.

For information on how to set up multiple back ends for Cinder,
refer to :ref:`multi_backend`.

Example Usages
~~~~~~~~~~~~~~

#. Create a new volume on the same back end as Volume_A:

   .. code-block:: console

      $ openstack volume create --hint same_host=Volume_A-UUID \
        --size SIZE VOLUME_NAME

#. Create a new volume on a different back end than Volume_A:

   .. code-block:: console

      $ openstack volume create --hint different_host=Volume_A-UUID \
        --size SIZE VOLUME_NAME

#. Create a new volume on the same back end as Volume_A and Volume_B:

   .. code-block:: console

      $ openstack volume create --hint same_host=Volume_A-UUID \
        --hint same_host=Volume_B-UUID --size SIZE VOLUME_NAME

   Or:

   .. code-block:: console

      $ openstack volume create --hint same_host="[Volume_A-UUID, \
        Volume_B-UUID]" --size SIZE VOLUME_NAME

#. Create a new volume on a different back end than both Volume_A and
   Volume_B:

   .. code-block:: console

      $ openstack volume create --hint different_host=Volume_A-UUID \
        --hint different_host=Volume_B-UUID --size SIZE VOLUME_NAME

   Or:

   .. code-block:: console

      $ openstack volume create --hint different_host="[Volume_A-UUID, \
        Volume_B-UUID]" --size SIZE VOLUME_NAME
@ -1,158 +0,0 @@

============================================
Create and manage services and service users
============================================

The Identity service enables you to define services, as
follows:

- Service catalog template. The Identity service acts
  as a service catalog of endpoints for other OpenStack
  services. The ``/etc/keystone/default_catalog.templates``
  template file defines the endpoints for services. When
  the Identity service uses a template file back end,
  any changes that are made to the endpoints are cached.
  These changes do not persist when you restart the
  service or reboot the machine.

- An SQL back end for the catalog service. When the
  Identity service is online, you must add the services
  to the catalog. When you deploy a system for
  production, use the SQL back end.

The ``auth_token`` middleware supports the
use of either a shared secret or users for each
service.

To authenticate users against the Identity service, you must
create a service user for each OpenStack service. For example,
create a service user for the Compute, Block Storage, and
Networking services.

To configure the OpenStack services with service users,
create a project for all services and create users for each
service. Assign the admin role to each service user and
project pair. This role enables users to validate tokens and
authenticate and authorize other user requests.
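
Each service then points its ``auth_token`` middleware at its service
user. A minimal sketch, using the Block Storage service as an
illustration (the controller host name and password are placeholders):

.. code-block:: ini

   [keystone_authtoken]
   auth_type = password
   auth_url = http://controller:5000
   project_domain_name = Default
   user_domain_name = Default
   project_name = service
   username = cinder
   password = CINDER_PASS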

Create a service
~~~~~~~~~~~~~~~~

#. List the available services:

   .. code-block:: console

      $ openstack service list
      +----------------------------------+----------+------------+
      | ID                               | Name     | Type       |
      +----------------------------------+----------+------------+
      | 9816f1faaa7c4842b90fb4821cd09223 | cinder   | volume     |
      | 1250f64f31e34dcd9a93d35a075ddbe1 | cinderv2 | volumev2   |
      | da8cf9f8546b4a428c43d5e032fe4afc | ec2      | ec2        |
      | 5f105eeb55924b7290c8675ad7e294ae | glance   | image      |
      | dcaa566e912e4c0e900dc86804e3dde0 | keystone | identity   |
      | 4a715cfbc3664e9ebf388534ff2be76a | nova     | compute    |
      | 1aed4a6cf7274297ba4026cf5d5e96c5 | novav21  | computev21 |
      | bed063c790634c979778551f66c8ede9 | neutron  | network    |
      | 6feb2e0b98874d88bee221974770e372 | s3       | s3         |
      +----------------------------------+----------+------------+

#. To create a service, run this command:

   .. code-block:: console

      $ openstack service create --name SERVICE_NAME --description SERVICE_DESCRIPTION SERVICE_TYPE

   The arguments are:

   - ``SERVICE_NAME``: the unique name of the new service.
   - ``SERVICE_TYPE``: the service type, such as ``identity``,
     ``compute``, ``network``, ``image``, ``object-store``,
     or any other service identifier string.
   - ``SERVICE_DESCRIPTION``: the description of the service.

   For example, to create a ``swift`` service of type
   ``object-store``, run this command:

   .. code-block:: console

      $ openstack service create --name swift --description "object store service" object-store
      +-------------+----------------------------------+
      | Field       | Value                            |
      +-------------+----------------------------------+
      | description | object store service             |
      | enabled     | True                             |
      | id          | 84c23f4b942c44c38b9c42c5e517cd9a |
      | name        | swift                            |
      | type        | object-store                     |
      +-------------+----------------------------------+

#. To get details for a service, run this command:

   .. code-block:: console

      $ openstack service show SERVICE_TYPE|SERVICE_NAME|SERVICE_ID

   For example:

   .. code-block:: console

      $ openstack service show object-store
      +-------------+----------------------------------+
      | Field       | Value                            |
      +-------------+----------------------------------+
      | description | object store service             |
      | enabled     | True                             |
      | id          | 84c23f4b942c44c38b9c42c5e517cd9a |
      | name        | swift                            |
      | type        | object-store                     |
      +-------------+----------------------------------+

Create service users
~~~~~~~~~~~~~~~~~~~~

#. Create a project for the service users.
   Typically, this project is named ``service``,
   but choose any name you like:

   .. code-block:: console

      $ openstack project create service --domain default
      +-------------+----------------------------------+
      | Field       | Value                            |
      +-------------+----------------------------------+
      | description | None                             |
      | domain_id   | e601210181f54843b51b3edff41d4980 |
      | enabled     | True                             |
      | id          | 3e9f3f5399624b2db548d7f871bd5322 |
      | is_domain   | False                            |
      | name        | service                          |
      | parent_id   | e601210181f54843b51b3edff41d4980 |
      +-------------+----------------------------------+

#. Create service users for the relevant services for your
   deployment.

#. Assign the admin role to the user-project pair.

   .. code-block:: console

      $ openstack role add --project service --user SERVICE_USER_NAME admin
      +-------+----------------------------------+
      | Field | Value                            |
      +-------+----------------------------------+
      | id    | 233109e756c1465292f31e7662b429b1 |
      | name  | admin                            |
      +-------+----------------------------------+

Delete a service
~~~~~~~~~~~~~~~~

To delete a specified service, specify its type, name, or ID:

.. code-block:: console

   $ openstack service delete SERVICE_TYPE|SERVICE_NAME|SERVICE_ID

For example:

.. code-block:: console

   $ openstack service delete object-store
@ -1,166 +0,0 @@

==============
Manage flavors
==============

In OpenStack, flavors define the compute, memory, and
storage capacity of nova computing instances. To put it
simply, a flavor is an available hardware configuration for a
server. It defines the ``size`` of a virtual server
that can be launched.

.. note::

   Flavors can also determine on which compute host a flavor
   can be used to launch an instance. For information
   about customizing flavors, refer to :ref:`compute-flavors`.

A flavor consists of the following parameters:

Flavor ID
   Unique ID (integer or UUID) for the new flavor. If
   specifying 'auto', a UUID will be automatically generated.

Name
   Name for the new flavor.

VCPUs
   Number of virtual CPUs to use.

Memory MB
   Amount of RAM to use (in megabytes).

Root Disk GB
   Amount of disk space (in gigabytes) to use for
   the root (/) partition.

Ephemeral Disk GB
   Amount of disk space (in gigabytes) to use for
   the ephemeral partition. If unspecified, the value
   is ``0`` by default.
   Ephemeral disks offer machine local disk storage
   linked to the lifecycle of a VM instance. When a
   VM is terminated, all data on the ephemeral disk
   is lost. Ephemeral disks are not included in any
   snapshots.

Swap
   Amount of swap space (in megabytes) to use. If
   unspecified, the value is ``0`` by default.

RXTX Factor
   Optional property that allows servers with a different bandwidth to be
   created with the RXTX Factor. The default value is ``1.0``. That is,
   the new bandwidth is the same as that of the attached network. The
   RXTX Factor is available only for Xen or NSX based systems.

Is Public
   Boolean value that defines whether the flavor is available to all users.
   Defaults to ``True``.

Extra Specs
   Key and value pairs that define on which compute nodes a
   flavor can run. These pairs must match corresponding pairs on
   the compute nodes. They can be used to implement special resources, such
   as flavors that run only on compute nodes with GPU hardware.

As of Newton, there are no default flavors. The following table
lists the default flavors for Mitaka and earlier.

============ ========= =============== ===============
Flavor       VCPUs     Disk (in GB)    RAM (in MB)
============ ========= =============== ===============
m1.tiny      1         1               512
m1.small     1         20              2048
m1.medium    2         40              4096
m1.large     4         80              8192
m1.xlarge    8         160             16384
============ ========= =============== ===============

You can create and manage flavors with the
:command:`openstack flavor` commands provided by the ``python-openstackclient``
package.

Create a flavor
~~~~~~~~~~~~~~~

#. List flavors to show the ID and name, the amount
   of memory, the amount of disk space for the root
   partition and for the ephemeral partition, the
   swap, and the number of virtual CPUs for each
   flavor:

   .. code-block:: console

      $ openstack flavor list

#. To create a flavor, specify a name, ID, RAM
   size, disk size, and the number of VCPUs for the
   flavor, as follows:

   .. code-block:: console

      $ openstack flavor create FLAVOR_NAME --id FLAVOR_ID --ram RAM_IN_MB --disk ROOT_DISK_IN_GB --vcpus NUMBER_OF_VCPUS

   .. note::

      The flavor ID is a unique ID (integer or UUID) for the new flavor.
      If you specify 'auto', a UUID is automatically generated.

   Here is an example with additional optional
   parameters filled in that creates a public ``extra
   tiny`` flavor that automatically gets an ID
   assigned, with 256 MB memory, no disk space, and
   one VCPU. The rxtx-factor indicates the slice of
   bandwidth that the instances with this flavor can
   use (through the Virtual Interface (vif) creation
   in the hypervisor):

   .. code-block:: console

      $ openstack flavor create --public m1.extra_tiny --id auto --ram 256 --disk 0 --vcpus 1 --rxtx-factor 1

#. If an individual user or group of users needs a custom
   flavor that you do not want other projects to have access to,
   you can change the flavor's access to make it a private flavor.
   See
   `Private Flavors in the OpenStack Operations Guide <https://docs.openstack.org/ops-guide/ops-user-facing-operations.html#private-flavors>`_.

   For a list of optional parameters, run this command:

   .. code-block:: console

      $ openstack help flavor create

#. After you create a flavor, assign it to a
   project by specifying the flavor name or ID and
   the project ID:

   .. code-block:: console

      $ nova flavor-access-add FLAVOR TENANT_ID

#. In addition, you can set or unset ``extra_spec`` for the existing flavor.
   The ``extra_spec`` metadata keys can influence the instance directly when
   it is launched. For example, if a flavor sets the
   ``quota:vif_outbound_peak=65536`` key/value pair, the outbound peak
   bandwidth I/O of the instance is limited to at most 512 Mbps. Several
   aspects of an instance can be tuned this way, including ``CPU limits``,
   ``Disk tuning``, ``Bandwidth I/O``, ``Watchdog behavior``, and
   ``Random-number generator``.
   For information about supported metadata keys, see
   :ref:`compute-flavors`.

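   As a quick sketch of setting such a key on the ``m1.extra_tiny`` flavor
   from the earlier example (the value is in kilobytes per second, so
   ``65536`` corresponds to 512 Mbps):

   .. code-block:: console

      $ nova flavor-key m1.extra_tiny set quota:vif_outbound_peak=65536
      $ nova flavor-key m1.extra_tiny unset quota:vif_outbound_peak
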
   For a list of optional parameters, run this command:

   .. code-block:: console

      $ nova help flavor-key

Delete a flavor
~~~~~~~~~~~~~~~

Delete a specified flavor, as follows:

.. code-block:: console

   $ openstack flavor delete FLAVOR_ID

@ -1,379 +0,0 @@

=================================
Manage projects, users, and roles
=================================

As an administrator, you manage projects, users, and
roles. Projects are organizational units in the cloud to which
you can assign users. Projects are also known as *tenants* or
*accounts*. Users can be members of one or more projects. Roles
define which actions users can perform. You assign roles to
user-project pairs.

You can define actions for OpenStack service roles in the
``/etc/PROJECT/policy.json`` files. For example, define actions for
Compute service roles in the ``/etc/nova/policy.json`` file.

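As a minimal sketch (the exact rule names and defaults vary by service and
release), a ``policy.json`` entry maps an API action to the roles that are
allowed to perform it:

.. code-block:: json

   {
       "os_compute_api:servers:create": "rule:admin_or_owner"
   }
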
You can manage projects, users, and roles independently from each other.

During cloud set up, the operator defines at least one project, user,
and role.

You can add, update, and delete projects and users, assign users to
one or more projects, and change or remove the assignment. To enable or
temporarily disable a project or user, update that project or user.
You can also change quotas at the project level.

Before you can delete a user account, you must remove the user account
from its primary project.

Before you can run client commands, you must download and
source an OpenStack RC file. See `Download and source the OpenStack RC file
<https://docs.openstack.org/user-guide/common/cli-set-environment-variables-using-openstack-rc.html#download-and-source-the-openstack-rc-file>`_.

Projects
~~~~~~~~

A project is a group of zero or more users. In Compute, a project owns
virtual machines. In Object Storage, a project owns containers. Users
can be associated with more than one project. Each project and user
pairing can have a role associated with it.

List projects
-------------

List all projects with their ID, name, and whether they are
enabled or disabled:

.. code-block:: console

   $ openstack project list
   +----------------------------------+--------------------+
   | ID                               | Name               |
   +----------------------------------+--------------------+
   | f7ac731cc11f40efbc03a9f9e1d1d21f | admin              |
   | c150ab41f0d9443f8874e32e725a4cc8 | alt_demo           |
   | a9debfe41a6d4d09a677da737b907d5e | demo               |
   | 9208739195a34c628c58c95d157917d7 | invisible_to_admin |
   | 3943a53dc92a49b2827fae94363851e1 | service            |
   | 80cab5e1f02045abad92a2864cfd76cb | test_project       |
   +----------------------------------+--------------------+

Create a project
----------------

Create a project named ``new-project``:

.. code-block:: console

   $ openstack project create --description 'my new project' new-project \
     --domain default
   +-------------+----------------------------------+
   | Field       | Value                            |
   +-------------+----------------------------------+
   | description | my new project                   |
   | domain_id   | e601210181f54843b51b3edff41d4980 |
   | enabled     | True                             |
   | id          | 1a4a0618b306462c9830f876b0bd6af2 |
   | is_domain   | False                            |
   | name        | new-project                      |
   | parent_id   | e601210181f54843b51b3edff41d4980 |
   +-------------+----------------------------------+

Update a project
----------------

Specify the project ID to update a project. You can update the name,
description, and enabled status of a project.

- To temporarily disable a project:

  .. code-block:: console

     $ openstack project set PROJECT_ID --disable

- To enable a disabled project:

  .. code-block:: console

     $ openstack project set PROJECT_ID --enable

- To update the name of a project:

  .. code-block:: console

     $ openstack project set PROJECT_ID --name project-new

- To verify your changes, show information for the updated project:

  .. code-block:: console

     $ openstack project show PROJECT_ID
     +-------------+----------------------------------+
     | Field       | Value                            |
     +-------------+----------------------------------+
     | description | my new project                   |
     | enabled     | True                             |
     | id          | 0b0b995694234521bf93c792ed44247f |
     | name        | new-project                      |
     | properties  |                                  |
     +-------------+----------------------------------+

Delete a project
----------------

Specify the project ID to delete a project:

.. code-block:: console

   $ openstack project delete PROJECT_ID

Users
~~~~~

List users
----------

List all users:

.. code-block:: console

   $ openstack user list
   +----------------------------------+----------+
   | ID                               | Name     |
   +----------------------------------+----------+
   | 352b37f5c89144d4ad0534139266d51f | admin    |
   | 86c0de739bcb4802b8dc786921355813 | demo     |
   | 32ec34aae8ea432e8af560a1cec0e881 | glance   |
   | 7047fcb7908e420cb36e13bbd72c972c | nova     |
   +----------------------------------+----------+

Create a user
-------------

To create a user, you must specify a name. Optionally, you can
specify a project ID, password, and email address. It is recommended
that you include the project ID and password because the user cannot
log in to the dashboard without this information.

Create the ``new-user`` user:

.. code-block:: console

   $ openstack user create --project new-project --password PASSWORD new-user
   +------------+----------------------------------+
   | Field      | Value                            |
   +------------+----------------------------------+
   | email      | None                             |
   | enabled    | True                             |
   | id         | 6322872d9c7e445dbbb49c1f9ca28adc |
   | name       | new-user                         |
   | project_id | 0b0b995694234521bf93c792ed44247f |
   | username   | new-user                         |
   +------------+----------------------------------+

Update a user
-------------

You can update the name, email address, and enabled status for a user.

- To temporarily disable a user account:

  .. code-block:: console

     $ openstack user set USER_NAME --disable

  If you disable a user account, the user cannot log in to the
  dashboard. However, data for the user account is maintained, so you
  can enable the user at any time.

- To enable a disabled user account:

  .. code-block:: console

     $ openstack user set USER_NAME --enable

- To change the name and email address for a user account:

  .. code-block:: console

     $ openstack user set USER_NAME --name user-new --email new-user@example.com
     User has been updated.

Delete a user
-------------

Delete a specified user account:

.. code-block:: console

   $ openstack user delete USER_NAME

Roles and role assignments
~~~~~~~~~~~~~~~~~~~~~~~~~~

List available roles
--------------------

List the available roles:

.. code-block:: console

   $ openstack role list
   +----------------------------------+---------------+
   | ID                               | Name          |
   +----------------------------------+---------------+
   | 71ccc37d41c8491c975ae72676db687f | Member        |
   | 149f50a1fe684bfa88dae76a48d26ef7 | ResellerAdmin |
   | 9fe2ff9ee4384b1894a90878d3e92bab | _member_      |
   | 6ecf391421604da985db2f141e46a7c8 | admin         |
   | deb4fffd123c4d02a907c2c74559dccf | anotherrole   |
   +----------------------------------+---------------+

Create a role
-------------

Users can be members of multiple projects. To assign users to multiple
projects, define a role and assign that role to a user-project pair.

Create the ``new-role`` role:

.. code-block:: console

   $ openstack role create new-role
   +-----------+----------------------------------+
   | Field     | Value                            |
   +-----------+----------------------------------+
   | domain_id | None                             |
   | id        | a34425c884c74c8881496dc2c2e84ffc |
   | name      | new-role                         |
   +-----------+----------------------------------+

.. note::

   If you are using identity v3, you may need to use the
   ``--domain`` option with a specific domain name.

Assign a role
-------------

To assign a user to a project, you must assign the role to a
user-project pair. To do this, you need the user, role, and project
IDs.

#. List users and note the user ID you want to assign to the role:

   .. code-block:: console

      $ openstack user list
      +----------------------------------+----------+
      | ID                               | Name     |
      +----------------------------------+----------+
      | 6ab5800949644c3e8fb86aaeab8275c8 | admin    |
      | dfc484b9094f4390b9c51aba49a6df34 | demo     |
      | 55389ff02f5e40cf85a053cc1cacb20c | alt_demo |
      | bc52bcfd882f4d388485451c4a29f8e0 | nova     |
      | 255388ffa6e54ec991f584cb03085e77 | glance   |
      | 48b6e6dec364428da89ba67b654fac03 | cinder   |
      | c094dd5a8e1d4010832c249d39541316 | neutron  |
      | 6322872d9c7e445dbbb49c1f9ca28adc | new-user |
      +----------------------------------+----------+

#. List role IDs and note the role ID you want to assign:

   .. code-block:: console

      $ openstack role list
      +----------------------------------+---------------+
      | ID                               | Name          |
      +----------------------------------+---------------+
      | 71ccc37d41c8491c975ae72676db687f | Member        |
      | 149f50a1fe684bfa88dae76a48d26ef7 | ResellerAdmin |
      | 9fe2ff9ee4384b1894a90878d3e92bab | _member_      |
      | 6ecf391421604da985db2f141e46a7c8 | admin         |
      | deb4fffd123c4d02a907c2c74559dccf | anotherrole   |
      | bef1f95537914b1295da6aa038ef4de6 | new-role      |
      +----------------------------------+---------------+

#. List projects and note the project ID you want to assign to the role:

   .. code-block:: console

      $ openstack project list
      +----------------------------------+--------------------+
      | ID                               | Name               |
      +----------------------------------+--------------------+
      | 0b0b995694234521bf93c792ed44247f | new-project        |
      | 29c09e68e6f741afa952a837e29c700b | admin              |
      | 3a7ab11d3be74d3c9df3ede538840966 | invisible_to_admin |
      | 71a2c23bab884c609774c2db6fcee3d0 | service            |
      | 87e48a8394e34d13afc2646bc85a0d8c | alt_demo           |
      | fef7ae86615f4bf5a37c1196d09bcb95 | demo               |
      +----------------------------------+--------------------+

#. Assign a role to a user-project pair:

   .. code-block:: console

      $ openstack role add --user USER_NAME --project TENANT_ID ROLE_NAME

   For example, assign the ``new-role`` role to the ``demo`` user and
   ``test-project`` project pair:

   .. code-block:: console

      $ openstack role add --user demo --project test-project new-role

#. Verify the role assignment:

   .. code-block:: console

      $ openstack role assignment list --user USER_NAME \
        --project PROJECT_ID --names
      +----------------------------------+-------------+---------+------+
      | ID                               | Name        | Project | User |
      +----------------------------------+-------------+---------+------+
      | a34425c884c74c8881496dc2c2e84ffc | new-role    | demo    | demo |
      | 04a7e3192c0745a2b1e3d2baf5a3ee0f | Member      | demo    | demo |
      | 62bcf3e27eef4f648eb72d1f9920f6e5 | anotherrole | demo    | demo |
      +----------------------------------+-------------+---------+------+

   .. note::

      Before the Newton release, users would run the
      :command:`openstack role list --user USER_NAME --project TENANT_ID`
      command to verify the role assignment.

View role details
-----------------

View details for a specified role:

.. code-block:: console

   $ openstack role show ROLE_NAME
   +-----------+----------------------------------+
   | Field     | Value                            |
   +-----------+----------------------------------+
   | domain_id | None                             |
   | id        | a34425c884c74c8881496dc2c2e84ffc |
   | name      | new-role                         |
   +-----------+----------------------------------+

Remove a role
-------------

Remove a role from a user-project pair:

#. Run the :command:`openstack role remove` command:

   .. code-block:: console

      $ openstack role remove --user USER_NAME --project TENANT_ID ROLE_NAME

#. Verify the role removal:

   .. code-block:: console

      $ openstack role list --user USER_NAME --project TENANT_ID

   If the role was removed, the command output omits the removed role.

@ -1,9 +0,0 @@

===============
Manage services
===============

.. toctree::
   :maxdepth: 2

   cli-keystone-manage-services.rst
   cli-nova-manage-services.rst

@ -1,40 +0,0 @@

.. _share:

=============
Manage shares
=============

A share is provided by file storage. You can give access to a share to
instances. To create and manage shares, use ``manila`` client commands.

Migrate a share
~~~~~~~~~~~~~~~

As an administrator, you can migrate a share with its data from one
location to another in a manner that is transparent to users and
workloads.

Possible use cases for data migration include:

- Bring down a physical storage device for maintenance without
  disrupting workloads.

- Modify the properties of a share.

- Free up space in a thinly-provisioned back end.

Migrate a share with the :command:`manila migrate` command, as shown in the
following example:

.. code-block:: console

   $ manila migrate shareID destinationHost --force-host-copy True|False

In this example, ``--force-host-copy True`` forces the generic
host-based migration mechanism and bypasses any driver optimizations.
``destinationHost`` is in the format ``host#pool``, which includes the
destination host and pool.

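For instance, a hypothetical migration of a share named ``share_1`` to a
``generic1`` back end (both names are illustrative assumptions) might look
like this:

.. code-block:: console

   $ manila migrate share_1 ubuntu@generic1#GENERIC1 --force-host-copy True
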
.. note::

   If the user is not an administrator, the migration fails.

@ -1,549 +0,0 @@

================================
Manage Networking service quotas
================================

A quota limits the number of available resources. A default
quota might be enforced for all projects. When you try to create
more resources than the quota allows, an error occurs:

.. code-block:: console

   $ openstack network create test_net
   Quota exceeded for resources: ['network']

Per-project quota configuration is also supported by the quota
extension API. See :ref:`cfg_quotas_per_tenant` for details.

Basic quota configuration
~~~~~~~~~~~~~~~~~~~~~~~~~

In the Networking default quota mechanism, all projects have
the same quota values, such as the number of resources that a
project can create.

The quota value is defined in the OpenStack Networking
``/etc/neutron/neutron.conf`` configuration file. This example shows the
default quota values:

.. code-block:: ini

   [quotas]
   # number of networks allowed per tenant; a negative value means unlimited
   quota_network = 100

   # number of subnets allowed per tenant; a negative value means unlimited
   quota_subnet = 100

   # number of ports allowed per tenant; a negative value means unlimited
   quota_port = 500

   # default driver to use for quota checks
   quota_driver = neutron.quota.ConfDriver

OpenStack Networking also supports quotas for L3 resources:
router and floating IP. Add these lines to the
``quotas`` section in the ``/etc/neutron/neutron.conf`` file:

.. code-block:: ini

   [quotas]
   # number of routers allowed per tenant; a negative value means unlimited
   quota_router = 10

   # number of floating IPs allowed per tenant; a negative value means unlimited
   quota_floatingip = 50

OpenStack Networking also supports quotas for security group
resources: the number of security groups and the number of rules for
each security group. Add these lines to the
``quotas`` section in the ``/etc/neutron/neutron.conf`` file:

.. code-block:: ini

   [quotas]
   # number of security groups per tenant; a negative value means unlimited
   quota_security_group = 10

   # number of security group rules allowed per tenant; a negative value means unlimited
   quota_security_group_rule = 100

.. _cfg_quotas_per_tenant:

Configure per-project quotas
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

OpenStack Networking also supports per-project quota limits through the
quota extension API.

Use these commands to manage per-project quotas:

neutron quota-delete
   Delete defined quotas for a specified project

openstack quota show
   Lists defined quotas for all projects

openstack quota show PROJECT_ID
   Shows quotas for a specified project

neutron quota-default-show
   Show default quotas for a specified project

openstack quota set
   Updates quotas for a specified project

Only users with the ``admin`` role can change a quota value. By default,
the default set of quotas is enforced for all projects, so no
:command:`quota-create` command exists.

#. Configure Networking to show per-project quotas.

   Set the ``quota_driver`` option in the ``/etc/neutron/neutron.conf`` file.

   .. code-block:: ini

      quota_driver = neutron.db.quota_db.DbQuotaDriver

   When you set this option, the output for Networking commands shows ``quotas``.

#. List Networking extensions.

   To list the Networking extensions, run this command:

   .. code-block:: console

      $ openstack extension list --network

   The command shows the ``quotas`` extension, which provides
   per-project quota management support.

   .. note::

      Many of the extensions shown below are supported in the Mitaka release and later.

   .. code-block:: console

      +------------------------+------------------------+--------------------------+
      | Name                   | Alias                  | Description              |
      +------------------------+------------------------+--------------------------+
      | ...                    | ...                    | ...                      |
      | Quota management       | quotas                 | Expose functions for     |
      | support                |                        | quotas management per    |
      |                        |                        | tenant                   |
      | ...                    | ...                    | ...                      |
      +------------------------+------------------------+--------------------------+

#. Show information for the quotas extension.

   To show information for the ``quotas`` extension, run this command:

   .. code-block:: console

      $ openstack extension show quotas
      +-------------+---------------------------------------------------+
      | Field       | Value                                             |
      +-------------+---------------------------------------------------+
      | Alias       | quotas                                            |
      | Description | Expose functions for quotas management per tenant |
      | Links       | []                                                |
      | Name        | Quota management support                          |
      | Namespace   |                                                   |
      | Updated     |                                                   |
      +-------------+---------------------------------------------------+

   .. note::

      The :command:`openstack extension show` command is currently only
      supported by networking v2.

   .. note::

      Only some plug-ins support per-project quotas.
      Specifically, Open vSwitch, Linux Bridge, and VMware NSX
      support them, but new versions of other plug-ins might
      bring additional functionality. See the documentation for
      each plug-in.

#. List a project's default quotas.

   The :command:`openstack quota show` command lists quotas for the current
   project.

   .. code-block:: console

      $ openstack quota show
      +-----------------------+----------------------------------+
      | Field                 | Value                            |
      +-----------------------+----------------------------------+
      | backup-gigabytes      | 1000                             |
      | backups               | 10                               |
      | cores                 | 20                               |
      | fixed-ips             | -1                               |
      | floating-ips          | 50                               |
      | gigabytes             | 1000                             |
      | gigabytes_lvmdriver-1 | -1                               |
      | health_monitors       | None                             |
      | injected-file-size    | 10240                            |
      | injected-files        | 5                                |
      | injected-path-size    | 255                              |
      | instances             | 10                               |
      | key-pairs             | 100                              |
      | l7_policies           | None                             |
      | listeners             | None                             |
      | load_balancers        | None                             |
      | location              | None                             |
      | name                  | None                             |
      | networks              | 100                              |
      | per-volume-gigabytes  | -1                               |
      | pools                 | None                             |
      | ports                 | 500                              |
      | project               | e436339c7f9c476cb3120cf3b9667377 |
      | project_id            | None                             |
      | properties            | 128                              |
      | ram                   | 51200                            |
      | rbac_policies         | 10                               |
      | routers               | 10                               |
      | secgroup-rules        | 100                              |
      | secgroups             | 10                               |
      | server-group-members  | 10                               |
      | server-groups         | 10                               |
      | snapshots             | 10                               |
      | snapshots_lvmdriver-1 | -1                               |
      | subnet_pools          | -1                               |
      | subnets               | 100                              |
      | volumes               | 10                               |
      | volumes_lvmdriver-1   | -1                               |
      +-----------------------+----------------------------------+

#. Show per-project quota values.

   The :command:`openstack quota show` command reports the current
   set of quota limits. Administrators can provide the project ID of a
   specific project with the :command:`openstack quota show` command
   to view quotas for the specific project. If per-project quota
   limits are not enabled for the project, the command shows
   the default set of quotas.

   .. note::

      Additional quotas added in the Mitaka release include ``security_group``,
      ``security_group_rule``, ``subnet``, and ``subnetpool``.

   .. code-block:: console

      $ openstack quota show e436339c7f9c476cb3120cf3b9667377
      +-----------------------+----------------------------------+
      | Field                 | Value                            |
      +-----------------------+----------------------------------+
      | backup-gigabytes      | 1000                             |
      | backups               | 10                               |
      | cores                 | 20                               |
      | fixed-ips             | -1                               |
      | floating-ips          | 50                               |
      | gigabytes             | 1000                             |
      | gigabytes_lvmdriver-1 | -1                               |
      | health_monitors       | None                             |
      | injected-file-size    | 10240                            |
      | injected-files        | 5                                |
      | injected-path-size    | 255                              |
      | instances             | 10                               |
      | key-pairs             | 100                              |
      | l7_policies           | None                             |
      | listeners             | None                             |
      | load_balancers        | None                             |
      | location              | None                             |
      | name                  | None                             |
      | networks              | 100                              |
      | per-volume-gigabytes  | -1                               |
      | pools                 | None                             |
      | ports                 | 500                              |
      | project               | e436339c7f9c476cb3120cf3b9667377 |
      | project_id            | None                             |
      | properties            | 128                              |
      | ram                   | 51200                            |
      | rbac_policies         | 10                               |
      | routers               | 10                               |
      | secgroup-rules        | 100                              |
      | secgroups             | 10                               |
      | server-group-members  | 10                               |
      | server-groups         | 10                               |
      | snapshots             | 10                               |
      | snapshots_lvmdriver-1 | -1                               |
      | subnet_pools          | -1                               |
      | subnets               | 100                              |
      | volumes               | 10                               |
      | volumes_lvmdriver-1   | -1                               |
      +-----------------------+----------------------------------+

#. Update quota values for a specified project.

   Use the :command:`openstack quota set` command to
   update a quota for a specified project.

   .. code-block:: console

      $ openstack quota set --networks 5 e436339c7f9c476cb3120cf3b9667377
      $ openstack quota show e436339c7f9c476cb3120cf3b9667377
      +-----------------------+----------------------------------+
      | Field                 | Value                            |
      +-----------------------+----------------------------------+
      | backup-gigabytes      | 1000                             |
      | backups               | 10                               |
      | cores                 | 20                               |
      | fixed-ips             | -1                               |
      | floating-ips          | 50                               |
      | gigabytes             | 1000                             |
      | gigabytes_lvmdriver-1 | -1                               |
      | health_monitors       | None                             |
      | injected-file-size    | 10240                            |
      | injected-files        | 5                                |
      | injected-path-size    | 255                              |
      | instances             | 10                               |
      | key-pairs             | 100                              |
      | l7_policies           | None                             |
      | listeners             | None                             |
      | load_balancers        | None                             |
      | location              | None                             |
      | name                  | None                             |
      | networks              | 5                                |
      | per-volume-gigabytes  | -1                               |
      | pools                 | None                             |
      | ports                 | 500                              |
      | project               | e436339c7f9c476cb3120cf3b9667377 |
      | project_id            | None                             |
      | properties            | 128                              |
      | ram                   | 51200                            |
      | rbac_policies         | 10                               |
      | routers               | 10                               |
      | secgroup-rules        | 100                              |
      | secgroups             | 10                               |
      | server-group-members  | 10                               |
      | server-groups         | 10                               |
      | snapshots             | 10                               |
      | snapshots_lvmdriver-1 | -1                               |
      | subnet_pools          | -1                               |
      | subnets               | 100                              |
      | volumes               | 10                               |
      | volumes_lvmdriver-1   | -1                               |
      +-----------------------+----------------------------------+

   You can update quotas for multiple resources through one
   command.

   .. code-block:: console

      $ openstack quota set --subnets 5 --ports 20 e436339c7f9c476cb3120cf3b9667377
      $ openstack quota show e436339c7f9c476cb3120cf3b9667377
      +-----------------------+----------------------------------+
      | Field                 | Value                            |
      +-----------------------+----------------------------------+
      | backup-gigabytes      | 1000                             |
      | backups               | 10                               |
      | cores                 | 20                               |
      | fixed-ips             | -1                               |
      | floating-ips          | 50                               |
      | gigabytes             | 1000                             |
      | gigabytes_lvmdriver-1 | -1                               |
      | health_monitors       | None                             |
      | injected-file-size    | 10240                            |
      | injected-files        | 5                                |
      | injected-path-size    | 255                              |
      | instances             | 10                               |
      | key-pairs             | 100                              |
      | l7_policies           | None                             |
      | listeners             | None                             |
      | load_balancers        | None                             |
      | location              | None                             |
      | name                  | None                             |
      | networks              | 5                                |
      | per-volume-gigabytes  | -1                               |
      | pools                 | None                             |
      | ports                 | 20                               |
      | project               | e436339c7f9c476cb3120cf3b9667377 |
      | project_id            | None                             |
      | properties            | 128                              |
      | ram                   | 51200                            |
      | rbac_policies         | 10                               |
      | routers               | 10                               |
      | secgroup-rules        | 100                              |
      | secgroups             | 10                               |
      | server-group-members  | 10                               |
      | server-groups         | 10                               |
      | snapshots             | 10                               |
      | snapshots_lvmdriver-1 | -1                               |
      | subnet_pools          | -1                               |
      | subnets               | 5                                |
      | volumes               | 10                               |
      | volumes_lvmdriver-1   | -1                               |
      +-----------------------+----------------------------------+

   To update the limit of an L3 resource, such as a router
   or a floating IP, specify the new value with the corresponding
   option of the same :command:`openstack quota set` command.

   This example updates the limit of the number of floating
   IPs for the specified project.

   .. code-block:: console

      $ openstack quota set --floating-ips 20 e436339c7f9c476cb3120cf3b9667377
      $ openstack quota show e436339c7f9c476cb3120cf3b9667377
      +-----------------------+----------------------------------+
      | Field                 | Value                            |
      +-----------------------+----------------------------------+
      | backup-gigabytes      | 1000                             |
      | backups               | 10                               |
      | cores                 | 20                               |
      | fixed-ips             | -1                               |
      | floating-ips          | 20                               |
      | gigabytes             | 1000                             |
      | gigabytes_lvmdriver-1 | -1                               |
      | health_monitors       | None                             |
      | injected-file-size    | 10240                            |
      | injected-files        | 5                                |
      | injected-path-size    | 255                              |
      | instances             | 10                               |
      | key-pairs             | 100                              |
      | l7_policies           | None                             |
      | listeners             | None                             |
      | load_balancers        | None                             |
      | location              | None                             |
      | name                  | None                             |
      | networks              | 5                                |
      | per-volume-gigabytes  | -1                               |
      | pools                 | None                             |
      | ports                 | 20                               |
      | project               | e436339c7f9c476cb3120cf3b9667377 |
      | project_id            | None                             |
      | properties            | 128                              |
      | ram                   | 51200                            |
      | rbac_policies         | 10                               |
      | routers               | 10                               |
      | secgroup-rules        | 100                              |
      | secgroups             | 10                               |
      | server-group-members  | 10                               |
      | server-groups         | 10                               |
      | snapshots             | 10                               |
      | snapshots_lvmdriver-1 | -1                               |
      | subnet_pools          | -1                               |
      | subnets               | 5                                |
      | volumes               | 10                               |
      | volumes_lvmdriver-1   | -1                               |
      +-----------------------+----------------------------------+

   You can update the limits of multiple resources, including both
   L2 and L3 resources, through one command:

   .. code-block:: console

      $ openstack quota set --networks 3 --subnets 3 --ports 3 \
        --floating-ips 3 --routers 3 e436339c7f9c476cb3120cf3b9667377
      $ openstack quota show e436339c7f9c476cb3120cf3b9667377
      +-----------------------+----------------------------------+
      | Field                 | Value                            |
      +-----------------------+----------------------------------+
      | backup-gigabytes      | 1000                             |
      | backups               | 10                               |
      | cores                 | 20                               |
      | fixed-ips             | -1                               |
      | floating-ips          | 3                                |
      | gigabytes             | 1000                             |
      | gigabytes_lvmdriver-1 | -1                               |
      | health_monitors       | None                             |
      | injected-file-size    | 10240                            |
      | injected-files        | 5                                |
      | injected-path-size    | 255                              |
      | instances             | 10                               |
      | key-pairs             | 100                              |
      | l7_policies           | None                             |
      | listeners             | None                             |
      | load_balancers        | None                             |
      | location              | None                             |
      | name                  | None                             |
      | networks              | 3                                |
      | per-volume-gigabytes  | -1                               |
      | pools                 | None                             |
      | ports                 | 3                                |
      | project               | e436339c7f9c476cb3120cf3b9667377 |
      | project_id            | None                             |
      | properties            | 128                              |
      | ram                   | 51200                            |
      | rbac_policies         | 10                               |
      | routers               | 3                                |
      | secgroup-rules        | 100                              |
      | secgroups             | 10                               |
      | server-group-members  | 10                               |
      | server-groups         | 10                               |
      | snapshots             | 10                               |
      | snapshots_lvmdriver-1 | -1                               |
      | subnet_pools          | -1                               |
      | subnets               | 3                                |
      | volumes               | 10                               |
      | volumes_lvmdriver-1   | -1                               |
      +-----------------------+----------------------------------+

#. Delete per-project quota values.

   To clear per-project quota limits, use the
   :command:`neutron quota-delete` command.

   .. code-block:: console

      $ neutron quota-delete --tenant_id e436339c7f9c476cb3120cf3b9667377
      Deleted quota: e436339c7f9c476cb3120cf3b9667377

   After you run this command, you can see that quota
   values for the project are reset to the default values.

   .. code-block:: console

      $ openstack quota show e436339c7f9c476cb3120cf3b9667377
      +-----------------------+----------------------------------+
      | Field                 | Value                            |
      +-----------------------+----------------------------------+
      | backup-gigabytes      | 1000                             |
      | backups               | 10                               |
      | cores                 | 20                               |
      | fixed-ips             | -1                               |
      | floating-ips          | 50                               |
      | gigabytes             | 1000                             |
      | gigabytes_lvmdriver-1 | -1                               |
      | health_monitors       | None                             |
      | injected-file-size    | 10240                            |
      | injected-files        | 5                                |
      | injected-path-size    | 255                              |
      | instances             | 10                               |
      | key-pairs             | 100                              |
      | l7_policies           | None                             |
      | listeners             | None                             |
      | load_balancers        | None                             |
      | location              | None                             |
      | name                  | None                             |
      | networks              | 100                              |
      | per-volume-gigabytes  | -1                               |
      | pools                 | None                             |
      | ports                 | 500                              |
      | project               | e436339c7f9c476cb3120cf3b9667377 |
      | project_id            | None                             |
      | properties            | 128                              |
      | ram                   | 51200                            |
      | rbac_policies         | 10                               |
      | routers               | 10                               |
      | secgroup-rules        | 100                              |
      | secgroups             | 10                               |
      | server-group-members  | 10                               |
      | server-groups         | 10                               |
      | snapshots             | 10                               |
      | snapshots_lvmdriver-1 | -1                               |
      | subnet_pools          | -1                               |
      | subnets               | 100                              |
      | volumes               | 10                               |
      | volumes_lvmdriver-1   | -1                               |
      +-----------------------+----------------------------------+

.. note::

   Listing default quotas with the OpenStack command-line client
   returns all quotas for networking and other services. Previously,
   the :command:`neutron quota-show --tenant_id` command would list only
   networking quotas.

@ -1,50 +0,0 @@

==================
Evacuate instances
==================

If a hardware malfunction or other error causes a cloud compute node to fail,
you can evacuate instances to make them available again. You can optionally
include the target host on the :command:`nova evacuate` command. If you omit
the host, the scheduler chooses the target host.

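For example, a minimal invocation that lets the scheduler pick the target
host looks like this (``EVACUATED_SERVER_NAME`` is a placeholder for the
server name or ID):

.. code-block:: console

   $ nova evacuate EVACUATED_SERVER_NAME
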
To preserve user data on the server disk, configure shared storage on the
target host. When you evacuate the instance, Compute detects whether shared
storage is available on the target host. Also, you must validate that the
current VM host is not operational. Otherwise, the evacuation fails.

#. To find a host for the evacuated instance, list all hosts:

   .. code-block:: console

      $ openstack host list

#. Evacuate the instance. You can use the ``--password PWD`` option
   to pass the instance password to the command. If you do not specify a
   password, the command generates and prints one after it finishes
   successfully. The following command evacuates a server from a failed host
   to ``HOST_B``.

   .. code-block:: console

      $ nova evacuate EVACUATED_SERVER_NAME HOST_B

   The command rebuilds the instance from the original image or volume and
   returns a password. The command preserves the original configuration, which
   includes the instance ID, name, uid, IP address, and so on.

   .. code-block:: console

      +-----------+--------------+
      | Property  | Value        |
      +-----------+--------------+
      | adminPass | kRAJpErnT4xZ |
      +-----------+--------------+

#. To preserve the user disk data on the evacuated server, deploy Compute
   with a shared file system. To configure your system, see
   :ref:`section_configuring-compute-migrations`.
   The following example does not change the password.

   .. code-block:: console

      $ nova evacuate EVACUATED_SERVER_NAME HOST_B --on-shared-storage

@ -1,248 +0,0 @@

=======================
Manage project security
=======================

Security groups are sets of IP filter rules that are applied to all
project instances, which define networking access to the instance. Group
rules are project specific; project members can edit the default rules
for their group and add new rule sets.

All projects have a ``default`` security group which is applied to any
instance that has no other defined security group. Unless you change the
default, this security group denies all incoming traffic and allows only
outgoing traffic to your instance.

You can use the ``allow_same_net_traffic`` option in the
``/etc/nova/nova.conf`` file to globally control whether the rules apply
to hosts which share a network.

If set to:

- ``True`` (default), hosts on the same subnet are not filtered and are
  allowed to pass all types of traffic between them. On a flat network,
  this allows all instances from all projects unfiltered communication.
  With VLAN networking, this allows access between instances within the
  same project. You can also simulate this setting by configuring the
  default security group to allow all traffic from the subnet.

- ``False``, security groups are enforced for all connections.

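For example, to enforce security groups for every connection, the relevant
``nova.conf`` stanza would be just this (a minimal sketch of the single
option):

.. code-block:: ini

   [DEFAULT]
   # Filter traffic even between hosts that share a network
   allow_same_net_traffic = False
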
Additionally, the maximum number of rules per security group is
controlled by the ``security_group_rules`` quota, and the number of allowed
security groups per project is controlled by the ``security_groups``
quota (see :ref:`manage-quotas`).

List and view current security groups
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

From the command-line you can get a list of security groups for the
project, using the :command:`openstack` and :command:`nova` commands:

#. Ensure your system variables are set for the user and project for
   which you are checking security group rules. For example:

   .. code-block:: console

      export OS_USERNAME=demo00
      export OS_TENANT_NAME=tenant01

#. Output security groups, as follows:

   .. code-block:: console

      $ openstack security group list
      +--------------------------------------+---------+-------------+
      | Id                                   | Name    | Description |
      +--------------------------------------+---------+-------------+
      | 73580272-d8fa-4927-bd55-c85e43bc4877 | default | default     |
      | 6777138a-deb7-4f10-8236-6400e7aff5b0 | open    | all ports   |
      +--------------------------------------+---------+-------------+

#. View the details of a group, as follows:

   .. code-block:: console

      $ openstack security group rule list GROUPNAME

   For example:

   .. code-block:: console

      $ openstack security group rule list open
      +--------------------------------------+-------------+-----------+-----------------+-----------------------+
      | ID                                   | IP Protocol | IP Range  | Port Range      | Remote Security Group |
      +--------------------------------------+-------------+-----------+-----------------+-----------------------+
      | 353d0611-3f67-4848-8222-a92adbdb5d3a | udp         | 0.0.0.0/0 | 1:65535         | None                  |
      | 63536865-e5b6-4df1-bac5-ca6d97d8f54d | tcp         | 0.0.0.0/0 | 1:65535         | None                  |
      +--------------------------------------+-------------+-----------+-----------------+-----------------------+

   These rules are allow type rules, as the default is deny. The second
   column shows the IP protocol (one of icmp, tcp, or udp), the third
   column shows the IP range in CIDR format, and the fourth column shows
   the affected port range. This example shows the full
   port range for all protocols allowed from all IPs.

Create a security group
~~~~~~~~~~~~~~~~~~~~~~~

When adding a new security group, you should pick a descriptive but
brief name. This name shows up in brief descriptions of the instances
that use it where the longer description field often does not. For
example, seeing that an instance is using security group "http" is much
easier to understand than "bobs\_group" or "secgrp1".

#. Ensure your system variables are set for the user and project for
   which you are creating security group rules.

#. Add the new security group, as follows:

   .. code-block:: console

      $ openstack security group create GroupName --description Description

   For example:

   .. code-block:: console

      $ openstack security group create global_http --description "Allows Web traffic anywhere on the Internet."
      +-----------------+---------------------------------------------------------------------------------------------------------------------+
      | Field           | Value                                                                                                               |
      +-----------------+---------------------------------------------------------------------------------------------------------------------+
      | created_at      | 2016-11-03T13:50:53Z                                                                                                |
      | description     | Allows Web traffic anywhere on the Internet.                                                                        |
      | headers         |                                                                                                                     |
      | id              | c0b92b20-4575-432a-b4a9-eaf2ad53f696                                                                                |
      | name            | global_http                                                                                                         |
      | project_id      | 5669caad86a04256994cdf755df4d3c1                                                                                    |
      | project_id      | 5669caad86a04256994cdf755df4d3c1                                                                                    |
      | revision_number | 1                                                                                                                   |
      | rules           | created_at='2016-11-03T13:50:53Z', direction='egress', ethertype='IPv4', id='4d8cec94-e0ee-4c20-9f56-8fb67c21e4df', |
      |                 | project_id='5669caad86a04256994cdf755df4d3c1', revision_number='1', updated_at='2016-11-03T13:50:53Z'               |
      |                 | created_at='2016-11-03T13:50:53Z', direction='egress', ethertype='IPv6', id='31be2ad1-be14-4aef-9492-ecebede2cf12', |
      |                 | project_id='5669caad86a04256994cdf755df4d3c1', revision_number='1', updated_at='2016-11-03T13:50:53Z'               |
      | updated_at      | 2016-11-03T13:50:53Z                                                                                                |
      +-----------------+---------------------------------------------------------------------------------------------------------------------+

#. Add a new group rule, as follows:

   .. code-block:: console

      $ openstack security group rule create SEC_GROUP_NAME --protocol PROTOCOL --dst-port FROM_PORT:TO_PORT --remote-ip CIDR

   The ``SEC_GROUP_NAME`` argument is positional, and the
   ``FROM_PORT:TO_PORT`` range passed to ``--dst-port`` specifies the local
   port range that connections are allowed to access, not the source and
   destination ports of the connection. For example:

   .. code-block:: console

      $ openstack security group rule create global_http --protocol tcp --dst-port 80:80 --remote-ip 0.0.0.0/0
      +-------------------+--------------------------------------+
      | Field             | Value                                |
      +-------------------+--------------------------------------+
      | created_at        | 2016-11-06T14:02:00Z                 |
      | description       |                                      |
      | direction         | ingress                              |
      | ethertype         | IPv4                                 |
      | headers           |                                      |
      | id                | 2ba06233-d5c8-43eb-93a9-8eaa94bc9eb5 |
      | port_range_max    | 80                                   |
      | port_range_min    | 80                                   |
      | project_id        | 5669caad86a04256994cdf755df4d3c1     |
      | project_id        | 5669caad86a04256994cdf755df4d3c1     |
      | protocol          | tcp                                  |
      | remote_group_id   | None                                 |
      | remote_ip_prefix  | 0.0.0.0/0                            |
      | revision_number   | 1                                    |
      | security_group_id | c0b92b20-4575-432a-b4a9-eaf2ad53f696 |
      | updated_at        | 2016-11-06T14:02:00Z                 |
      +-------------------+--------------------------------------+

   You can create complex rule sets by creating additional rules. For
   example, if you want to pass both HTTP and HTTPS traffic, run:

   .. code-block:: console

      $ openstack security group rule create global_http --protocol tcp --dst-port 443:443 --remote-ip 0.0.0.0/0
      +-------------------+--------------------------------------+
      | Field             | Value                                |
      +-------------------+--------------------------------------+
      | created_at        | 2016-11-06T14:09:20Z                 |
      | description       |                                      |
      | direction         | ingress                              |
      | ethertype         | IPv4                                 |
      | headers           |                                      |
      | id                | 821c3ef6-9b21-426b-be5b-c8a94c2a839c |
      | port_range_max    | 443                                  |
      | port_range_min    | 443                                  |
      | project_id        | 5669caad86a04256994cdf755df4d3c1     |
      | project_id        | 5669caad86a04256994cdf755df4d3c1     |
      | protocol          | tcp                                  |
      | remote_group_id   | None                                 |
      | remote_ip_prefix  | 0.0.0.0/0                            |
      | revision_number   | 1                                    |
      | security_group_id | c0b92b20-4575-432a-b4a9-eaf2ad53f696 |
      | updated_at        | 2016-11-06T14:09:20Z                 |
      +-------------------+--------------------------------------+

   Despite only outputting the newly added rule, this operation is
   additive (both rules are created and enforced).

#. View all rules for the new security group, as follows:

   .. code-block:: console

      $ openstack security group rule list global_http
      +--------------------------------------+-------------+-----------+-----------------+-----------------------+
      | ID                                   | IP Protocol | IP Range  | Port Range      | Remote Security Group |
      +--------------------------------------+-------------+-----------+-----------------+-----------------------+
      | 353d0611-3f67-4848-8222-a92adbdb5d3a | tcp         | 0.0.0.0/0 | 80:80           | None                  |
      | 63536865-e5b6-4df1-bac5-ca6d97d8f54d | tcp         | 0.0.0.0/0 | 443:443         | None                  |
      +--------------------------------------+-------------+-----------+-----------------+-----------------------+

Delete a security group
~~~~~~~~~~~~~~~~~~~~~~~

#. Ensure your system variables are set for the user and project for
   which you are deleting a security group.

#. Delete the new security group, as follows:

   .. code-block:: console

      $ openstack security group delete GROUPNAME

   For example:

   .. code-block:: console

      $ openstack security group delete global_http

Create security group rules for a cluster of instances
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Source Groups are a special, dynamic way of defining the CIDR of allowed
sources. The user specifies a Source Group (Security Group name), and
all the user's other Instances using the specified Source Group are
selected dynamically. This alleviates the need for individual rules to
allow each new member of the cluster.

#. Make sure to set the system variables for the user and project for
   which you are creating a security group rule.

#. Add a source group, as follows:

   .. code-block:: console

      $ openstack security group rule create secGroupName --remote-group source-group \
        --protocol ip-protocol --dst-port from-port:to-port

   For example:

   .. code-block:: console

      $ openstack security group rule create cluster --remote-group global_http \
        --protocol tcp --dst-port 22:22

   The ``cluster`` rule allows SSH access from any other instance that
   uses the ``global_http`` group.

@ -1,73 +0,0 @@

=======================
Manage Compute services
=======================

You can enable and disable Compute services. The following
examples disable and enable the ``nova-compute`` service.

#. List the Compute services:

   .. code-block:: console

      $ openstack compute service list
      +----+--------------+------------+----------+---------+-------+--------------+
      | ID | Binary       | Host       | Zone     | Status  | State | Updated At   |
      +----+--------------+------------+----------+---------+-------+--------------+
      | 4  | nova-        | controller | internal | enabled | up    | 2016-12-20T0 |
      |    | consoleauth  |            |          |         |       | 0:44:48.0000 |
      |    |              |            |          |         |       | 00           |
      | 5  | nova-        | controller | internal | enabled | up    | 2016-12-20T0 |
      |    | scheduler    |            |          |         |       | 0:44:48.0000 |
      |    |              |            |          |         |       | 00           |
      | 6  | nova-        | controller | internal | enabled | up    | 2016-12-20T0 |
      |    | conductor    |            |          |         |       | 0:44:54.0000 |
      |    |              |            |          |         |       | 00           |
      | 9  | nova-compute | compute    | nova     | enabled | up    | 2016-10-21T0 |
      |    |              |            |          |         |       | 2:35:03.0000 |
      |    |              |            |          |         |       | 00           |
      +----+--------------+------------+----------+---------+-------+--------------+

#. Disable a nova service:

   .. code-block:: console

      $ openstack compute service set --disable --disable-reason "trial log" nova nova-compute
      +----------+--------------+----------+-------------------+
      | Host     | Binary       | Status   | Disabled Reason   |
      +----------+--------------+----------+-------------------+
      | compute  | nova-compute | disabled | trial log         |
      +----------+--------------+----------+-------------------+

#. Check the service list:

   .. code-block:: console

      $ openstack compute service list
      +----+--------------+------------+----------+----------+-------+--------------+
      | ID | Binary       | Host       | Zone     | Status   | State | Updated At   |
      +----+--------------+------------+----------+----------+-------+--------------+
      | 4  | nova-        | controller | internal | enabled  | up    | 2016-12-20T0 |
      |    | consoleauth  |            |          |          |       | 0:44:48.0000 |
      |    |              |            |          |          |       | 00           |
      | 5  | nova-        | controller | internal | enabled  | up    | 2016-12-20T0 |
      |    | scheduler    |            |          |          |       | 0:44:48.0000 |
      |    |              |            |          |          |       | 00           |
      | 6  | nova-        | controller | internal | enabled  | up    | 2016-12-20T0 |
      |    | conductor    |            |          |          |       | 0:44:54.0000 |
      |    |              |            |          |          |       | 00           |
      | 9  | nova-compute | compute    | nova     | disabled | up    | 2016-10-21T0 |
      |    |              |            |          |          |       | 2:35:03.0000 |
      |    |              |            |          |          |       | 00           |
      +----+--------------+------------+----------+----------+-------+--------------+

#. Enable the service:

   .. code-block:: console

      $ openstack compute service set --enable nova nova-compute
      +----------+--------------+---------+
      | Host     | Binary       | Status  |
      +----------+--------------+---------+
      | compute  | nova-compute | enabled |
      +----------+--------------+---------+

@ -1,24 +0,0 @@
=============================================
Consider NUMA topology when booting instances
=============================================

NUMA topology can exist on both the physical hardware of the host, and the
virtual hardware of the instance. OpenStack Compute uses libvirt to tune
instances to take advantage of NUMA topologies. The libvirt driver boot
process looks at the NUMA topology field of both the instance and the host it
is being booted on, and uses that information to generate an appropriate
configuration.

If the host is NUMA capable, but the instance has not requested a NUMA
topology, Compute attempts to pack the instance into a single cell.
If this fails, Compute does not try any further placements.

If the host is NUMA capable, and the instance has requested a specific NUMA
topology, Compute will try to pin the vCPUs of different NUMA cells
on the instance to the corresponding NUMA cells on the host. It will also
expose the NUMA topology of the instance to the guest OS.

If you want Compute to pin a particular vCPU as part of this process,
set the ``vcpu_pin_set`` parameter in the ``nova.conf`` configuration
file. For more information about the ``vcpu_pin_set`` parameter, see the
Configuration Reference Guide.
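
For illustration, a minimal ``nova.conf`` entry might look like the
following; the CPU numbers are example values, not recommendations for
any particular host:

.. code-block:: ini

   [DEFAULT]
   # Restrict guest vCPUs to host CPUs 4-15, excluding CPU 8.
   # Ranges use "-", exclusions use "^".
   vcpu_pin_set = 4-15,^8
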
@ -1,76 +0,0 @@
=========================================
Select hosts where instances are launched
=========================================

With the appropriate permissions, you can select the
host where instances are launched and the roles that can boot instances
on that host.

#. To select the host where instances are launched, use
   the ``--availability-zone ZONE:HOST:NODE`` parameter on the
   :command:`openstack server create` command.

   For example:

   .. code-block:: console

      $ openstack server create --image IMAGE --flavor m1.tiny \
        --key-name KEY --availability-zone ZONE:HOST:NODE \
        --nic net-id=UUID SERVER

   .. note::

      HOST and NODE are optional parameters. In such cases,
      use ``--availability-zone ZONE::NODE``,
      ``--availability-zone ZONE:HOST``, or
      ``--availability-zone ZONE``.

#. To specify which roles can launch an instance on a
   specified host, enable the ``create:forced_host`` option in
   the ``policy.json`` file (see the example policy snippet after this
   procedure). By default, this option is
   enabled for only the admin role. If you see ``Forbidden (HTTP 403)``
   in return, then you are not using admin credentials.

#. To view the list of valid zones, use the
   :command:`openstack availability zone list` command.

   .. code-block:: console

      $ openstack availability zone list
      +-----------+-------------+
      | Zone Name | Zone Status |
      +-----------+-------------+
      | zone1     | available   |
      | zone2     | available   |
      +-----------+-------------+

#. To view the list of valid compute hosts, use the
   :command:`openstack host list` command.

   .. code-block:: console

      $ openstack host list
      +----------------+-------------+----------+
      | Host Name      | Service     | Zone     |
      +----------------+-------------+----------+
      | compute01      | compute     | nova     |
      | compute02      | compute     | nova     |
      +----------------+-------------+----------+

#. To view the list of valid compute nodes, use the
   :command:`openstack hypervisor list` command.

   .. code-block:: console

      $ openstack hypervisor list
      +----+---------------------+
      | ID | Hypervisor Hostname |
      +----+---------------------+
      | 1  | server2             |
      | 2  | server3             |
      | 3  | server4             |
      +----+---------------------+
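
As a sketch of the policy change mentioned in the second step, the rule
in ``policy.json`` would resemble the following. The exact policy key
differs between releases (older releases used ``compute:create:forced_host``),
so verify it against the policy file shipped with your version of Compute:

.. code-block:: json

   {
       "os_compute_api:servers:create:forced_host": "rule:admin_api"
   }
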
@ -1,78 +0,0 @@
.. _cli-os-migrate-cfg-ssh:

===================================
Configure SSH between compute nodes
===================================

If you are resizing or migrating an instance
between hypervisors, you might encounter an
SSH (Permission denied) error. Ensure that
each node is configured with SSH key authentication
so that the Compute service can use SSH
to move disks to other nodes.

To share a key pair between compute nodes,
complete the following steps:

#. On the first node, obtain a key pair
   (public key and private key). Use the root key pair
   in the ``/root/.ssh/id_rsa`` and
   ``/root/.ssh/id_rsa.pub`` files, or
   generate a new key pair (see the sketch after this procedure).

#. Run :command:`setenforce 0` to put SELinux into
   permissive mode.

#. Enable login abilities for the nova user:

   .. code-block:: console

      # usermod -s /bin/bash nova

   Switch to the nova account.

   .. code-block:: console

      # su nova

#. As root, create the directory that is needed by SSH and place
   the private key that you obtained in step 1 into this
   directory:

   .. code-block:: console

      # mkdir -p /var/lib/nova/.ssh
      # cp <private key> /var/lib/nova/.ssh/id_rsa
      # echo 'StrictHostKeyChecking no' >> /var/lib/nova/.ssh/config
      # chmod 600 /var/lib/nova/.ssh/id_rsa /var/lib/nova/.ssh/authorized_keys

#. Repeat steps 2-4 on each node.

   .. note::

      The nodes must share the same key pair, so do not generate
      a new key pair for any subsequent nodes.

#. From the first node, where you created the SSH key, run:

   .. code-block:: console

      # ssh-copy-id -i <pub key> nova@remote-host

   This command installs your public key in the remote machine's
   ``authorized_keys`` file.

#. Ensure that the nova user can now log in to each node without
   using a password:

   .. code-block:: console

      # su nova
      $ ssh *computeNodeAddress*
      $ exit

#. As root on each node, restart both libvirt and the Compute services:

   .. code-block:: console

      # systemctl restart libvirtd.service
      # systemctl restart openstack-nova-compute.service
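
If you generate a new key pair in step 1 instead of reusing the root
key, a minimal sketch follows; the empty passphrase (``-N ''``) is an
assumption so that the Compute service can use the key
non-interactively:

.. code-block:: console

   # ssh-keygen -t rsa -N '' -f /root/.ssh/id_rsa
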
@ -1,84 +0,0 @@
=================================================
Migrate a single instance to another compute host
=================================================

When you want to move an instance from one compute host to another,
you can use the :command:`openstack server migrate` command. The scheduler
chooses the destination compute host based on its settings. This process does
not assume that the instance has shared storage available on the
target host. If you are using SSH tunneling, you must ensure that
each node is configured with SSH key authentication so that the
Compute service can use SSH to move disks to other nodes.
For more information, see :ref:`cli-os-migrate-cfg-ssh`.

#. To list the VMs you want to migrate, run:

   .. code-block:: console

      $ openstack server list

#. Use the :command:`openstack server migrate` command.

   .. code-block:: console

      $ openstack server migrate --live TARGET_HOST VM_INSTANCE

#. To migrate an instance and watch the status, use this example script:

   .. code-block:: bash

      #!/bin/bash

      # Provide usage
      usage() {
          echo "Usage: $0 VM_ID"
          exit 1
      }

      [[ $# -eq 0 ]] && usage

      # Migrate the VM to an alternate hypervisor
      echo -n "Migrating instance to alternate host"
      VM_ID=$1
      openstack server migrate $VM_ID
      VM_OUTPUT=$(openstack server show $VM_ID)
      VM_STATUS=$(echo "$VM_OUTPUT" | grep status | awk '{print $4}')
      while [[ "$VM_STATUS" != "VERIFY_RESIZE" ]]; do
          echo -n "."
          sleep 2
          VM_OUTPUT=$(openstack server show $VM_ID)
          VM_STATUS=$(echo "$VM_OUTPUT" | grep status | awk '{print $4}')
      done
      nova resize-confirm $VM_ID
      echo " instance migrated and resized."
      echo

      # Show the details for the VM
      echo "Updated instance details:"
      openstack server show $VM_ID

      # Pause to allow users to examine VM details
      read -p "Pausing, press <enter> to exit."

.. note::

   If you see the following error, you are either
   running the command with the wrong credentials,
   such as a non-admin user, or the ``policy.json``
   file prevents migration for your user:

   ``ERROR (Forbidden): Policy doesn't allow compute_extension:admin_actions:migrate
   to be performed. (HTTP 403)``

.. note::

   If you see an error similar to the following message, SSH
   tunneling was not set up between the compute nodes:

   ``ProcessExecutionError: Unexpected error while running command.``

   ``Stderr: u Host key verification failed.\r\n``

The instance is booted from a new host, but preserves its configuration
including instance ID, name, IP address, any metadata, and other
properties.

@ -1,298 +0,0 @@
=============================
Manage Compute service quotas
=============================

As an administrative user, you can use the :command:`nova quota-*`
commands, which are provided by the ``python-novaclient``
package, to update the Compute service quotas for a specific project or
project user, as well as update the quota defaults for a new project.

**Compute quota descriptions**

.. list-table::
   :header-rows: 1
   :widths: 10 40

   * - Quota name
     - Description
   * - cores
     - Number of instance cores (VCPUs) allowed per project.
   * - fixed-ips
     - Number of fixed IP addresses allowed per project. This number
       must be equal to or greater than the number of allowed
       instances.
   * - floating-ips
     - Number of floating IP addresses allowed per project.
   * - injected-file-content-bytes
     - Number of content bytes allowed per injected file.
   * - injected-file-path-bytes
     - Length of injected file path.
   * - injected-files
     - Number of injected files allowed per project.
   * - instances
     - Number of instances allowed per project.
   * - key-pairs
     - Number of key pairs allowed per user.
   * - metadata-items
     - Number of metadata items allowed per instance.
   * - ram
     - Megabytes of instance RAM allowed per project.
   * - security-groups
     - Number of security groups per project.
   * - security-group-rules
     - Number of security group rules per project.
   * - server-groups
     - Number of server groups per project.
   * - server-group-members
     - Number of servers per server group.

View and update Compute quotas for a project
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To view and update default quota values
---------------------------------------

#. List all default quotas for all projects:

   .. code-block:: console

      $ openstack quota show --default

      +-----------------------------+-------+
      | Quota                       | Limit |
      +-----------------------------+-------+
      | instances                   | 10    |
      | cores                       | 20    |
      | ram                         | 51200 |
      | floating_ips                | 10    |
      | fixed_ips                   | -1    |
      | metadata_items              | 128   |
      | injected_files              | 5     |
      | injected_file_content_bytes | 10240 |
      | injected_file_path_bytes    | 255   |
      | key_pairs                   | 100   |
      | security_groups             | 10    |
      | security_group_rules        | 20    |
      | server_groups               | 10    |
      | server_group_members        | 10    |
      +-----------------------------+-------+

#. Update a default value for a new project, for example:

   .. code-block:: console

      $ openstack quota set --instances 15 default

To view quota values for an existing project
--------------------------------------------

#. List the currently set quota values for a project:

   .. code-block:: console

      $ openstack quota show PROJECT_NAME

      +-----------------------------+-------+
      | Quota                       | Limit |
      +-----------------------------+-------+
      | instances                   | 10    |
      | cores                       | 20    |
      | ram                         | 51200 |
      | floating_ips                | 10    |
      | fixed_ips                   | -1    |
      | metadata_items              | 128   |
      | injected_files              | 5     |
      | injected_file_content_bytes | 10240 |
      | injected_file_path_bytes    | 255   |
      | key_pairs                   | 100   |
      | security_groups             | 10    |
      | security_group_rules        | 20    |
      | server_groups               | 10    |
      | server_group_members        | 10    |
      +-----------------------------+-------+

To update quota values for an existing project
----------------------------------------------

#. Obtain the project ID.

   .. code-block:: console

      $ project=$(openstack project show -f value -c id PROJECT_NAME)

#. Update a particular quota value.

   .. code-block:: console

      $ openstack quota set --QUOTA_NAME QUOTA_VALUE PROJECT_OR_CLASS

   For example:

   .. code-block:: console

      $ openstack quota set --floating-ips 20 PROJECT_OR_CLASS
      $ openstack quota show PROJECT_NAME
      +-----------------------------+-------+
      | Quota                       | Limit |
      +-----------------------------+-------+
      | instances                   | 10    |
      | cores                       | 20    |
      | ram                         | 51200 |
      | floating_ips                | 20    |
      | fixed_ips                   | -1    |
      | metadata_items              | 128   |
      | injected_files              | 5     |
      | injected_file_content_bytes | 10240 |
      | injected_file_path_bytes    | 255   |
      | key_pairs                   | 100   |
      | security_groups             | 10    |
      | security_group_rules        | 20    |
      | server_groups               | 10    |
      | server_group_members        | 10    |
      +-----------------------------+-------+

   .. note::

      To view a list of options for the :command:`openstack quota set` command,
      run:

      .. code-block:: console

         $ openstack help quota set

View and update Compute quotas for a project user
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To view quota values for a project user
---------------------------------------

#. Place the user ID in a usable variable.

   .. code-block:: console

      $ projectUser=$(openstack user show -f value -c id USER_NAME)

#. Place the user's project ID in a usable variable, as follows:

   .. code-block:: console

      $ project=$(openstack project show -f value -c id PROJECT_NAME)

#. List the currently set quota values for a project user.

   .. code-block:: console

      $ nova quota-show --user $projectUser --tenant $project

   For example:

   .. code-block:: console

      $ nova quota-show --user $projectUser --tenant $project
      +-----------------------------+-------+
      | Quota                       | Limit |
      +-----------------------------+-------+
      | instances                   | 10    |
      | cores                       | 20    |
      | ram                         | 51200 |
      | floating_ips                | 20    |
      | fixed_ips                   | -1    |
      | metadata_items              | 128   |
      | injected_files              | 5     |
      | injected_file_content_bytes | 10240 |
      | injected_file_path_bytes    | 255   |
      | key_pairs                   | 100   |
      | security_groups             | 10    |
      | security_group_rules        | 20    |
      | server_groups               | 10    |
      | server_group_members        | 10    |
      +-----------------------------+-------+

To update quota values for a project user
-----------------------------------------

#. Place the user ID in a usable variable.

   .. code-block:: console

      $ projectUser=$(openstack user show -f value -c id USER_NAME)

#. Place the user's project ID in a usable variable, as follows:

   .. code-block:: console

      $ project=$(openstack project show -f value -c id PROJECT_NAME)

#. Update a particular quota value, as follows:

   .. code-block:: console

      $ nova quota-update --user $projectUser --QUOTA_NAME QUOTA_VALUE $project

   For example:

   .. code-block:: console

      $ nova quota-update --user $projectUser --floating-ips 12 $project
      $ nova quota-show --user $projectUser --tenant $project
      +-----------------------------+-------+
      | Quota                       | Limit |
      +-----------------------------+-------+
      | instances                   | 10    |
      | cores                       | 20    |
      | ram                         | 51200 |
      | floating_ips                | 12    |
      | fixed_ips                   | -1    |
      | metadata_items              | 128   |
      | injected_files              | 5     |
      | injected_file_content_bytes | 10240 |
      | injected_file_path_bytes    | 255   |
      | key_pairs                   | 100   |
      | security_groups             | 10    |
      | security_group_rules        | 20    |
      | server_groups               | 10    |
      | server_group_members        | 10    |
      +-----------------------------+-------+

   .. note::

      To view a list of options for the :command:`nova quota-update` command,
      run:

      .. code-block:: console

         $ nova help quota-update

To display the current quota usage for a project user
-----------------------------------------------------

Use :command:`nova limits` to get a list of the
current quota values and the current quota usage:

.. code-block:: console

   $ nova limits --tenant PROJECT_NAME

   +------+-----+-------+--------+------+----------------+
   | Verb | URI | Value | Remain | Unit | Next_Available |
   +------+-----+-------+--------+------+----------------+
   +------+-----+-------+--------+------+----------------+

   +--------------------+------+-------+
   | Name               | Used | Max   |
   +--------------------+------+-------+
   | Cores              | 0    | 20    |
   | Instances          | 0    | 10    |
   | Keypairs           | -    | 100   |
   | Personality        | -    | 5     |
   | Personality Size   | -    | 10240 |
   | RAM                | 0    | 51200 |
   | Server Meta        | -    | 128   |
   | ServerGroupMembers | -    | 10    |
   | ServerGroups       | 0    | 10    |
   +--------------------+------+-------+

.. note::

   The :command:`nova limits` command generates the empty first
   table as a result of the Compute API, which prints an
   empty list for backward compatibility purposes.

@ -1,61 +0,0 @@
.. _manage-quotas:

=============
Manage quotas
=============

To prevent system capacities from being exhausted without
notification, you can set up quotas. Quotas are operational
limits. For example, the number of gigabytes allowed for each
project can be controlled so that cloud resources are optimized.
Quotas can be enforced at both the project
and the project-user level.

Using the command-line interface, you can manage quotas for
the OpenStack Compute service, the OpenStack Block Storage service,
and the OpenStack Networking service.

The cloud operator typically changes default values because a
project requires more than ten volumes or 1 TB on a compute
node.

.. note::

   To view all projects, run:

   .. code-block:: console

      $ openstack project list
      +----------------------------------+----------+
      | ID                               | Name     |
      +----------------------------------+----------+
      | e66d97ac1b704897853412fc8450f7b9 | admin    |
      | bf4a37b885fe46bd86e999e50adad1d3 | services |
      | 21bd1c7c95234fd28f589b60903606fa | tenant01 |
      | f599c5cd1cba4125ae3d7caed08e288c | tenant02 |
      +----------------------------------+----------+

   To display all current users for a project, run:

   .. code-block:: console

      $ openstack user list --project PROJECT_NAME
      +----------------------------------+--------+
      | ID                               | Name   |
      +----------------------------------+--------+
      | ea30aa434ab24a139b0e85125ec8a217 | demo00 |
      | 4f8113c1d838467cad0c2f337b3dfded | demo01 |
      +----------------------------------+--------+

Use :samp:`openstack quota show {PROJECT_NAME}` to list all quotas for a
project.

Use :samp:`openstack quota set {PROJECT_NAME} {--parameters}` to set quota
values.

.. toctree::
   :maxdepth: 2

   cli-set-compute-quotas.rst
   cli-cinder-quotas.rst
   cli-networking-advanced-quotas.rst

@ -1,22 +0,0 @@
==============================
OpenStack command-line clients
==============================

.. toctree::
   :maxdepth: 2

   common/cli-overview.rst
   common/cli-install-openstack-command-line-clients.rst
   common/cli-discover-version-number-for-a-client.rst
   common/cli-set-environment-variables-using-openstack-rc.rst
   cli-manage-projects-users-and-roles.rst
   cli-nova-manage-projects-security.rst
   cli-manage-services.rst
   common/cli-manage-images.rst
   common/cli-manage-volumes.rst
   cli-manage-shares.rst
   cli-manage-flavors.rst
   cli-admin-manage-environment.rst
   cli-set-quotas.rst
   cli-analyzing-log-files-with-swift.rst
   cli-cinder-scheduling.rst

@ -1,62 +0,0 @@
.. _admin-password-injection:

====================================
Injecting the administrator password
====================================

Compute can generate a random administrator (root) password and inject
that password into an instance. If this feature is enabled, users can
run :command:`ssh` to an instance without an :command:`ssh` keypair.
The random password appears in the output of the
:command:`openstack server create` command.
You can also view and set the admin password from the dashboard.

**Password injection using the dashboard**

By default, the dashboard will display the ``admin`` password and allow
the user to modify it.

If you do not want to support password injection, disable the password
fields by editing the dashboard's ``local_settings.py`` file.

.. code-block:: none

   OPENSTACK_HYPERVISOR_FEATURES = {
       ...
       'can_set_password': False,
   }

**Password injection on libvirt-based hypervisors**

For hypervisors that use the libvirt back end (such as KVM, QEMU, and
LXC), admin password injection is disabled by default. To enable it, set
this option in ``/etc/nova/nova.conf``:

.. code-block:: ini

   [libvirt]
   inject_password=true

When enabled, Compute will modify the password of the admin account by
editing the ``/etc/shadow`` file inside the virtual machine instance.

.. note::

   Users can only use :command:`ssh` to access the instance by using the admin
   password if the virtual machine image is a Linux distribution, and it has
   been configured to allow users to use :command:`ssh` as the root user. This
   is not the case for `Ubuntu cloud images <http://uec-images.ubuntu.com>`_,
   which, by default, do not allow users to use :command:`ssh` to access the
   root account.

**Password injection and XenAPI (XenServer/XCP)**

When using the XenAPI hypervisor back end, Compute uses the XenAPI agent
to inject passwords into guests. The virtual machine image must be
configured with the agent for password injection to work.

**Password injection and Windows images (all hypervisors)**

For Windows virtual machines, configure the Windows image to retrieve
the admin password on boot by installing an agent such as
`cloudbase-init <https://cloudbase.it/cloudbase-init>`_.

@ -1,28 +0,0 @@
======================
Advanced configuration
======================

OpenStack clouds run on platforms that differ greatly in the capabilities that
they provide. By default, the Compute service seeks to abstract the underlying
hardware that it runs on, rather than exposing specifics about the underlying
host platforms. This abstraction manifests itself in many ways. For example,
rather than exposing the types and topologies of CPUs running on hosts, the
service exposes a number of generic CPUs (virtual CPUs, or vCPUs) and allows
for overcommitting of these. In a similar manner, rather than exposing the
individual types of network devices available on hosts, generic
software-powered network ports are provided. These features are designed to
allow high resource utilization and allow the service to provide a generic,
cost-effective, and highly scalable cloud upon which to build applications.

This abstraction is beneficial for most workloads. However, there are some
workloads where determinism and per-instance performance are important, if
not vital. In these cases, instances can be expected to deliver near-native
performance. The Compute service provides features to improve individual
instance performance for these kinds of workloads.

.. toctree::
   :maxdepth: 2

   compute-pci-passthrough
   compute-cpu-topologies
   compute-huge-pages

@ -1,370 +0,0 @@
===================
System architecture
===================

OpenStack Compute contains several main components.

- The :term:`cloud controller` represents the global state and interacts with
  the other components. The ``API server`` acts as the web services
  front end for the cloud controller. The ``compute controller``
  provides compute server resources and usually also contains the
  Compute service.

- The ``object store`` is an optional component that provides storage
  services; you can also use OpenStack Object Storage instead.

- An ``auth manager`` provides authentication and authorization
  services when used with the Compute system; you can also use
  OpenStack Identity as a separate authentication service instead.

- A ``volume controller`` provides fast and permanent block-level
  storage for the compute servers.

- The ``network controller`` provides virtual networks to enable
  compute servers to interact with each other and with the public
  network. You can also use OpenStack Networking instead.

- The ``scheduler`` is used to select the most suitable compute
  controller to host an instance.

Compute uses a messaging-based, ``shared nothing`` architecture. All
major components exist on multiple servers, including the compute,
volume, and network controllers, and the Object Storage or Image service.
The state of the entire system is stored in a database. The cloud
controller communicates with the internal object store using HTTP, but
it communicates with the scheduler, network controller, and volume
controller using Advanced Message Queuing Protocol (AMQP). To avoid
blocking a component while waiting for a response, Compute uses
asynchronous calls, with a callback that is triggered when a response is
received.

Hypervisors
~~~~~~~~~~~

Compute controls hypervisors through an API server. Selecting the best
hypervisor to use can be difficult, and you must take budget, resource
constraints, supported features, and required technical specifications
into account. However, the majority of OpenStack development is done on
systems using KVM and Xen-based hypervisors. For a detailed list of
features and support across different hypervisors, see the
`Feature Support Matrix
<https://docs.openstack.org/developer/nova/support-matrix.html>`_.

You can also orchestrate clouds using multiple hypervisors in different
availability zones. Compute supports the following hypervisors:

- `Baremetal <https://wiki.openstack.org/wiki/Ironic>`__

- `Docker <https://www.docker.io>`__

- `Hyper-V <http://www.microsoft.com/en-us/server-cloud/hyper-v-server/default.aspx>`__

- `Kernel-based Virtual Machine
  (KVM) <http://www.linux-kvm.org/page/Main_Page>`__

- `Linux Containers (LXC) <https://linuxcontainers.org/>`__

- `Quick Emulator (QEMU) <http://wiki.qemu.org/Manual>`__

- `User Mode Linux (UML) <http://user-mode-linux.sourceforge.net/>`__

- `VMware
  vSphere <http://www.vmware.com/products/vsphere-hypervisor/support.html>`__

- `Xen <http://www.xen.org/support/documentation.html>`__

For more information about hypervisors, see the
`Hypervisors <https://docs.openstack.org/ocata/config-reference/compute/hypervisors.html>`__
section in the OpenStack Configuration Reference.

Projects, users, and roles
~~~~~~~~~~~~~~~~~~~~~~~~~~

The Compute system is designed to be used by different consumers in the
form of projects on a shared system, with role-based access assignments.
Roles control the actions that a user is allowed to perform.

Projects are isolated resource containers that form the principal
organizational structure within the Compute service. They consist of an
individual VLAN, volumes, instances, images, keys, and users. A user
can specify the project by appending ``project_id`` to their access key.
If no project is specified in the API request, Compute attempts to use a
project with the same ID as the user.

For projects, you can use quota controls to limit the:

- Number of volumes that can be created.

- Number of processor cores and the amount of RAM that can be
  allocated.

- Floating IP addresses assigned to any instance when it launches. This
  allows instances to have the same publicly accessible IP addresses.

- Fixed IP addresses assigned to the same instance when it launches.
  This allows instances to have the same publicly or privately
  accessible IP addresses.

Roles control the actions a user is allowed to perform. By default, most
actions do not require a particular role, but you can configure them by
editing the ``policy.json`` file for user roles. For example, a rule can
be defined so that a user must have the ``admin`` role in order to be
able to allocate a public IP address.

A project limits users' access to particular images. Each user is
assigned a user name and password. Keypairs granting access to an
instance are enabled for each user, but quotas are set, so that each
project can control resource consumption across available hardware
resources.

.. note::

   Earlier versions of OpenStack used the term ``tenant`` instead of
   ``project``. Because of this legacy terminology, some command-line tools
   use ``--tenant_id`` where you would normally expect to enter a
   project ID.

Block storage
~~~~~~~~~~~~~

OpenStack provides two classes of block storage: ephemeral storage
and persistent volumes.

**Ephemeral storage**

Ephemeral storage includes a root ephemeral volume and an additional
ephemeral volume.

The root disk is associated with an instance, and exists only for the
life of that instance. Generally, it is used to store an
instance's root file system, persists across the guest operating system
reboots, and is removed on instance deletion. The size of the root
ephemeral volume is defined by the flavor of an instance.

In addition to the ephemeral root volume, all default types of flavors,
except ``m1.tiny``, which is the smallest one, provide an additional
ephemeral block device sized between 20 and 160 GB (a configurable value
to suit an environment). It is represented as a raw block device with no
partition table or file system. A cloud-aware operating system can
discover, format, and mount such a storage device. OpenStack Compute
defines the default file system for different operating systems as Ext4
for Linux distributions, VFAT for non-Linux and non-Windows operating
systems, and NTFS for Windows. However, it is possible to specify any
other filesystem type by using the ``virt_mkfs`` or
``default_ephemeral_format`` configuration options.

.. note::

   For example, the ``cloud-init`` package included in Ubuntu's stock
   cloud image, by default, formats this space as an Ext4 file system
   and mounts it on ``/mnt``. This is a cloud-init feature, and is not
   an OpenStack mechanism. OpenStack only provisions the raw storage.
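
As a sketch, overriding the default ephemeral file system in
``nova.conf`` looks like the following; ``xfs`` is an example value, and
the corresponding ``mkfs`` tool must be available on the compute host:

.. code-block:: ini

   [DEFAULT]
   # Format new ephemeral disks as XFS instead of the per-OS default.
   default_ephemeral_format = xfs
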
**Persistent volume**

A persistent volume is represented by a persistent virtualized block
device independent of any particular instance, and provided by OpenStack
Block Storage.

Only a single configured instance can access a persistent volume at a
time. Concurrent access from multiple instances requires a traditional
network file system, like NFS or CIFS, or a cluster file system such as
GlusterFS. These systems can be built within an OpenStack cluster, or
provisioned outside of it, but OpenStack software does not provide these
features.

You can configure a persistent volume as bootable and use it to provide
a persistent virtual instance similar to the traditional non-cloud-based
virtualization system. It is still possible for the resulting instance
to keep ephemeral storage, depending on the flavor selected. In this
case, the root file system can be on the persistent volume, and its
state is maintained, even if the instance is shut down. For more
information about this type of configuration, see `Introduction to the
Block Storage service <https://docs.openstack.org/ocata/config-reference/block-storage/block-storage-overview.html>`_
in the OpenStack Configuration Reference.

EC2 compatibility API
~~~~~~~~~~~~~~~~~~~~~

In addition to the native compute API, OpenStack provides an
EC2-compatible API. This API allows legacy workflows built for EC2
to work with OpenStack.

.. warning::

   Nova's in-tree EC2-compatible API is deprecated.
   The `ec2-api project <https://git.openstack.org/cgit/openstack/ec2-api/>`_
   is working to implement the EC2 API.

You can use numerous third-party tools and language-specific SDKs to
interact with OpenStack clouds. You can use both native and
compatibility APIs. Some of the more popular third-party tools are:

Euca2ools
   A popular open source command-line tool for interacting with the EC2
   API. This is convenient for multi-cloud environments where EC2 is
   the common API, or for transitioning from EC2-based clouds to
   OpenStack. For more information, see the `Eucalyptus
   Documentation <http://docs.hpcloud.com/eucalyptus>`__.

Hybridfox
   A Firefox browser add-on that provides a graphical interface to many
   popular public and private cloud technologies, including OpenStack.
   For more information, see the `hybridfox
   site <http://code.google.com/p/hybridfox/>`__.

boto
   A Python library for interacting with Amazon Web Services. You can use
   this library to access OpenStack through the EC2 compatibility API.
   For more information, see the `boto project page on
   GitHub <https://github.com/boto/boto>`__.

fog
   A Ruby cloud services library. It provides methods to interact
   with a large number of cloud and virtualization platforms, including
   OpenStack. For more information, see the `fog
   site <https://rubygems.org/gems/fog>`__.

php-opencloud
   A PHP SDK designed to work with most OpenStack-based cloud
   deployments, as well as Rackspace public cloud. For more
   information, see the `php-opencloud
   site <http://www.php-opencloud.com>`__.

Building blocks
~~~~~~~~~~~~~~~

In OpenStack the base operating system is usually copied from an image
stored in the OpenStack Image service. This is the most common case and
results in an ephemeral instance that starts from a known template state
and loses all accumulated states on virtual machine deletion. It is also
possible to put an operating system on a persistent volume in the
OpenStack Block Storage volume system. This gives a more traditional,
persistent system that accumulates states which are preserved on the
OpenStack Block Storage volume across the deletion and re-creation of
the virtual machine. To get a list of available images on your system,
run:

.. code-block:: console

   $ openstack image list
   +--------------------------------------+-----------------------------+--------+
   | ID                                   | Name                        | Status |
   +--------------------------------------+-----------------------------+--------+
   | aee1d242-730f-431f-88c1-87630c0f07ba | Ubuntu 14.04 cloudimg amd64 | active |
   | 0b27baa1-0ca6-49a7-b3f4-48388e440245 | Ubuntu 14.10 cloudimg amd64 | active |
   | df8d56fc-9cea-4dfd-a8d3-28764de3cb08 | jenkins                     | active |
   +--------------------------------------+-----------------------------+--------+

The displayed image attributes are:

``ID``
   Automatically generated UUID of the image

``Name``
   Free form, human-readable name for the image

``Status``
   The status of the image. Images marked ``ACTIVE`` are available for
   use.

``Server``
   For images that are created as snapshots of running instances, this
   is the UUID of the instance the snapshot derives from. For uploaded
   images, this field is blank.

Virtual hardware templates are called ``flavors``. By default, these are
configurable by admin users; however, that behavior can be changed by
redefining the access controls for ``compute_extension:flavormanage`` in
``/etc/nova/policy.json`` on the ``compute-api`` server.
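
For illustration, a sketch of that access control in ``policy.json``
follows; ``rule:admin_api`` reflects the historical default, and the
exact rule name and syntax can differ between releases, so verify them
against the policy file shipped with your version of Compute:

.. code-block:: json

   {
       "compute_extension:flavormanage": "rule:admin_api"
   }
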
To get a list of flavors that are available on your system, run:

.. code-block:: console

   $ openstack flavor list
   +-----+-----------+-------+------+-----------+-------+-----------+
   | ID  | Name      | RAM   | Disk | Ephemeral | VCPUs | Is_Public |
   +-----+-----------+-------+------+-----------+-------+-----------+
   | 1   | m1.tiny   | 512   | 1    | 0         | 1     | True      |
   | 2   | m1.small  | 2048  | 20   | 0         | 1     | True      |
   | 3   | m1.medium | 4096  | 40   | 0         | 2     | True      |
   | 4   | m1.large  | 8192  | 80   | 0         | 4     | True      |
   | 5   | m1.xlarge | 16384 | 160  | 0         | 8     | True      |
   +-----+-----------+-------+------+-----------+-------+-----------+

Compute service architecture
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

These basic categories describe the service architecture and information
about the cloud controller.

**API server**

At the heart of the cloud framework is an API server, which makes
command and control of the hypervisor, storage, and networking
programmatically available to users.

The API endpoints are basic HTTP web services which handle
authentication, authorization, and basic command and control functions
using various API interfaces under the Amazon, Rackspace, and related
models. This enables API compatibility with multiple existing tool sets
created for interaction with offerings from other vendors. This broad
compatibility prevents vendor lock-in.

**Message queue**

A messaging queue brokers the interaction between compute nodes
(processing), the networking controllers (software which controls
network infrastructure), API endpoints, the scheduler (determines which
physical hardware to allocate to a virtual resource), and similar
components. Communication to and from the cloud controller is handled by
HTTP requests through multiple API endpoints.

A typical message passing event begins with the API server receiving a
request from a user. The API server authenticates the user and ensures
that they are permitted to issue the subject command. The availability
of objects implicated in the request is evaluated and, if available, the
request is routed to the queuing engine for the relevant workers.
Workers continually listen to the queue based on their role, and
occasionally their type and host name. When an applicable work request
arrives on the queue, the worker takes assignment of the task and begins
executing it. Upon completion, a response is dispatched to the queue
which is received by the API server and relayed to the originating user.
Database entries are queried, added, or removed as necessary during the
process.

**Compute worker**

Compute workers manage computing instances on host machines. The API
dispatches commands to compute workers to complete these tasks:

- Run instances

- Delete instances (Terminate instances)

- Reboot instances

- Attach volumes

- Detach volumes

- Get console output

**Network Controller**

The Network Controller manages the networking resources on host
machines. The API server dispatches commands through the message queue,
which are subsequently processed by Network Controllers. Specific
operations include:

- Allocating fixed IP addresses

- Configuring VLANs for projects

- Configuring networks for compute nodes

@ -1,464 +0,0 @@
|
||||
.. _section_configuring-compute-migrations:
|
||||
|
||||
=========================
|
||||
Configure live migrations
|
||||
=========================
|
||||
|
||||
Migration enables an administrator to move a virtual machine instance
|
||||
from one compute host to another. A typical scenario is planned
|
||||
maintenance on the source host, but
|
||||
migration can also be useful to redistribute
|
||||
the load when many VM instances are running on a specific physical
|
||||
machine.
|
||||
|
||||
This document covers live migrations using the
|
||||
:ref:`configuring-migrations-kvm-libvirt`
|
||||
and :ref:`configuring-migrations-xenserver` hypervisors.
|
||||
|
||||
.. :ref:`_configuring-migrations-kvm-libvirt`
|
||||
.. :ref:`_configuring-migrations-xenserver`
|
||||
|
||||
.. note::
|
||||
|
||||
Not all Compute service hypervisor drivers support live-migration,
|
||||
or support all live-migration features.
|
||||
|
||||
Consult the `Hypervisor Support Matrix
|
||||
<https://docs.openstack.org/developer/nova/support-matrix.html>`_ to
|
||||
determine which hypervisors support live-migration.
|
||||
|
||||
See the `Hypervisor configuration pages
|
||||
<https://docs.openstack.org/ocata/config-reference/compute/hypervisors.html>`_
|
||||
for details on hypervisor-specific configuration settings.
|
||||
|
||||
The migration types are:
|
||||
|
||||
- **Non-live migration**, also known as cold migration or simply
|
||||
migration.
|
||||
|
||||
The instance is shut down, then moved to another
|
||||
hypervisor and restarted. The instance recognizes that it was
|
||||
rebooted, and the application running on the instance is disrupted.
|
||||
|
||||
This section does not cover cold migration.
|
||||
|
||||
- **Live migration**
|
||||
|
||||
The instance keeps running throughout the migration.
|
||||
This is useful when it is not possible or desirable to stop the application
|
||||
running on the instance.
|
||||
|
||||
Live migrations can be classified further by the way they treat instance
|
||||
storage:
|
||||
|
||||
- **Shared storage-based live migration**. The instance has ephemeral
|
||||
disks that are located on storage shared between the source and
|
||||
destination hosts.
|
||||
|
||||
- **Block live migration**, or simply block migration.
|
||||
The instance has ephemeral disks that
|
||||
are not shared between the source and destination hosts.
|
||||
Block migration is
|
||||
incompatible with read-only devices such as CD-ROMs and
|
||||
`Configuration Drive (config\_drive) <https://docs.openstack.org/user-guide/cli-config-drive.html>`_.
|
||||
|
||||
- **Volume-backed live migration**. Instances use volumes
|
||||
rather than ephemeral disks.
|
||||
|
||||
Block live migration requires copying disks from the source to the
|
||||
destination host. It takes more time and puts more load on the network.
|
||||
Shared-storage and volume-backed live migration does not copy disks.
|
||||
|
||||
.. note::
|
||||
|
||||
In a multi-cell cloud, instances can be live migrated to a
|
||||
different host in the same cell, but not across cells.
|
||||
|
||||
The following sections describe how to configure your hosts
|
||||
for live migrations using the KVM and XenServer hypervisors.
|
||||
|
||||
.. _configuring-migrations-kvm-libvirt:
|
||||
|
||||
KVM-libvirt
|
||||
~~~~~~~~~~~
|
||||
|
||||
.. :ref:`_configuring-migrations-kvm-general`
|
||||
.. :ref:`_configuring-migrations-kvm-block-and-volume-migration`
|
||||
.. :ref:`_configuring-migrations-kvm-shared-storage`
|
||||
|
||||
.. _configuring-migrations-kvm-general:
|
||||
|
||||
General configuration
|
||||
---------------------
|
||||
|
||||
To enable any type of live migration, configure the compute hosts according
|
||||
to the instructions below:
|
||||
|
||||
#. Set the following parameters in ``nova.conf`` on all compute hosts:
|
||||
|
||||
- ``vncserver_listen=0.0.0.0``
|
||||
|
||||
You must not make the VNC server listen to the IP address of its
|
||||
compute host, since that addresses changes when the instance is migrated.
|
||||
|
||||
.. important::
|
||||
Since this setting allows VNC clients from any IP address to connect
|
||||
to instance consoles, you must take additional measures like secure
|
||||
networks or firewalls to prevent potential attackers from gaining
|
||||
access to instances.
|
||||
|
||||
- ``instances_path`` must have the same value for all compute hosts.
|
||||
In this guide, the value ``/var/lib/nova/instances`` is assumed.
|
||||
|
||||
#. Ensure that name resolution on all compute hosts is identical, so
|
||||
that they can connect each other through their hostnames.
|
||||
|
||||
If you use ``/etc/hosts`` for name resolution and enable SELinux,
|
||||
ensure
|
||||
that ``/etc/hosts`` has the correct SELinux context:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
# restorecon /etc/hosts
|
||||
|
||||
#. Enable password-less SSH so that
|
||||
root on one compute host can log on to any other compute host
|
||||
without providing a password.
|
||||
The ``libvirtd`` daemon, which runs as root,
|
||||
uses the SSH protocol to copy the instance to the destination
|
||||
and can't know the passwords of all compute hosts.
|
||||
|
||||
You may, for example, compile root's public SSH keys on all compute hosts
|
||||
into an ``authorized_keys`` file and deploy that file to the compute hosts.
|
||||
|
||||
#. Configure the firewalls to allow libvirt to
|
||||
communicate between compute hosts.
|
||||
|
||||
By default, libvirt uses the TCP
|
||||
port range from 49152 to 49261 for copying memory and disk contents.
|
||||
Compute hosts
|
||||
must accept connections in this range.
|
||||
|
||||
For information about ports used by libvirt,
|
||||
see the `libvirt documentation <http://libvirt.org/remote.html#Remote_libvirtd_configuration>`_.
|
||||
|
||||
.. important::
|
||||
Be mindful
|
||||
of the security risks introduced by opening ports.
|
||||
|
||||
.. _configuring-migrations-kvm-block-and-volume-migration:
|
||||
|
||||
Block migration, volume-based live migration
|
||||
--------------------------------------------
|
||||
|
||||
No additional configuration is required for block migration and volume-backed
|
||||
live migration.
|
||||
|
||||
Be aware that block migration adds load to the network and storage subsystems.
|
||||
|
||||
.. _configuring-migrations-kvm-shared-storage:
|
||||
|
||||
Shared storage
|
||||
--------------
|
||||
|
||||
Compute hosts have many options for sharing storage,
|
||||
for example NFS, shared disk array LUNs,
|
||||
Ceph or GlusterFS.
|
||||
|
||||
The next steps show how a regular Linux system
|
||||
might be configured as an NFS v4 server for live migration.
|
||||
For detailed information and alternative ways to configure
|
||||
NFS on Linux, see instructions for
|
||||
`Ubuntu <https://help.ubuntu.com/community/SettingUpNFSHowTo>`_,
|
||||
`RHEL and derivatives <https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Storage_Administration_Guide/nfs-serverconfig.html>`_
|
||||
or `SLES and OpenSUSE <https://www.suse.com/documentation/sles-12/book_sle_admin/data/sec_nfs_configuring-nfs-server.html>`_.
|
||||
|
||||
#. Ensure that UID and GID of the nova user
|
||||
are identical on the compute hosts and the NFS server.
|
||||
|
||||
#. Create a directory
|
||||
with enough disk space for all
|
||||
instances in the cloud, owned by user nova. In this guide, we
|
||||
assume ``/var/lib/nova/instances``.
|
||||
|
||||
#. Set the execute/search bit on the ``instances`` directory:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ chmod o+x /var/lib/nova/instances
|
||||
|
||||
This allows qemu to access the ``instances`` directory tree.
|
||||
|
||||
#. Export ``/var/lib/nova/instances``
|
||||
to the compute hosts. For example, add the following line to
|
||||
``/etc/exports``:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
/var/lib/nova/instances *(rw,sync,fsid=0,no_root_squash)
|
||||
|
||||
The asterisk permits access to any NFS client. The option ``fsid=0``
|
||||
exports the instances directory as the NFS root.
|
||||
|
||||
After setting up the NFS server, mount the remote filesystem
|
||||
on all compute hosts.
|
||||
|
||||
#. Assuming the NFS server's hostname is ``nfs-server``,
|
||||
add this line to ``/etc/fstab`` to mount the NFS root:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
nfs-server:/ /var/lib/nova/instances nfs4 defaults 0 0
|
||||
|
||||
#. Test NFS by mounting the instances directory and
|
||||
check access permissions for the nova user:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ sudo mount -a -v
|
||||
$ ls -ld /var/lib/nova/instances/
|
||||
drwxr-xr-x. 2 nova nova 6 Mar 14 21:30 /var/lib/nova/instances/
|
||||
|
||||
.. _configuring-migrations-kvm-advanced:
|
||||
|
||||
Advanced configuration for KVM and QEMU
|
||||
---------------------------------------
|
||||
|
||||
Live migration copies the instance's memory from the source to the
|
||||
destination compute host. After a memory page has been copied,
|
||||
the instance
|
||||
may write to it again, so that it has to be copied again.
|
||||
Instances that
|
||||
frequently write to different memory pages can overwhelm the
|
||||
memory copy
|
||||
process and prevent the live migration from completing.
|
||||
|
||||
This section covers configuration settings that can help live
|
||||
migration
|
||||
of memory-intensive instances succeed.
|
||||
|
||||
#. **Live migration completion timeout**
|
||||
|
||||
The Compute service aborts a migration when it has been running
|
||||
for too long.
|
||||
The timeout is calculated based on the instance size, which is the
|
||||
instance's
|
||||
memory size in GiB. In the case of block migration, the size of
|
||||
ephemeral storage in GiB is added.
|
||||
|
||||
The timeout in seconds is the instance size multiplied by the
|
||||
configurable parameter
|
||||
``live_migration_completion_timeout``, whose default is 800. For
|
||||
example,
|
||||
shared-storage live migration of an instance with 8GiB memory will
|
||||
time out after 6400 seconds.

#. **Live migration progress timeout**

   The Compute service also aborts a live migration when it detects that
   memory copy is not making progress for a certain time. You can set
   this time, in seconds, through the configurable parameter
   ``live_migration_progress_timeout``.

   In Ocata, the default value of ``live_migration_progress_timeout`` is
   0, which disables progress timeouts. You should not change this
   value, since the algorithm that detects memory copy progress has been
   determined to be unreliable. It may be re-enabled in future releases.

#. **Instance downtime**

   Near the end of the memory copy, the instance is paused for a short
   time so that the remaining few pages can be copied without
   interference from instance memory writes. The Compute service
   initializes this time to a small value that depends on the instance
   size, typically around 50 milliseconds. When it notices that the
   memory copy does not make sufficient progress, it increases the time
   gradually.

   You can influence the instance downtime algorithm with the help of
   three configuration variables on the compute hosts:

   .. code-block:: ini

      live_migration_downtime = 500
      live_migration_downtime_steps = 10
      live_migration_downtime_delay = 75

   ``live_migration_downtime`` sets the maximum permitted downtime for a
   live migration, in *milliseconds*. The default is 500.

   ``live_migration_downtime_steps`` sets the total number of adjustment
   steps until ``live_migration_downtime`` is reached. The default is 10
   steps.

   ``live_migration_downtime_delay`` sets the time interval between two
   adjustment steps in *seconds*. The default is 75. With these
   defaults, the permitted downtime therefore grows step by step until
   it reaches 500 milliseconds after ten adjustments.

#. **Auto-convergence**

   One strategy for a successful live migration of a memory-intensive
   instance is slowing the instance down. This is called
   auto-convergence. Both libvirt and QEMU implement this feature by
   automatically throttling the instance's CPU when memory copy delays
   are detected.

   Auto-convergence is disabled by default. You can enable it by setting
   ``live_migration_permit_auto_convergence=true``.

   .. caution::

      Before enabling auto-convergence, make sure that the instance's
      application tolerates a slow-down.

      Be aware that auto-convergence does not guarantee live migration
      success.

#. **Post-copy**

   Live migration of a memory-intensive instance is certain to succeed
   when you enable post-copy. This feature, implemented by libvirt and
   QEMU, activates the virtual machine on the destination host before
   all of its memory has been copied. When the virtual machine accesses
   a page that is missing on the destination host, the resulting page
   fault is resolved by copying the page from the source host.

   Post-copy is disabled by default. You can enable it by setting
   ``live_migration_permit_post_copy=true``.

   When you enable both auto-convergence and post-copy, auto-convergence
   remains disabled.
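
   As a hedged sketch, both switches are set in ``nova.conf`` on the
   compute hosts (the ``[libvirt]`` section placement is an assumption
   based on the Ocata libvirt driver options):

   .. code-block:: ini

      [libvirt]
      live_migration_permit_auto_convergence = true
      live_migration_permit_post_copy = true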

   .. caution::

      The page faults introduced by post-copy can slow the instance
      down.

      When the network connection between source and destination host is
      interrupted, page faults cannot be resolved anymore and the
      instance is rebooted.

.. TODO Bernd: I *believe* that it is certain to succeed,
.. but perhaps I am missing something.

The full list of live migration configuration parameters is documented
in the `OpenStack Configuration Reference Guide
<https://docs.openstack.org/ocata/config-reference/compute/config-options.html>`_.

.. _configuring-migrations-xenserver:

XenServer
~~~~~~~~~

.. :ref:Shared Storage
.. :ref:Block migration

.. _configuring-migrations-xenserver-shared-storage:

Shared storage
--------------

**Prerequisites**

- **Compatible XenServer hypervisors**. For more information, see the
  `Requirements for Creating Resource Pools <http://docs.vmd.citrix.com/XenServer/6.0.0/1.0/en_gb/reference.html#pooling_homogeneity_requirements>`_
  section of the XenServer Administrator's Guide.

- **Shared storage**. An NFS export, visible to all XenServer hosts.

  .. note::

     For the supported NFS versions, see the
     `NFS VHD <http://docs.vmd.citrix.com/XenServer/6.0.0/1.0/en_gb/reference.html#id1002701>`_
     section of the XenServer Administrator's Guide.

To use shared storage live migration with XenServer hypervisors, the
hosts must be joined to a XenServer pool. To create that pool, a host
aggregate must be created with specific metadata. This metadata is used
by the XAPI plug-ins to establish the pool.

**Using shared storage live migrations with XenServer Hypervisors**

#. Add an NFS VHD storage to your master XenServer, and set it as the
   default storage repository. For more information, see NFS VHD in the
   XenServer Administrator's Guide.

#. Configure all compute nodes to use the default storage repository
   (``sr``) for pool operations. Add this line to your ``nova.conf``
   configuration files on all compute nodes:

   .. code-block:: ini

      sr_matching_filter=default-sr:true

#. Create a host aggregate. This command creates the aggregate, and then
   displays a table that contains the ID of the new aggregate:

   .. code-block:: console

      $ openstack aggregate create --zone AVAILABILITY_ZONE POOL_NAME

   Add metadata to the aggregate, to mark it as a hypervisor pool:

   .. code-block:: console

      $ openstack aggregate set --property hypervisor_pool=true AGGREGATE_ID

      $ openstack aggregate set --property operational_state=created AGGREGATE_ID

   Make the first compute node part of that aggregate:

   .. code-block:: console

      $ openstack aggregate add host AGGREGATE_ID MASTER_COMPUTE_NAME

   The host is now part of a XenServer pool.
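
   If you want to double-check, the pool can be listed from the pool
   master with the XenServer command line (a hedged verification step;
   ``xe`` is XenServer tooling, not part of OpenStack):

   .. code-block:: console

      # xe pool-list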

#. Add hosts to the pool:

   .. code-block:: console

      $ openstack aggregate add host AGGREGATE_ID COMPUTE_HOST_NAME

   .. note::

      The added compute node and the host will shut down to join the
      host to the XenServer pool. The operation will fail if any server
      other than the compute node is running or suspended on the host.

.. _configuring-migrations-xenserver-block-migration:

Block migration
---------------

- **Compatible XenServer hypervisors**.
  The hypervisors must support the Storage XenMotion feature.
  See your XenServer manual to make sure your edition has this feature.

.. note::

   - To use block migration, you must use the ``--block-migrate``
     parameter with the live migration command.

   - Block migration works only with EXT local storage repositories,
     and the server must not have any volumes attached.
@ -1,367 +0,0 @@
.. _compute-cpu-topologies:

==============
CPU topologies
==============

The NUMA topology and CPU pinning features in OpenStack provide high-level
control over how instances run on hypervisor CPUs and the topology of
virtual CPUs available to instances. These features help minimize latency
and maximize performance.

SMP, NUMA, and SMT
~~~~~~~~~~~~~~~~~~

Symmetric multiprocessing (SMP)
   SMP is a design found in many modern multi-core systems. In an SMP
   system, there are two or more CPUs and these CPUs are connected by
   some interconnect. This provides CPUs with equal access to system
   resources like memory and input/output ports.

Non-uniform memory access (NUMA)
   NUMA is a derivative of the SMP design that is found in many
   multi-socket systems. In a NUMA system, system memory is divided into
   cells or nodes that are associated with particular CPUs. Requests for
   memory on other nodes are possible through an interconnect bus.
   However, bandwidth across this shared bus is limited. As a result,
   competition for this resource can incur performance penalties.

Simultaneous Multi-Threading (SMT)
   SMT is a design complementary to SMP. Whereas CPUs in SMP systems
   share a bus and some memory, CPUs in SMT systems share many more
   components. CPUs that share components are known as thread siblings.
   All CPUs appear as usable CPUs on the system and can execute
   workloads in parallel. However, as with NUMA, threads compete for
   shared resources.

In OpenStack, SMP CPUs are known as *cores*, NUMA cells or nodes are known
as *sockets*, and SMT CPUs are known as *threads*. For example, a
quad-socket, eight core system with Hyper-Threading would have four
sockets, eight cores per socket and two threads per core, for a total of
64 CPUs.

Configuring compute nodes for instances with NUMA placement policies
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Hyper-V is configured by default to allow instances to span multiple NUMA
nodes, regardless of whether the instances have been configured to span
only N NUMA nodes. This behaviour allows Hyper-V instances to have up to
64 vCPUs and 1 TB of memory.

You can check whether NUMA spanning is enabled by running the following
PowerShell command:

.. code-block:: console

   (Get-VMHost).NumaSpanningEnabled

To disable this behaviour, the host must be configured to disable NUMA
spanning, which can be done by executing the following PowerShell
commands:

.. code-block:: console

   Set-VMHost -NumaSpanningEnabled $false
   Restart-Service vmms

To restore this behaviour, execute the following PowerShell commands:

.. code-block:: console

   Set-VMHost -NumaSpanningEnabled $true
   Restart-Service vmms

The ``vmms`` service (Virtual Machine Management Service) is responsible
for managing the Hyper-V VMs. The VMs will still run while the service is
down or restarting, but they will not be manageable by the
``nova-compute`` service. For the host NUMA spanning configuration to
take effect, the VMs must be restarted.

Hyper-V does not allow instances with a NUMA topology to have dynamic
memory allocation turned on. The Hyper-V driver will ignore the
configured ``dynamic_memory_ratio`` from the given ``nova.conf`` file
when spawning instances with a NUMA topology.

Customizing instance NUMA placement policies
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. important::

   The functionality described below is currently only supported by the
   libvirt/KVM and Hyper-V drivers.

When running workloads on NUMA hosts, it is important that the vCPUs
executing processes are on the same NUMA node as the memory used by these
processes. This ensures all memory accesses are local to the node and
thus do not consume the limited cross-node memory bandwidth, which adds
latency to memory accesses. Similarly, large pages are assigned from
memory and benefit from the same performance improvements as memory
allocated using standard pages. Thus, they also should be local. Finally,
PCI devices are directly associated with specific NUMA nodes for the
purposes of DMA. Instances that use PCI or SR-IOV devices should be
placed on the NUMA node associated with these devices.

By default, an instance floats across all NUMA nodes on a host. NUMA
awareness can be enabled implicitly through the use of huge pages or
pinned CPUs or explicitly through the use of flavor extra specs or image
metadata. In all cases, the ``NUMATopologyFilter`` filter must be
enabled. Details on this filter are provided in the `Scheduling`_
configuration guide.

.. caution::

   The NUMA node(s) used are normally chosen at random. However, if a PCI
   passthrough or SR-IOV device is attached to the instance, then the
   NUMA node that the device is associated with will be used. This can
   provide important performance improvements. However, booting a large
   number of similar instances can result in unbalanced NUMA node usage.
   Care should be taken to mitigate this issue. See this `discussion`_
   for more details.

.. caution::

   Inadequate per-node resources will result in scheduling failures.
   Resources that are specific to a node include not only CPUs and
   memory, but also PCI and SR-IOV resources. It is not possible to use
   multiple resources from different nodes without requesting a
   multi-node layout. As such, it may be necessary to ensure PCI or
   SR-IOV resources are associated with the same NUMA node or force a
   multi-node layout.

When used, NUMA awareness allows the operating system of the instance to
intelligently schedule the workloads that it runs and minimize cross-node
memory bandwidth. To restrict an instance's vCPUs to a single host NUMA
node, run:

.. code-block:: console

   $ openstack flavor set m1.large --property hw:numa_nodes=1

Some workloads have very demanding requirements for memory access latency
or bandwidth that exceed the memory bandwidth available from a single
NUMA node. For such workloads, it is beneficial to spread the instance
across multiple host NUMA nodes, even if the instance's RAM/vCPUs could
theoretically fit on a single NUMA node. To force an instance's vCPUs to
spread across two host NUMA nodes, run:

.. code-block:: console

   $ openstack flavor set m1.large --property hw:numa_nodes=2

The allocation of instance vCPUs and memory from different host NUMA
nodes can be configured. This allows for asymmetric allocation of vCPUs
and memory, which can be important for some workloads. To spread the 6
vCPUs and 6 GB of memory of an instance across two NUMA nodes and create
an asymmetric 1:2 vCPU and memory mapping between the two nodes, run:

.. code-block:: console

   $ openstack flavor set m1.large --property hw:numa_nodes=2
   # configure guest node 0
   $ openstack flavor set m1.large \
     --property hw:numa_cpus.0=0,1 \
     --property hw:numa_mem.0=2048
   # configure guest node 1
   $ openstack flavor set m1.large \
     --property hw:numa_cpus.1=2,3,4,5 \
     --property hw:numa_mem.1=4096

.. note::

   Hyper-V does not support asymmetric NUMA topologies, and the Hyper-V
   driver will not spawn instances with such topologies.

For more information about the syntax for ``hw:numa_nodes``,
``hw:numa_cpus.N`` and ``hw:numa_mem.N``, refer to the `Flavors`_ guide.

Customizing instance CPU pinning policies
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. important::

   The functionality described below is currently only supported by the
   libvirt/KVM driver. Hyper-V does not support CPU pinning.

By default, instance vCPU processes are not assigned to any particular
host CPU; instead, they float across host CPUs like any other process.
This allows for features like overcommitting of CPUs. In heavily
contended systems, this provides optimal system performance at the
expense of performance and latency for individual instances.

Some workloads require real-time or near real-time behavior, which is not
possible with the latency introduced by the default CPU policy. For such
workloads, it is beneficial to control which host CPUs are bound to an
instance's vCPUs. This process is known as pinning. No instance with
pinned CPUs can use the CPUs of another pinned instance, thus preventing
resource contention between instances. To configure a flavor to use
pinned vCPUs, use a dedicated CPU policy. To force this, run:

.. code-block:: console

   $ openstack flavor set m1.large --property hw:cpu_policy=dedicated

.. caution::

   Host aggregates should be used to separate pinned instances from
   unpinned instances as the latter will not respect the resourcing
   requirements of the former.

When running workloads on SMT hosts, it is important to be aware of the
impact that thread siblings can have. Thread siblings share a number of
components and contention on these components can impact performance. To
configure how to use threads, a CPU thread policy should be specified.
For workloads where sharing benefits performance, use thread siblings. To
force this, run:

.. code-block:: console

   $ openstack flavor set m1.large \
     --property hw:cpu_policy=dedicated \
     --property hw:cpu_thread_policy=require

For other workloads where performance is impacted by contention for
resources, use non-thread siblings or non-SMT hosts. To force this, run:

.. code-block:: console

   $ openstack flavor set m1.large \
     --property hw:cpu_policy=dedicated \
     --property hw:cpu_thread_policy=isolate

Finally, for workloads where performance is minimally impacted, use
thread siblings if available. This is the default, but it can be set
explicitly:

.. code-block:: console

   $ openstack flavor set m1.large \
     --property hw:cpu_policy=dedicated \
     --property hw:cpu_thread_policy=prefer

For more information about the syntax for ``hw:cpu_policy`` and
``hw:cpu_thread_policy``, refer to the `Flavors`_ guide.

Applications are frequently packaged as images. For applications that
require real-time or near real-time behavior, configure image metadata to
ensure created instances are always pinned regardless of flavor. To
configure an image to use pinned vCPUs and avoid thread siblings, run:

.. code-block:: console

   $ openstack image set [IMAGE_ID] \
     --property hw_cpu_policy=dedicated \
     --property hw_cpu_thread_policy=isolate

If the flavor specifies a CPU policy of ``dedicated`` then that policy
will be used. If the flavor explicitly specifies a CPU policy of
``shared`` and the image specifies no policy or a policy of ``shared``
then the ``shared`` policy will be used, but if the image specifies a
policy of ``dedicated`` an exception will be raised. By setting a
``shared`` policy through flavor extra specs, administrators can prevent
users from configuring CPU policies in images and impacting resource
utilization. To configure this policy, run:

.. code-block:: console

   $ openstack flavor set m1.large --property hw:cpu_policy=shared

If the flavor does not specify a CPU thread policy then the CPU thread
policy specified by the image (if any) will be used. If both the flavor
and image specify a CPU thread policy then they must specify the same
policy, otherwise an exception will be raised.

.. note::

   There is no correlation required between the NUMA topology exposed in
   the instance and how the instance is actually pinned on the host. This
   is by design. See this `invalid bug
   <https://bugs.launchpad.net/nova/+bug/1466780>`_ for more information.

For more information about image metadata, refer to the `Image metadata`_
guide.

Customizing instance CPU topologies
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. important::

   The functionality described below is currently only supported by the
   libvirt/KVM driver.

In addition to configuring how an instance is scheduled on host CPUs, it
is possible to configure how CPUs are represented in the instance itself.
By default, when instance NUMA placement is not specified, a topology of
N sockets, each with one core and one thread, is used for an instance,
where N corresponds to the number of instance vCPUs requested. When
instance NUMA placement is specified, the number of sockets is fixed to
the number of host NUMA nodes to use and the total number of instance
CPUs is split over these sockets.

Some workloads benefit from a custom topology. For example, in some
operating systems, a different license may be needed depending on the
number of CPU sockets. To configure a flavor to use a maximum of two
sockets, run:

.. code-block:: console

   $ openstack flavor set m1.large --property hw:cpu_sockets=2

Similarly, to configure a flavor to use one core and one thread, run:

.. code-block:: console

   $ openstack flavor set m1.large \
     --property hw:cpu_cores=1 \
     --property hw:cpu_threads=1

.. caution::

   If specifying all values, the product of sockets multiplied by cores
   multiplied by threads must equal the number of instance vCPUs. If
   specifying any one of these values or the product of two values, the
   values must be a factor of the number of instance vCPUs to prevent an
   exception. For example, specifying ``hw:cpu_sockets=2`` on an
   instance with an odd number of vCPUs fails. Similarly, specifying
   ``hw:cpu_cores=2`` and ``hw:cpu_threads=4`` on an instance with ten
   vCPUs fails.

For more information about the syntax for ``hw:cpu_sockets``,
``hw:cpu_cores`` and ``hw:cpu_threads``, refer to the `Flavors`_ guide.

It is also possible to set upper limits on the number of sockets, cores,
and threads used. Unlike the hard values above, it is not necessary for
this exact number to be used because it only provides a limit. This can
be used to provide some flexibility in scheduling, while ensuring certain
limits are not exceeded. For example, to ensure no more than two sockets
are defined in the instance topology, run:

.. code-block:: console

   $ openstack flavor set m1.large --property=hw:cpu_max_sockets=2

For more information about the syntax for ``hw:cpu_max_sockets``,
``hw:cpu_max_cores``, and ``hw:cpu_max_threads``, refer to the `Flavors`_
guide.

Applications are frequently packaged as images. For applications that
prefer certain CPU topologies, configure image metadata to hint that
created instances should have a given topology regardless of flavor. To
configure an image to request a two-socket, four-core per socket
topology, run:

.. code-block:: console

   $ openstack image set [IMAGE_ID] \
     --property hw_cpu_sockets=2 \
     --property hw_cpu_cores=4

To constrain instances to a given limit of sockets, cores or threads, use
the ``max_`` variants. To configure an image to have a maximum of two
sockets and a maximum of one thread, run:

.. code-block:: console

   $ openstack image set [IMAGE_ID] \
     --property hw_cpu_max_sockets=2 \
     --property hw_cpu_max_threads=1

The value specified in the flavor is treated as the absolute limit. The
image limits are not permitted to exceed the flavor limits; they can only
be equal to or lower than what the flavor defines. By setting a ``max``
value for sockets, cores, or threads, administrators can prevent users
from configuring topologies that might, for example, incur additional
licensing fees.

For more information about image metadata, refer to the `Image metadata`_
guide.

.. Links
.. _`Scheduling`: https://docs.openstack.org/ocata/config-reference/compute/schedulers.html
.. _`Flavors`: https://docs.openstack.org/admin-guide/compute-flavors.html
.. _`Image metadata`: https://docs.openstack.org/image-guide/image-metadata.html
.. _`discussion`: http://lists.openstack.org/pipermail/openstack-dev/2016-March/090367.html
@ -1,33 +0,0 @@
.. _default_ports:

==========================================
Compute service node firewall requirements
==========================================

Console connections for virtual machines, whether direct or through a
proxy, are received on ports ``5900`` to ``5999``. The firewall on each
Compute service node must allow network traffic on these ports.

This procedure modifies the iptables firewall to allow incoming
connections to the Compute services.

**Configuring the service-node firewall**

#. Log in to the server that hosts the Compute service, as root.

#. Edit the ``/etc/sysconfig/iptables`` file to add an INPUT rule that
   allows TCP traffic on ports ``5900`` to ``5999``. Make sure the new
   rule appears before any INPUT rules that REJECT traffic:

   .. code-block:: console

      -A INPUT -p tcp -m multiport --dports 5900:5999 -j ACCEPT

#. Save the changes to the ``/etc/sysconfig/iptables`` file, and restart
   the ``iptables`` service to pick up the changes:

   .. code-block:: console

      # service iptables restart

#. Repeat this process for each Compute service node. A verification
   sketch follows this procedure.
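
To confirm that the rule is active on a node, you can list the INPUT
chain (an illustrative check; the exact output format varies by
platform):

.. code-block:: console

   # iptables -L INPUT -n | grep 5900
   ACCEPT     tcp  --  0.0.0.0/0   0.0.0.0/0   multiport dports 5900:5999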
@ -1,10 +0,0 @@
.. _section_euca2ools:

=================================
Managing the cloud with euca2ools
=================================

The ``euca2ools`` command-line tool provides a command-line interface to
EC2 API calls. For more information, see the `Official Eucalyptus
Documentation <http://docs.hpcloud.com/eucalyptus/>`_.
@ -1,548 +0,0 @@
.. _compute-flavors:

=======
Flavors
=======

Admin users can use the :command:`openstack flavor` command to customize
and manage flavors. To see information for this command, run:

.. code-block:: console

   $ openstack flavor --help
   Command "flavor" matches:
     flavor create
     flavor delete
     flavor list
     flavor set
     flavor show
     flavor unset

.. note::

   - Configuration rights can be delegated to additional users by
     redefining the access controls for
     ``compute_extension:flavormanage`` in ``/etc/nova/policy.json``
     on the ``nova-api`` server.

   - The Dashboard simulates the ability to modify a flavor by deleting
     an existing flavor and creating a new one with the same name.

Flavors define these elements:

+-------------+---------------------------------------------------------------+
| Element     | Description                                                   |
+=============+===============================================================+
| Name        | A descriptive name. XX.SIZE_NAME is typically not required,   |
|             | though some third party tools may rely on it.                 |
+-------------+---------------------------------------------------------------+
| Memory MB   | Instance memory in megabytes.                                 |
+-------------+---------------------------------------------------------------+
| Disk        | Virtual root disk size in gigabytes. This is an ephemeral     |
|             | disk that the base image is copied into. When booting from    |
|             | a persistent volume it is not used. The "0" size is a         |
|             | special case which uses the native base image size as the     |
|             | size of the ephemeral root volume. However, in this case      |
|             | the filter scheduler cannot select the compute host based     |
|             | on the virtual image size. Therefore 0 should only be used    |
|             | for volume booted instances or for testing purposes.          |
+-------------+---------------------------------------------------------------+
| Ephemeral   | Specifies the size of a secondary ephemeral data disk. This   |
|             | is an empty, unformatted disk and exists only for the life    |
|             | of the instance. Default value is ``0``.                      |
+-------------+---------------------------------------------------------------+
| Swap        | Optional swap space allocation for the instance. Default      |
|             | value is ``0``.                                               |
+-------------+---------------------------------------------------------------+
| VCPUs       | Number of virtual CPUs presented to the instance.             |
+-------------+---------------------------------------------------------------+
| RXTX Factor | Optional property allows created servers to have a            |
|             | different bandwidth cap than that defined in the network      |
|             | they are attached to. This factor is multiplied by the        |
|             | rxtx_base property of the network. Default value is           |
|             | ``1.0``. That is, the same as attached network. This          |
|             | parameter is only available for Xen or NSX based systems.     |
+-------------+---------------------------------------------------------------+
| Is Public   | Boolean value, whether flavor is available to all users or    |
|             | private to the project it was created in. Defaults to         |
|             | ``True``.                                                     |
+-------------+---------------------------------------------------------------+
| Extra Specs | Key and value pairs that define on which compute nodes a      |
|             | flavor can run. These pairs must match corresponding pairs    |
|             | on the compute nodes. Use to implement special resources,     |
|             | such as flavors that run on only compute nodes with GPU       |
|             | hardware.                                                     |
+-------------+---------------------------------------------------------------+

.. note::

   Flavor customization can be limited by the hypervisor in use. For
   example, the libvirt driver enables quotas on CPUs available to a VM,
   disk tuning, bandwidth I/O, watchdog behavior, random number generator
   device control, and instance VIF traffic control.

Is Public
~~~~~~~~~

Flavors can be assigned to particular projects. By default, a flavor is
public and available to all projects. Private flavors are only accessible
to those on the access list and are invisible to other projects. To
create and assign a private flavor to a project, run this command:

.. code-block:: console

   $ openstack flavor create --private p1.medium --id auto --ram 512 --disk 40 --vcpus 4

Extra Specs
~~~~~~~~~~~

CPU limits
   You can configure the CPU limits with control parameters using the
   :command:`openstack` client. For example, to configure the I/O limit,
   use:

   .. code-block:: console

      $ openstack flavor set FLAVOR-NAME \
        --property quota:read_bytes_sec=10240000 \
        --property quota:write_bytes_sec=10240000

   Use these optional parameters to control weight shares, enforcement
   intervals for runtime quotas, and a quota for maximum allowed
   bandwidth:

   - ``cpu_shares``: Specifies the proportional weighted share for the
     domain. If this element is omitted, the service defaults to the
     OS provided defaults. There is no unit for the value; it is a
     relative measure based on the setting of other VMs. For example,
     a VM configured with value 2048 gets twice as much CPU time as a
     VM configured with value 1024.

   - ``cpu_shares_level``: On VMware, specifies the allocation level.
     Can be ``custom``, ``high``, ``normal``, or ``low``. If you choose
     ``custom``, set the number of shares using ``cpu_shares_share``.

   - ``cpu_period``: Specifies the enforcement interval (unit:
     microseconds) for QEMU and LXC hypervisors. Within a period, each
     VCPU of the domain is not allowed to consume more than the quota
     worth of runtime. The value should be in range ``[1000, 1000000]``.
     A period with value 0 means no value.

   - ``cpu_limit``: Specifies the upper limit for VMware machine CPU
     allocation in MHz. This parameter ensures that a machine never
     uses more than the defined amount of CPU time. It can be used to
     enforce a limit on the machine's CPU performance.

   - ``cpu_reservation``: Specifies the guaranteed minimum CPU
     reservation in MHz for VMware. This means that if needed, the
     machine will definitely get allocated the reserved amount of CPU
     cycles.

   - ``cpu_quota``: Specifies the maximum allowed bandwidth (unit:
     microseconds). A domain with a negative-value quota indicates
     that the domain has infinite bandwidth, which means that it is
     not bandwidth controlled. The value should be in range ``[1000,
     18446744073709551]`` or less than 0. A quota with value 0 means no
     value. You can use this feature to ensure that all vCPUs run at
     the same speed. For example:

     .. code-block:: console

        $ openstack flavor set FLAVOR-NAME \
          --property quota:cpu_quota=10000 \
          --property quota:cpu_period=20000

     In this example, an instance of ``FLAVOR-NAME`` can consume at
     most 50% of a physical CPU's computing capability.

Memory limits
   For VMware, you can configure the memory limits with control
   parameters.

   Use these optional parameters to limit the memory allocation,
   guarantee minimum memory reservation, and to specify shares used in
   case of resource contention:

   - ``memory_limit``: Specifies the upper limit for VMware machine
     memory allocation in MB. The utilization of a virtual machine will
     not exceed this limit, even if there are available resources. This
     is typically used to ensure a consistent performance of virtual
     machines independent of available resources.

   - ``memory_reservation``: Specifies the guaranteed minimum memory
     reservation in MB for VMware. This means the specified amount of
     memory will definitely be allocated to the machine.

   - ``memory_shares_level``: On VMware, specifies the allocation
     level. This can be ``custom``, ``high``, ``normal`` or ``low``. If
     you choose ``custom``, set the number of shares using
     ``memory_shares_share``.

   - ``memory_shares_share``: Specifies the number of shares allocated
     in the event that ``custom`` is used. There is no unit for this
     value. It is a relative measure based on the settings for other
     VMs. For example:

     .. code-block:: console

        $ openstack flavor set FLAVOR-NAME \
          --property quota:memory_shares_level=custom \
          --property quota:memory_shares_share=15

Disk I/O limits
   For VMware, you can configure the resource limits for disk with
   control parameters.

   Use these optional parameters to limit the disk utilization,
   guarantee disk allocation, and to specify shares used in case of
   resource contention. This allows the VMware driver to enable disk
   allocations for the running instance.

   - ``disk_io_limit``: Specifies the upper limit for disk utilization
     in I/O per second. The utilization of a virtual machine will not
     exceed this limit, even if there are available resources. The
     default value is -1 which indicates unlimited usage.

   - ``disk_io_reservation``: Specifies the guaranteed minimum disk
     allocation in terms of :term:`IOPS <Input/output Operations Per
     Second (IOPS)>`.

   - ``disk_io_shares_level``: Specifies the allocation level. This can
     be ``custom``, ``high``, ``normal`` or ``low``. If you choose
     ``custom``, set the number of shares using
     ``disk_io_shares_share``.

   - ``disk_io_shares_share``: Specifies the number of shares allocated
     in the event that ``custom`` is used. When there is resource
     contention, this value is used to determine the resource
     allocation.

   The example below sets the ``disk_io_reservation`` to 2000 IOPS.

   .. code-block:: console

      $ openstack flavor set FLAVOR-NAME \
        --property quota:disk_io_reservation=2000

Disk tuning
   Using disk I/O quotas, you can set maximum disk write to 10 MB per
   second for a VM user. For example:

   .. code-block:: console

      $ openstack flavor set FLAVOR-NAME \
        --property quota:disk_write_bytes_sec=10485760

   The disk I/O options are:

   - ``disk_read_bytes_sec``
   - ``disk_read_iops_sec``
   - ``disk_write_bytes_sec``
   - ``disk_write_iops_sec``
   - ``disk_total_bytes_sec``
   - ``disk_total_iops_sec``

Bandwidth I/O
   The vif I/O options are:

   - ``vif_inbound_average``
   - ``vif_inbound_burst``
   - ``vif_inbound_peak``
   - ``vif_outbound_average``
   - ``vif_outbound_burst``
   - ``vif_outbound_peak``

   Incoming and outgoing traffic can be shaped independently. The
   bandwidth element can have at most one inbound and at most one
   outbound child element. If you leave any of these child elements
   out, no :term:`quality of service (QoS)` is applied on that traffic
   direction. So, if you want to shape only the network's incoming
   traffic, use inbound only (and vice versa). Each element has one
   mandatory attribute, ``average``, which specifies the average bit
   rate on the interface being shaped.

   There are also two optional attributes (integer): ``peak``, which
   specifies the maximum rate at which a bridge can send data
   (kilobytes/second), and ``burst``, the amount of data that can be
   burst at peak speed (kilobytes). The rate is shared equally within
   domains connected to the network.

   The example below sets network traffic bandwidth limits for an
   existing flavor as follows:

   - Outbound traffic:

     - average: 262 Mbps (32768 kilobytes/second)

     - peak: 524 Mbps (65536 kilobytes/second)

     - burst: 65536 kilobytes

   - Inbound traffic:

     - average: 262 Mbps (32768 kilobytes/second)

     - peak: 524 Mbps (65536 kilobytes/second)

     - burst: 65536 kilobytes

   .. code-block:: console

      $ openstack flavor set FLAVOR-NAME \
        --property quota:vif_outbound_average=32768 \
        --property quota:vif_outbound_peak=65536 \
        --property quota:vif_outbound_burst=65536 \
        --property quota:vif_inbound_average=32768 \
        --property quota:vif_inbound_peak=65536 \
        --property quota:vif_inbound_burst=65536

   .. note::

      All the speed limit values in the above example are specified in
      kilobytes/second, and the burst values are in kilobytes. Values
      were converted using the `Data rate units on Wikipedia
      <https://en.wikipedia.org/wiki/Data_rate_units>`_ page.

Watchdog behavior
   For the libvirt driver, you can enable and set the behavior of a
   virtual hardware watchdog device for each flavor. Watchdog devices
   keep an eye on the guest server, and carry out the configured action
   if the server hangs. The watchdog uses the i6300esb device (emulating
   a PCI Intel 6300ESB). If ``hw:watchdog_action`` is not specified, the
   watchdog is disabled.

   To set the behavior, use:

   .. code-block:: console

      $ openstack flavor set FLAVOR-NAME --property hw:watchdog_action=ACTION

   Valid ACTION values are:

   - ``disabled``: (default) The device is not attached.
   - ``reset``: Forcefully reset the guest.
   - ``poweroff``: Forcefully power off the guest.
   - ``pause``: Pause the guest.
   - ``none``: Only enable the watchdog; do nothing if the server
     hangs.

   .. note::

      Watchdog behavior set using a specific image's properties will
      override behavior set using flavors.
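
   For instance, to have a hung guest forcefully reset, a concrete
   instance of the template above would be:

   .. code-block:: console

      $ openstack flavor set FLAVOR-NAME --property hw:watchdog_action=reset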

Random-number generator
   If a random-number generator device has been added to the instance
   through its image properties, the device can be enabled and
   configured using:

   .. code-block:: console

      $ openstack flavor set FLAVOR-NAME \
        --property hw_rng:allowed=True \
        --property hw_rng:rate_bytes=RATE-BYTES \
        --property hw_rng:rate_period=RATE-PERIOD

   Where:

   - RATE-BYTES: (integer) Allowed amount of bytes that the guest can
     read from the host's entropy per period.
   - RATE-PERIOD: (integer) Duration of the read period in seconds.
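
   For example, to allow a guest to read at most 2048 bytes of entropy
   per one-second period (illustrative values, not recommendations):

   .. code-block:: console

      $ openstack flavor set FLAVOR-NAME \
        --property hw_rng:allowed=True \
        --property hw_rng:rate_bytes=2048 \
        --property hw_rng:rate_period=1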

CPU topology
   For the libvirt driver, you can define the topology of the processors
   in the virtual machine using properties. The properties with ``max``
   limit the number that can be selected by the user with image
   properties.

   .. code-block:: console

      $ openstack flavor set FLAVOR-NAME \
        --property hw:cpu_sockets=FLAVOR-SOCKETS \
        --property hw:cpu_cores=FLAVOR-CORES \
        --property hw:cpu_threads=FLAVOR-THREADS \
        --property hw:cpu_max_sockets=FLAVOR-SOCKETS \
        --property hw:cpu_max_cores=FLAVOR-CORES \
        --property hw:cpu_max_threads=FLAVOR-THREADS

   Where:

   - FLAVOR-SOCKETS: (integer) The number of sockets for the guest VM.
     By default, this is set to the number of vCPUs requested.
   - FLAVOR-CORES: (integer) The number of cores per socket for the
     guest VM. By default, this is set to ``1``.
   - FLAVOR-THREADS: (integer) The number of threads per core for the
     guest VM. By default, this is set to ``1``.

CPU pinning policy
   For the libvirt driver, you can pin the virtual CPUs (vCPUs) of
   instances to the host's physical CPU cores (pCPUs) using properties.
   You can further refine this by stating how hardware CPU threads in a
   simultaneous multithreading-based (SMT) architecture should be used.
   These configurations will result in improved per-instance determinism
   and performance.

   .. note::

      SMT-based architectures include Intel processors with
      Hyper-Threading technology. In these architectures, processor
      cores share a number of components with one or more other cores.
      Cores in such architectures are commonly referred to as hardware
      threads, while the cores that a given core shares components with
      are known as thread siblings.

   .. note::

      Host aggregates should be used to separate these pinned instances
      from unpinned instances as the latter will not respect the
      resourcing requirements of the former.

   .. code-block:: console

      $ openstack flavor set FLAVOR-NAME \
        --property hw:cpu_policy=CPU-POLICY \
        --property hw:cpu_thread_policy=CPU-THREAD-POLICY

   Valid CPU-POLICY values are:

   - ``shared``: (default) The guest vCPUs will be allowed to freely
     float across host pCPUs, albeit potentially constrained by NUMA
     policy.
   - ``dedicated``: The guest vCPUs will be strictly pinned to a set of
     host pCPUs. In the absence of an explicit vCPU topology request,
     the drivers typically expose all vCPUs as sockets with one core
     and one thread. When strict CPU pinning is in effect the guest CPU
     topology will be set up to match the topology of the CPUs to which
     it is pinned. This option implies an overcommit ratio of 1.0. For
     example, if a two vCPU guest is pinned to a single host core with
     two threads, then the guest will get a topology of one socket, one
     core, two threads.

   Valid CPU-THREAD-POLICY values are:

   - ``prefer``: (default) The host may or may not have an SMT
     architecture. Where an SMT architecture is present, thread
     siblings are preferred.
   - ``isolate``: The host must not have an SMT architecture or must
     emulate a non-SMT architecture. If the host does not have an SMT
     architecture, each vCPU is placed on a different core as expected.
     If the host does have an SMT architecture - that is, one or more
     cores have thread siblings - then each vCPU is placed on a
     different physical core. No vCPUs from other guests are placed on
     the same core. All but one thread sibling on each utilized core is
     therefore guaranteed to be unusable.
   - ``require``: The host must have an SMT architecture. Each vCPU is
     allocated on thread siblings. If the host does not have an SMT
     architecture, then it is not used. If the host has an SMT
     architecture, but not enough cores with free thread siblings are
     available, then scheduling fails.

   .. note::

      The ``hw:cpu_thread_policy`` option is only valid if
      ``hw:cpu_policy`` is set to ``dedicated``.

NUMA topology
   For the libvirt driver, you can define the host NUMA placement for
   the instance vCPU threads as well as the allocation of instance vCPUs
   and memory from the host NUMA nodes. For flavors whose memory and
   vCPU allocations are larger than the size of NUMA nodes in the
   compute hosts, the definition of a NUMA topology allows hosts to
   better utilize NUMA and improve performance of the instance OS.

   .. code-block:: console

      $ openstack flavor set FLAVOR-NAME \
        --property hw:numa_nodes=FLAVOR-NODES \
        --property hw:numa_cpus.N=FLAVOR-CORES \
        --property hw:numa_mem.N=FLAVOR-MEMORY

   Where:

   - FLAVOR-NODES: (integer) The number of host NUMA nodes to restrict
     execution of instance vCPU threads to. If not specified, the vCPU
     threads can run on any number of the host NUMA nodes available.
   - N: (integer) The instance NUMA node to apply a given CPU or memory
     configuration to, where N is in the range ``0`` to
     ``FLAVOR-NODES - 1``.
   - FLAVOR-CORES: (comma-separated list of integers) A list of
     instance vCPUs to map to instance NUMA node N. If not specified,
     vCPUs are evenly divided among available NUMA nodes.
   - FLAVOR-MEMORY: (integer) The number of MB of instance memory to
     map to instance NUMA node N. If not specified, memory is evenly
     divided among available NUMA nodes.

   .. note::

      ``hw:numa_cpus.N`` and ``hw:numa_mem.N`` are only valid if
      ``hw:numa_nodes`` is set. Additionally, they are only required if
      the instance's NUMA nodes have an asymmetrical allocation of CPUs
      and RAM (important for some NFV workloads).

   .. note::

      The ``N`` parameter is an index of *guest* NUMA nodes and may not
      correspond to *host* NUMA nodes. For example, on a platform with
      two NUMA nodes, the scheduler may opt to place guest NUMA node 0,
      as referenced in ``hw:numa_mem.0``, on host NUMA node 1 and vice
      versa. Similarly, the integers used for ``FLAVOR-CORES`` are
      indexes of *guest* vCPUs and may not correspond to *host* CPUs.
      As such, this feature cannot be used to constrain instances to
      specific host CPUs or NUMA nodes.

   .. warning::

      If the combined values of ``hw:numa_cpus.N`` or ``hw:numa_mem.N``
      are greater than the available number of CPUs or memory
      respectively, an exception is raised.

Large pages allocation
   You can configure the size of large pages used to back the VMs.

   .. code-block:: console

      $ openstack flavor set FLAVOR-NAME \
        --property hw:mem_page_size=PAGE_SIZE

   Valid ``PAGE_SIZE`` values are:

   - ``small``: (default) The smallest page size is used. Example: 4 KB
     on x86.
   - ``large``: Only use larger page sizes for guest RAM. Example:
     either 2 MB or 1 GB on x86.
   - ``any``: It is left up to the compute driver to decide. In this
     case, the libvirt driver might try to find large pages, but fall
     back to small pages. Other drivers may choose alternate policies
     for ``any``.
   - pagesize: (string) An explicit page size can be set if the
     workload has specific requirements. This value can be an integer
     value for the page size in KB, or can use any standard suffix.
     Example: ``4KB``, ``2MB``, ``2048``, ``1GB``.

   .. note::

      Large pages can be enabled for guest RAM without any regard to
      whether the guest OS will use them or not. If the guest OS
      chooses not to use huge pages, it will merely see small pages as
      before. Conversely, if a guest OS does intend to use huge pages,
      it is very important that the guest RAM be backed by huge pages.
      Otherwise, the guest OS will not be getting the performance
      benefit it is expecting.

PCI passthrough
   You can assign PCI devices to a guest by specifying them in the
   flavor.

   .. code-block:: console

      $ openstack flavor set FLAVOR-NAME \
        --property pci_passthrough:alias=ALIAS:COUNT

   Where:

   - ALIAS: (string) The alias which corresponds to a particular PCI
     device class as configured in the nova configuration file (see
     `nova.conf configuration options
     <https://docs.openstack.org/ocata/config-reference/compute/config-options.html>`_).
   - COUNT: (integer) The number of PCI devices of type ALIAS to be
     assigned to a guest.
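
   For example, assuming an alias named ``a1`` has been defined in the
   nova configuration file (the alias name here is purely illustrative),
   the following requests two such devices per instance:

   .. code-block:: console

      $ openstack flavor set FLAVOR-NAME \
        --property pci_passthrough:alias=a1:2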

Secure Boot
   When your Compute services use the Hyper-V hypervisor, you can enable
   secure boot for Windows and Linux instances.

   .. code-block:: console

      $ openstack flavor set FLAVOR-NAME \
        --property os:secure_boot=SECURE_BOOT_OPTION

   Valid ``SECURE_BOOT_OPTION`` values are:

   - ``required``: Enable Secure Boot for instances running with this
     flavor.
   - ``disabled`` or ``optional``: (default) Disable Secure Boot for
     instances running with this flavor.
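
   For example, to require Secure Boot for all instances of a flavor, a
   concrete instance of the template above would be:

   .. code-block:: console

      $ openstack flavor set FLAVOR-NAME \
        --property os:secure_boot=required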
@ -1,241 +0,0 @@
.. _compute-huge-pages:

==========
Huge pages
==========

The huge page feature in OpenStack provides important performance
improvements for applications that are highly memory IO-bound.

.. note::

   Huge pages may also be referred to as hugepages or large pages,
   depending on the source. These terms are synonyms.

Pages, the TLB and huge pages
-----------------------------

Pages
   Physical memory is segmented into a series of contiguous regions
   called pages. Each page contains a number of bytes, referred to as
   the page size. The system retrieves memory by accessing entire pages,
   rather than byte by byte.

Translation Lookaside Buffer (TLB)
   A TLB is used to map the virtual addresses of pages to the physical
   addresses in actual memory. The TLB is a cache and is not limitless,
   storing only the most recent or frequently accessed pages. During
   normal operation, processes will sometimes attempt to retrieve pages
   that are not stored in the cache. This is known as a TLB miss and
   results in a delay as the processor iterates through the pages
   themselves to find the missing address mapping.

Huge Pages
   The standard page size in x86 systems is 4 kB. This is optimal for
   general purpose computing but larger page sizes - 2 MB and 1 GB - are
   also available. These larger page sizes are known as huge pages. Huge
   pages result in less efficient memory usage as a process will not
   generally use all memory available in each page. However, use of huge
   pages will result in fewer overall pages and a reduced risk of TLB
   misses. For processes that have significant memory requirements or
   are memory intensive, the benefits of huge pages frequently outweigh
   the drawbacks.

Persistent Huge Pages
   On Linux hosts, persistent huge pages are huge pages that are
   reserved upfront. HugeTLB provides the mechanism for this upfront
   configuration of huge pages and allows for the allocation of varying
   quantities of different huge page sizes. Allocation can be made at
   boot time or run time. Refer to the `Linux hugetlbfs guide`_ for more
   information.

Transparent Huge Pages (THP)
   On Linux hosts, transparent huge pages are huge pages that are
   automatically provisioned based on process requests. Transparent huge
   pages are provisioned on a best effort basis, attempting to provision
   2 MB huge pages if available but falling back to 4 kB small pages if
   not. However, no upfront configuration is necessary. Refer to the
   `Linux THP guide`_ for more information.

Enabling huge pages on the host
-------------------------------

Persistent huge pages are required owing to their guaranteed
availability. However, persistent huge pages are not enabled by default
in most environments. The steps for enabling huge pages differ from
platform to platform and only the steps for Linux hosts are described
here. On Linux hosts, the number of persistent huge pages on the host can
be queried by checking ``/proc/meminfo``:

.. code-block:: console

   $ grep Huge /proc/meminfo
   AnonHugePages:         0 kB
   ShmemHugePages:        0 kB
   HugePages_Total:       0
   HugePages_Free:        0
   HugePages_Rsvd:        0
   HugePages_Surp:        0
   Hugepagesize:       2048 kB

In this instance, there are 0 persistent huge pages (``HugePages_Total``)
and 0 transparent huge pages (``AnonHugePages``) allocated. Huge pages
can be allocated at boot time or run time. Huge pages require a
contiguous area of memory - memory that gets increasingly fragmented the
longer a host is running. Identifying contiguous areas of memory is an
issue for all huge page sizes, but it is particularly problematic for
larger huge page sizes such as 1 GB huge pages. Allocating huge pages at
boot time will ensure the correct number of huge pages is always
available, while allocating them at run time can fail if memory has
become too fragmented.

To allocate huge pages at boot time, the kernel boot parameters must be
extended to include some huge page-specific parameters. This can be
achieved by modifying ``/etc/default/grub`` and appending the
``hugepagesz``, ``hugepages``, and ``transparent_hugepage=never``
arguments to ``GRUB_CMDLINE_LINUX``. To allocate, for example, 2048
persistent 2 MB huge pages at boot time, run:

.. code-block:: console

   # echo 'GRUB_CMDLINE_LINUX="$GRUB_CMDLINE_LINUX hugepagesz=2M hugepages=2048 transparent_hugepage=never"' >> /etc/default/grub
   $ grep GRUB_CMDLINE_LINUX /etc/default/grub
   GRUB_CMDLINE_LINUX="..."
   GRUB_CMDLINE_LINUX="$GRUB_CMDLINE_LINUX hugepagesz=2M hugepages=2048 transparent_hugepage=never"

.. important::

   Persistent huge pages are not usable by standard host OS processes.
   Ensure enough free, non-huge page memory is reserved for these
   processes.

Reboot the host, then validate that huge pages are now available:

.. code-block:: console

   $ grep "Huge" /proc/meminfo
   AnonHugePages:         0 kB
   ShmemHugePages:        0 kB
   HugePages_Total:    2048
   HugePages_Free:     2048
   HugePages_Rsvd:        0
   HugePages_Surp:        0
   Hugepagesize:       2048 kB

There are now 2048 2 MB huge pages totalling 4 GB of huge pages. These
huge pages must be mounted. On most platforms, this happens
automatically. To verify that the huge pages are mounted, run:

.. code-block:: console

   # mount | grep huge
   hugetlbfs on /dev/hugepages type hugetlbfs (rw)

In this instance, the huge pages are mounted at ``/dev/hugepages``. This
mount point varies from platform to platform. If the above command did
not return anything, the huge pages must be mounted manually. To mount
the huge pages at ``/dev/hugepages``, run:

.. code-block:: console

   # mkdir -p /dev/hugepages
   # mount -t hugetlbfs hugetlbfs /dev/hugepages

There are many more ways to configure huge pages, including allocating
huge pages at run time, specifying varying allocations for different huge
page sizes, or allocating huge pages from memory affinitized to different
NUMA nodes. For more information on configuring huge pages on Linux
hosts, refer to the `Linux hugetlbfs guide`_.
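
As an illustrative sketch of run-time allocation, on many Linux hosts the
number of 2 MB huge pages can be changed through sysfs (subject to the
memory fragmentation caveat noted above):

.. code-block:: console

   # echo 2048 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages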

Customizing instance huge pages allocations
-------------------------------------------

.. important::

   The functionality described below is currently only supported by the
   libvirt/KVM driver.

.. important::

   For performance reasons, configuring huge pages for an instance will
   implicitly result in a NUMA topology being configured for the instance.
   Configuring a NUMA topology for an instance requires enablement of
   ``NUMATopologyFilter``. Refer to :doc:`compute-cpu-topologies` for more
   information.

By default, an instance does not use huge pages for its underlying memory.
However, huge pages can bring important or required performance improvements
for some workloads. Huge pages must be requested explicitly through the use of
flavor extra specs or image metadata. To request that an instance use huge
pages, run:

.. code-block:: console

   $ openstack flavor set m1.large --property hw:mem_page_size=large

Different platforms offer different huge page sizes. For example, x86-based
platforms offer 2 MB and 1 GB huge page sizes. Specific huge page sizes can
also be requested, with or without a unit suffix. The unit suffix must be one
of: Kb(it), Kib(it), Mb(it), Mib(it), Gb(it), Gib(it), Tb(it), Tib(it), KB,
KiB, MB, MiB, GB, GiB, TB, TiB. Where a unit suffix is not provided, kilobytes
are assumed. To request that an instance use 2 MB huge pages, run one of:

.. code-block:: console

   $ openstack flavor set m1.large --property hw:mem_page_size=2Mb

.. code-block:: console

   $ openstack flavor set m1.large --property hw:mem_page_size=2048

Enabling huge pages for an instance can have negative consequences for other
instances by consuming limited huge page resources. To explicitly request that
an instance use small pages, run:

.. code-block:: console

   $ openstack flavor set m1.large --property hw:mem_page_size=small

.. note::

   Explicitly requesting any page size will still result in a NUMA topology
   being applied to the instance, as described earlier in this document.

Finally, to leave the decision of huge or small pages to the compute driver,
run:

.. code-block:: console

   $ openstack flavor set m1.large --property hw:mem_page_size=any

For more information about the syntax for ``hw:mem_page_size``, refer to the
`Flavors`_ guide.
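
To confirm that the extra spec was applied, the flavor's properties can be
inspected (a sketch; output formatting varies by client version):

.. code-block:: console

   $ openstack flavor show m1.large -c properties
   +------------+--------------------------+
   | Field      | Value                    |
   +------------+--------------------------+
   | properties | hw:mem_page_size='large' |
   +------------+--------------------------+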

Applications are frequently packaged as images. For applications that require
the I/O performance improvements that huge pages provide, configure image
metadata to ensure instances always request the specific page size regardless
of flavor. To configure an image to use 1 GB huge pages, run:

.. code-block:: console

   $ openstack image set [IMAGE_ID] --property hw_mem_page_size=1GB

If the flavor specifies a numerical page size or a page size of ``small``, the
image is not allowed to specify a page size; if it does, an exception is
raised. If the flavor specifies a page size of ``any`` or ``large``, any page
size specified in the image is used. By setting a ``small`` page size in the
flavor, administrators can prevent users from requesting huge pages and
impacting resource utilization. To configure this page size, run:

.. code-block:: console

   $ openstack flavor set m1.large --property hw:mem_page_size=small

For more information about image metadata, refer to the `Image metadata`_
guide.

.. Links
.. _`Linux THP guide`: https://www.kernel.org/doc/Documentation/vm/transhuge.txt
.. _`Linux hugetlbfs guide`: https://www.kernel.org/doc/Documentation/vm/hugetlbpage.txt
.. _`Flavors`: https://docs.openstack.org/admin-guide/compute-flavors.html
.. _`Image metadata`: https://docs.openstack.org/image-guide/image-metadata.html

@ -1,326 +0,0 @@

.. _section_live-migration-usage:

======================
Live-migrate instances
======================

Live-migrating an instance means moving its virtual machine to a different
OpenStack Compute server while the instance continues running. Before starting
a live-migration, review the chapter
:ref:`section_configuring-compute-migrations`. It covers the configuration
settings required to enable live-migration, but also reasons for migrations
and non-live-migration options.

The instructions below cover shared-storage and volume-backed migration. To
block-migrate instances, add the command-line option ``--block-migrate`` to
the :command:`nova live-migration` command, and ``--block-migration`` to the
:command:`openstack server migrate` command.

.. _section-manual-selection-of-dest:

Manual selection of the destination host
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

#. Obtain the ID of the instance you want to migrate:

   .. code-block:: console

      $ openstack server list

      +--------------------------------------+------+--------+-----------------+------------+
      | ID                                   | Name | Status | Networks        | Image Name |
      +--------------------------------------+------+--------+-----------------+------------+
      | d1df1b5a-70c4-4fed-98b7-423362f2c47c | vm1  | ACTIVE | private=a.b.c.d | ...        |
      | d693db9e-a7cf-45ef-a7c9-b3ecb5f22645 | vm2  | ACTIVE | private=e.f.g.h | ...        |
      +--------------------------------------+------+--------+-----------------+------------+

#. Determine on which host the instance is currently running. In this example,
   ``vm1`` is running on ``HostB``:

   .. code-block:: console

      $ openstack server show d1df1b5a-70c4-4fed-98b7-423362f2c47c

      +----------------------+--------------------------------------+
      | Field                | Value                                |
      +----------------------+--------------------------------------+
      | ...                  | ...                                  |
      | OS-EXT-SRV-ATTR:host | HostB                                |
      | ...                  | ...                                  |
      | addresses            | a.b.c.d                              |
      | flavor               | m1.tiny                              |
      | id                   | d1df1b5a-70c4-4fed-98b7-423362f2c47c |
      | name                 | vm1                                  |
      | status               | ACTIVE                               |
      | ...                  | ...                                  |
      +----------------------+--------------------------------------+

#. Select the compute node the instance will be migrated to. In this example,
   we will migrate the instance to ``HostC``, because ``nova-compute`` is
   running on it:

   .. code-block:: console

      $ openstack compute service list

      +----+------------------+-------+----------+---------+-------+----------------------------+
      | ID | Binary           | Host  | Zone     | Status  | State | Updated At                 |
      +----+------------------+-------+----------+---------+-------+----------------------------+
      | 3  | nova-conductor   | HostA | internal | enabled | up    | 2017-02-18T09:42:29.000000 |
      | 4  | nova-scheduler   | HostA | internal | enabled | up    | 2017-02-18T09:42:26.000000 |
      | 5  | nova-consoleauth | HostA | internal | enabled | up    | 2017-02-18T09:42:29.000000 |
      | 6  | nova-compute     | HostB | nova     | enabled | up    | 2017-02-18T09:42:29.000000 |
      | 7  | nova-compute     | HostC | nova     | enabled | up    | 2017-02-18T09:42:29.000000 |
      +----+------------------+-------+----------+---------+-------+----------------------------+

#. Check that ``HostC`` has enough resources for migration:

   .. code-block:: console

      $ openstack host show HostC

      +-------+------------+-----+-----------+---------+
      | Host  | Project    | CPU | Memory MB | Disk GB |
      +-------+------------+-----+-----------+---------+
      | HostC | (total)    | 16  | 32232     | 878     |
      | HostC | (used_now) | 22  | 21284     | 422     |
      | HostC | (used_max) | 22  | 21284     | 422     |
      | HostC | p1         | 22  | 21284     | 422     |
      | HostC | p2         | 22  | 21284     | 422     |
      +-------+------------+-----+-----------+---------+

   - ``cpu``: Number of CPUs

   - ``memory_mb``: Total amount of memory, in MB

   - ``disk_gb``: Total amount of space for NOVA-INST-DIR/instances, in GB

   In this table, the first row shows the total amount of resources available
   on the physical server. The second row shows the currently used resources.
   The third row shows the maximum used resources. The fourth row and below
   show the resources available for each project.

#. Migrate the instance:

   .. code-block:: console

      $ openstack server migrate d1df1b5a-70c4-4fed-98b7-423362f2c47c --live HostC

#. Confirm that the instance has been migrated successfully:

   .. code-block:: console

      $ openstack server show d1df1b5a-70c4-4fed-98b7-423362f2c47c

      +----------------------+--------------------------------------+
      | Field                | Value                                |
      +----------------------+--------------------------------------+
      | ...                  | ...                                  |
      | OS-EXT-SRV-ATTR:host | HostC                                |
      | ...                  | ...                                  |
      +----------------------+--------------------------------------+

   If the instance is still running on ``HostB``, the migration failed. The
   ``nova-scheduler`` and ``nova-conductor`` log files on the controller and
   the ``nova-compute`` log file on the source compute host can help pinpoint
   the problem.

.. _auto_selection_of_dest:

Automatic selection of the destination host
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To leave the selection of the destination host to the Compute service, use the
nova command-line client.

#. Obtain the instance ID as shown in step 1 of the section
   :ref:`section-manual-selection-of-dest`.

#. Leave out the host selection steps 2, 3, and 4.

#. Migrate the instance:

   .. code-block:: console

      $ nova live-migration d1df1b5a-70c4-4fed-98b7-423362f2c47c

Monitoring the migration
~~~~~~~~~~~~~~~~~~~~~~~~

#. Confirm that the instance is migrating:

   .. code-block:: console

      $ openstack server show d1df1b5a-70c4-4fed-98b7-423362f2c47c

      +----------------------+--------------------------------------+
      | Field                | Value                                |
      +----------------------+--------------------------------------+
      | ...                  | ...                                  |
      | status               | MIGRATING                            |
      | ...                  | ...                                  |
      +----------------------+--------------------------------------+

#. Check the progress.

   Use the nova command-line client for nova's migration monitoring feature.
   First, obtain the migration ID:

   .. code-block:: console

      $ nova server-migration-list d1df1b5a-70c4-4fed-98b7-423362f2c47c
      +----+-------------+-----------+ (...)
      | Id | Source Node | Dest Node | (...)
      +----+-------------+-----------+ (...)
      | 2  | -           | -         | (...)
      +----+-------------+-----------+ (...)

   For readability, most output columns were removed. Only the first column,
   **Id**, is relevant. In this example, the migration ID is 2. Use this to
   get the migration status.

   .. code-block:: console

      $ nova server-migration-show d1df1b5a-70c4-4fed-98b7-423362f2c47c 2
      +------------------------+--------------------------------------+
      | Property               | Value                                |
      +------------------------+--------------------------------------+
      | created_at             | 2017-03-08T02:53:06.000000           |
      | dest_compute           | controller                           |
      | dest_host              | -                                    |
      | dest_node              | -                                    |
      | disk_processed_bytes   | 0                                    |
      | disk_remaining_bytes   | 0                                    |
      | disk_total_bytes       | 0                                    |
      | id                     | 2                                    |
      | memory_processed_bytes | 65502513                             |
      | memory_remaining_bytes | 786427904                            |
      | memory_total_bytes     | 1091379200                           |
      | server_uuid            | d1df1b5a-70c4-4fed-98b7-423362f2c47c |
      | source_compute         | compute2                             |
      | source_node            | -                                    |
      | status                 | running                              |
      | updated_at             | 2017-03-08T02:53:47.000000           |
      +------------------------+--------------------------------------+

   The output shows that the migration is running. Progress is measured by the
   number of memory bytes that remain to be copied. If this number is not
   decreasing over time, the migration may be unable to complete, and it may
   be aborted by the Compute service.

   .. note::

      The command reports that no disk bytes are processed, even in the event
      of block migration.

What to do when the migration times out
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

During the migration process, the instance may write to a memory page after
that page has been copied to the destination. When that happens, the same page
has to be copied again. The instance may write to memory pages faster than
they can be copied, so that the migration cannot complete. The Compute service
cancels the migration when the configured timeout,
``live_migration_completion_timeout``, is reached.

The following remarks assume the KVM/Libvirt hypervisor.

How to know that the migration timed out
----------------------------------------

To determine that the migration timed out, inspect the ``nova-compute`` log
file on the source host. The following log entry shows that the migration
timed out:

.. code-block:: console

   # grep WARNING.*d1df1b5a-70c4-4fed-98b7-423362f2c47c /var/log/nova/nova-compute.log
   ...
   WARNING nova.virt.libvirt.migration [req-...] [instance: ...]
   live migration not completed after 1800 sec

The Compute service also cancels migrations when the memory copy seems to make
no progress. This feature is disabled by default in Ocata, but it can be
enabled using the configuration parameter ``live_migration_progress_timeout``.
Should this be the case, you may find the following message in the log:

.. code-block:: console

   WARNING nova.virt.libvirt.migration [req-...] [instance: ...]
   live migration stuck for 150 sec

Addressing migration timeouts
-----------------------------

To stop the migration from putting load on infrastructure resources like
network and disks, you may opt to cancel it manually:

.. code-block:: console

   $ nova live-migration-abort INSTANCE_ID MIGRATION_ID

To make live-migration succeed, you have several options:

- **Manually force-complete the migration**

  .. code-block:: console

     $ nova live-migration-force-complete INSTANCE_ID MIGRATION_ID

  The instance is paused until the memory copy completes.

  .. caution::

     Since the pause impacts timekeeping on the instance and not all
     applications tolerate incorrect time settings, use this approach with
     caution.

- **Enable auto-convergence**

  Auto-convergence is a Libvirt feature. Libvirt detects that the migration is
  unlikely to complete and slows down the instance's CPU until the memory copy
  process is faster than the instance's memory writes.

  To enable auto-convergence, set
  ``live_migration_permit_auto_convergence=true`` in ``nova.conf`` and restart
  ``nova-compute``. Do this on all compute hosts.
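
  A minimal sketch of the corresponding ``nova.conf`` snippet (assuming the
  option lives in the ``[libvirt]`` section, as in current releases):

  .. code-block:: ini

     [libvirt]
     # Allow libvirt to throttle the instance's CPU when the copy stalls
     live_migration_permit_auto_convergence = true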

  .. caution::

     One possible downside of auto-convergence is the slowing down of the
     instance.

- **Enable post-copy**

  This is a Libvirt feature. Libvirt detects that the migration does not
  progress and responds by activating the virtual machine on the destination
  host before all its memory has been copied. Access to missing memory pages
  results in page faults that are satisfied from the source host.

  To enable post-copy, set ``live_migration_permit_post_copy=true`` in
  ``nova.conf`` and restart ``nova-compute``. Do this on all compute hosts.
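
  A minimal sketch of the corresponding ``nova.conf`` snippet (assuming the
  option lives in the ``[libvirt]`` section, as in current releases):

  .. code-block:: ini

     [libvirt]
     # Switch the guest to the destination before all memory is copied
     live_migration_permit_post_copy = true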

  When post-copy is enabled, manual force-completion does not pause the
  instance but switches to the post-copy process.

  .. caution::

     Possible downsides:

     - When the network connection between source and destination is
       interrupted, page faults cannot be resolved anymore, and the virtual
       machine is rebooted.

     - Post-copy may lead to an increased page fault rate during migration,
       which can slow the instance down.

@ -1,236 +0,0 @@

.. _section_manage-logs:

=======
Logging
=======

Logging module
~~~~~~~~~~~~~~

Logging behavior can be changed by creating a configuration file. To specify
the configuration file, add this line to the ``/etc/nova/nova.conf`` file:

.. code-block:: ini

   log-config=/etc/nova/logging.conf

To change the logging level, add ``DEBUG``, ``INFO``, ``WARNING``, or
``ERROR`` as a parameter.

The logging configuration file is an INI-style configuration file, which must
contain a section called ``logger_nova``. This controls the behavior of the
logging facility in the ``nova-*`` services. For example:

.. code-block:: ini

   [logger_nova]
   level = INFO
   handlers = stderr
   qualname = nova

This example sets the debugging level to ``INFO`` (which is less verbose than
the default ``DEBUG`` setting).

For more about the logging configuration syntax, including the ``handlers``
and ``qualname`` variables, see the
`Python documentation <https://docs.python.org/release/2.7/library/logging.html#configuration-file-format>`__
on logging configuration files.

For an example of the ``logging.conf`` file with various defined handlers, see
the `OpenStack Configuration Reference <https://docs.openstack.org/ocata/config-reference/>`__.

Syslog
~~~~~~

OpenStack Compute services can send logging information to syslog. This is
useful if you want to use rsyslog to forward logs to a remote machine.
Separately configure the Compute service (nova), the Identity service
(keystone), the Image service (glance), and, if you are using it, the Block
Storage service (cinder) to send log messages to syslog. Open these
configuration files:

- ``/etc/nova/nova.conf``

- ``/etc/keystone/keystone.conf``

- ``/etc/glance/glance-api.conf``

- ``/etc/glance/glance-registry.conf``

- ``/etc/cinder/cinder.conf``

In each configuration file, add these lines:

.. code-block:: ini

   debug = False
   use_syslog = True
   syslog_log_facility = LOG_LOCAL0

In addition to enabling syslog, these settings also turn off debugging output
from the log.

.. note::

   Although this example uses the same local facility for each service
   (``LOG_LOCAL0``, which corresponds to syslog facility ``LOCAL0``),
   we recommend that you configure a separate local facility for each
   service, as this provides better isolation and more flexibility. For
   example, you can capture logging information at different severity
   levels for different services. syslog allows you to define up to
   eight local facilities, ``LOCAL0, LOCAL1, ..., LOCAL7``. For more
   information, see the syslog documentation.
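
One possible per-service assignment is sketched below (the option name is the
standard oslo.log setting shown above; the facility choices are illustrative):

.. code-block:: ini

   # /etc/nova/nova.conf
   syslog_log_facility = LOG_LOCAL0

   # /etc/glance/glance-api.conf
   syslog_log_facility = LOG_LOCAL1

   # /etc/cinder/cinder.conf
   syslog_log_facility = LOG_LOCAL2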

Rsyslog
~~~~~~~

rsyslog is useful for setting up a centralized log server across multiple
machines. This section briefly describes the configuration to set up an
rsyslog server. A full treatment of rsyslog is beyond the scope of this book.
This section assumes rsyslog has already been installed on your hosts (it is
installed by default on most Linux distributions).

This example provides a minimal configuration for ``/etc/rsyslog.conf`` on the
log server host, which receives the log files:

.. code-block:: none

   # provides TCP syslog reception
   $ModLoad imtcp
   $InputTCPServerRun 1024

Add a filter rule to ``/etc/rsyslog.conf`` which looks for a host name. This
example uses COMPUTE_01 as the compute host name:

.. code-block:: none

   :hostname, isequal, "COMPUTE_01" /mnt/rsyslog/logs/compute-01.log

On each compute host, create a file named ``/etc/rsyslog.d/60-nova.conf``,
with the following content:

.. code-block:: none

   # prevent debug from dnsmasq with the daemon.none parameter
   *.*;auth,authpriv.none,daemon.none,local0.none -/var/log/syslog
   # Specify a log level of ERROR
   local0.error @@172.20.1.43:1024

Once you have created the file, restart the ``rsyslog`` service. Error-level
log messages on the compute hosts should now be sent to the log server.
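
To check the chain end to end, a test message can be emitted on a compute host
with the standard :command:`logger` utility (a sketch; the log file path comes
from the filter rule above, and the second command runs on the log server):

.. code-block:: console

   # logger -p local0.err "rsyslog forwarding test"
   # tail /mnt/rsyslog/logs/compute-01.log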

Serial console
~~~~~~~~~~~~~~

The serial console provides a way to examine kernel output and other system
messages during troubleshooting if the instance lacks network connectivity.

Read-only access from the server serial console is possible using the
``os-GetSerialOutput`` server action. Most cloud images enable this feature by
default. For more information, see :ref:`compute-common-errors-and-fixes`.

OpenStack Juno and later supports read-write access to the serial console
using the ``os-GetSerialConsole`` server action. This feature also requires a
websocket client to access the serial console.

**Configuring read-write serial console access**

#. On a compute node, edit the ``/etc/nova/nova.conf`` file:

   In the ``[serial_console]`` section, enable the serial console:

   .. code-block:: ini

      [serial_console]
      # ...
      enabled = true

#. In the ``[serial_console]`` section, configure the serial console proxy
   similar to graphical console proxies:

   .. code-block:: ini

      [serial_console]
      # ...
      base_url = ws://controller:6083/
      listen = 0.0.0.0
      proxyclient_address = MANAGEMENT_INTERFACE_IP_ADDRESS

   The ``base_url`` option specifies the base URL that clients receive from
   the API upon requesting a serial console. Typically, this refers to the
   host name of the controller node.

   The ``listen`` option specifies the network interface ``nova-compute``
   should listen on for virtual console connections. Typically, 0.0.0.0 will
   enable listening on all interfaces.

   The ``proxyclient_address`` option specifies which network interface the
   proxy should connect to. Typically, this refers to the IP address of the
   management interface.

   When you enable read-write serial console access, Compute will add serial
   console information to the Libvirt XML file for the instance. For example:

   .. code-block:: xml

      <console type='tcp'>
        <source mode='bind' host='127.0.0.1' service='10000'/>
        <protocol type='raw'/>
        <target type='serial' port='0'/>
        <alias name='serial0'/>
      </console>

**Accessing the serial console on an instance**

#. Use the :command:`nova get-serial-proxy` command to retrieve the websocket
   URL for the serial console on the instance:

   .. code-block:: console

      $ nova get-serial-proxy INSTANCE_NAME

   .. list-table::
      :header-rows: 0
      :widths: 9 65

      * - Type
        - Url
      * - serial
        - ws://127.0.0.1:6083/?token=18510769-71ad-4e5a-8348-4218b5613b3d

   Alternatively, use the API directly:

   .. code-block:: console

      $ curl -i 'http://<controller>:8774/v2.1/<tenant_uuid>/servers/<instance_uuid>/action' \
        -X POST \
        -H "Accept: application/json" \
        -H "Content-Type: application/json" \
        -H "X-Auth-Project-Id: <project_id>" \
        -H "X-Auth-Token: <auth_token>" \
        -d '{"os-getSerialConsole": {"type": "serial"}}'

#. Use the Python ``websocket`` module with the URL to obtain ``.send``,
   ``.recv``, and ``.fileno`` methods for serial console access. For example:

   .. code-block:: python

      import websocket
      ws = websocket.create_connection(
          'ws://127.0.0.1:6083/?token=18510769-71ad-4e5a-8348-4218b5613b3d',
          subprotocols=['binary', 'base64'])
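
   As a follow-up sketch, the connection can then be exercised like any other
   websocket (the token URL is the one returned above; what the guest prints
   depends on its console configuration):

   .. code-block:: python

      # Send a newline to wake the console, then read the guest's output
      ws.send('\r\n')
      print(ws.recv())
      ws.close()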

   Alternatively, use a `Python websocket client <https://github.com/larsks/novaconsole/>`__.

.. note::

   When you enable the serial console, typical instance logging using the
   :command:`nova console-log` command is disabled. Kernel output and other
   system messages will not be visible unless you are actively viewing the
   serial console.

@ -1,69 +0,0 @@

.. _section_manage-the-cloud:

================
Manage the cloud
================

.. toctree::

   compute-euca2ools.rst
   common/nova-show-usage-statistics-for-hosts-instances.rst

System administrators can use the :command:`openstack` and
:command:`euca2ools` commands to manage their clouds.

The ``openstack`` client and ``euca2ools`` can be used by all users, though
specific commands might be restricted by the Identity service.

**Managing the cloud with the openstack client**

#. The ``python-openstackclient`` package provides an ``openstack`` shell that
   enables Compute API interactions from the command line. Install the client,
   then provide your user name and password (which can be set as environment
   variables for convenience) to administer the cloud from the command line.

   To install python-openstackclient, follow the instructions in the
   `OpenStack User Guide
   <https://docs.openstack.org/user-guide/common/cli-install-openstack-command-line-clients.html>`_.

#. Confirm the installation was successful:

   .. code-block:: console

      $ openstack help
      usage: openstack [--version] [-v | -q] [--log-file LOG_FILE] [-h] [--debug]
                       [--os-cloud <cloud-config-name>]
                       [--os-region-name <auth-region-name>]
                       [--os-cacert <ca-bundle-file>] [--verify | --insecure]
                       [--os-default-domain <auth-domain>]
                       ...

   Running :command:`openstack help` returns a list of ``openstack`` commands
   and parameters. To get help for a subcommand, run:

   .. code-block:: console

      $ openstack help SUBCOMMAND

   For a complete list of ``openstack`` commands and parameters, see the
   `OpenStack Command-Line Reference
   <https://docs.openstack.org/cli-reference/openstack.html>`__.

#. Set the required parameters as environment variables to make running
   commands easier. For example, you can add ``--os-username`` as an
   ``openstack`` option, or set it as an environment variable. To set the user
   name, password, and project as environment variables, use:

   .. code-block:: console

      $ export OS_USERNAME=joecool
      $ export OS_PASSWORD=coolword
      $ export OS_TENANT_NAME=coolu

#. The Identity service gives you an authentication endpoint, which Compute
   recognizes as ``OS_AUTH_URL``:

   .. code-block:: console

      $ export OS_AUTH_URL=http://hostname:5000/v2.0

@ -1,14 +0,0 @@

.. _section_manage-compute-users:

====================
Manage Compute users
====================

Access to the Euca2ools (ec2) API is controlled by an access key and a secret
key. The user's access key needs to be included in the request, and the
request must be signed with the secret key. Upon receipt of API requests,
Compute verifies the signature and runs commands on behalf of the user.

To begin using Compute, you must create a user with the Identity service.

@ -1,54 +0,0 @@

==============
Manage volumes
==============

Depending on the setup of your cloud provider, it may give you an endpoint to
use to manage volumes, or there may be an extension under the covers. In
either case, you can use the ``openstack`` CLI to manage volumes.

.. list-table:: **openstack volume commands**
   :header-rows: 1

   * - Command
     - Description
   * - server add volume
     - Attach a volume to a server.
   * - volume create
     - Add a new volume.
   * - volume delete
     - Remove or delete a volume.
   * - server remove volume
     - Detach or remove a volume from a server.
   * - volume list
     - List all the volumes.
   * - volume show
     - Show details about a volume.
   * - snapshot create
     - Add a new snapshot.
   * - snapshot delete
     - Remove a snapshot.
   * - snapshot list
     - List all the snapshots.
   * - snapshot show
     - Show details about a snapshot.
   * - volume type create
     - Create a new volume type.
   * - volume type delete
     - Delete a specific volume type.
   * - volume type list
     - Print a list of available volume types.

For example, to list IDs and names of volumes, run:

.. code-block:: console

   $ openstack volume list
   +--------+--------------+-----------+------+-------------+
   | ID     | Display Name | Status    | Size | Attached to |
   +--------+--------------+-----------+------+-------------+
   | 86e6cb | testnfs      | available | 1    |             |
   | e389f7 | demo         | available | 1    |             |
   +--------+--------------+-----------+------+-------------+
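
As a further sketch, attaching one of these volumes to a server combines the
``server add volume`` command from the table with the IDs listed above
(``SERVER_ID`` and ``VOLUME_ID`` are placeholders; the device name is a
request that some hypervisors may override):

.. code-block:: console

   $ openstack server add volume SERVER_ID VOLUME_ID --device /dev/vdb
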
@ -1,336 +0,0 @@

.. _section_nova-compute-node-down:

==================================
Recover from a failed compute node
==================================

If you deploy Compute with a shared file system, you can use several methods
to quickly recover from a node failure. This section discusses manual
recovery.

Evacuate instances
~~~~~~~~~~~~~~~~~~

If a hardware malfunction or other error causes the cloud compute node to
fail, you can use the :command:`nova evacuate` command to evacuate instances.
See the `OpenStack Administrator Guide <https://docs.openstack.org/admin-guide/cli-nova-evacuate.html>`__.

.. _nova-compute-node-down-manual-recovery:

Manual recovery
~~~~~~~~~~~~~~~

To manually recover a failed compute node:

#. Identify the VMs on the affected hosts by using a combination of the
   :command:`openstack server list` and :command:`openstack server show`
   commands or the :command:`euca-describe-instances` command.

   For example, this command displays information about the i-000015b9
   instance that runs on the np-rcc54 node:

   .. code-block:: console

      $ euca-describe-instances
      i-000015b9 at3-ui02 running nectarkey (376, np-rcc54) 0 m1.xxlarge 2012-06-19T00:48:11.000Z 115.146.93.60

#. Query the Compute database for the status of the host. This example
   converts an EC2 API instance ID to an OpenStack ID. If you use the
   :command:`nova` commands, you can substitute the ID directly. This example
   output is truncated:

   .. code-block:: none

      mysql> SELECT * FROM instances WHERE id = CONV('15b9', 16, 10) \G;
      *************************** 1. row ***************************
          created_at: 2012-06-19 00:48:11
          updated_at: 2012-07-03 00:35:11
          deleted_at: NULL
      ...
                  id: 5561
      ...
         power_state: 5
            vm_state: shutoff
      ...
            hostname: at3-ui02
                host: np-rcc54
      ...
                uuid: 3f57699a-e773-4650-a443-b4b37eed5a06
      ...
          task_state: NULL
      ...

   .. note::

      Find the credentials for your database in the ``/etc/nova/nova.conf``
      file.

#. Decide to which compute host to move the affected VM. Run this database
   command to move the VM to that host:

   .. code-block:: mysql

      mysql> UPDATE instances SET host = 'np-rcc46' WHERE uuid = '3f57699a-e773-4650-a443-b4b37eed5a06';

#. If you use a hypervisor that relies on libvirt, such as KVM, update the
   ``libvirt.xml`` file in ``/var/lib/nova/instances/[instance ID]`` with
   these changes:

   - Change the ``DHCPSERVER`` value to the host IP address of the new compute
     host.

   - Update the VNC IP to ``0.0.0.0``.

#. Reboot the VM:

   .. code-block:: console

      $ openstack server reboot 3f57699a-e773-4650-a443-b4b37eed5a06

Typically, the database update and :command:`openstack server reboot` command
recover a VM from a failed host. However, if problems persist, try one of
these actions:

* Use :command:`virsh` to recreate the network filter configuration.
* Restart Compute services.
* Update the ``vm_state`` and ``power_state`` fields in the Compute database.

.. _section_nova-uid-mismatch:

Recover from a UID/GID mismatch
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Sometimes when you run Compute with a shared file system or an automated
configuration tool, files on your compute node might use the wrong UID or GID.
This UID or GID mismatch can prevent you from running live migrations or
starting virtual machines.

This procedure runs on ``nova-compute`` hosts, based on the KVM hypervisor:

#. Set the nova UID to the same number in ``/etc/passwd`` on all hosts. For
   example, set the UID to ``112``.

   .. note::

      Choose UIDs or GIDs that are not in use for other users or groups.

#. Set the ``libvirt-qemu`` UID to the same number in the ``/etc/passwd`` file
   on all hosts. For example, set the UID to ``119``.

#. Set the ``nova`` group to the same number in the ``/etc/group`` file on all
   hosts. For example, set the group to ``120``.

#. Set the ``libvirtd`` group to the same number in the ``/etc/group`` file on
   all hosts. For example, set the group to ``119``.

#. Stop the services on the compute node.

#. Change all files that the nova user or group owns. For example:

   .. code-block:: console

      # find / -uid 108 -exec chown nova {} \;
      # note the 108 here is the old nova UID before the change
      # find / -gid 120 -exec chgrp nova {} \;

#. Repeat all steps for the ``libvirt-qemu`` files, if required.

#. Restart the services.

#. To verify that all files use the correct IDs, run the :command:`find`
   command.
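
   For example, a quick spot check that no files remain under the old numeric
   UID (a sketch, assuming the old nova UID was ``108`` as in the example
   above; the command should print nothing once the ownership change is
   complete):

   .. code-block:: console

      # find / -uid 108 -ls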

.. _section_nova-disaster-recovery-process:

Recover cloud after disaster
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This section describes how to manage your cloud after a disaster and back up
persistent storage volumes. Backups are mandatory, even outside of disaster
scenarios.

For a definition of a disaster recovery plan (DRP), see
`https://en.wikipedia.org/wiki/Disaster\_Recovery\_Plan <https://en.wikipedia.org/wiki/Disaster_Recovery_Plan>`_.

A disk crash, network loss, or power failure can affect several components in
your cloud architecture. The worst disaster for a cloud is a power loss. A
power loss affects these components:

- A cloud controller (``nova-api``, ``nova-objectstore``, ``nova-network``)

- A compute node (``nova-compute``)

- A storage area network (SAN) used by OpenStack Block Storage
  (``cinder-volumes``)

Before a power loss:

- Create an active iSCSI session from the SAN to the cloud controller (used
  for the ``cinder-volumes`` LVM's VG).

- Create an active iSCSI session from the cloud controller to the compute node
  (managed by ``cinder-volume``).

- Create an iSCSI session for every volume (so 14 EBS volumes require 14 iSCSI
  sessions).

- Create ``iptables`` or ``ebtables`` rules from the cloud controller to the
  compute node. This allows access from the cloud controller to the running
  instance.

- Save the current state of the database, the current state of the running
  instances, and the attached volumes (mount point, volume ID, volume status,
  etc), at least from the cloud controller to the compute node.

After power resumes and all hardware components restart:

- The iSCSI session from the SAN to the cloud no longer exists.

- The iSCSI session from the cloud controller to the compute node no longer
  exists.

- nova-network reapplies configurations on boot and, as a result, recreates
  the iptables and ebtables from the cloud controller to the compute node.

- Instances stop running.

  Instances are not lost because neither ``destroy`` nor ``terminate`` ran.
  The files for the instances remain on the compute node.

- The database does not update.

**Begin recovery**

.. warning::

   Do not add any steps or change the order of steps in this procedure.

#. Check the current relationship between the volume and its instance, so that
   you can recreate the attachment.

   Use the :command:`openstack volume list` command to get this information.
   Note that the :command:`openstack` client can get volume information from
   OpenStack Block Storage.

#. Update the database to clean the stalled state. Do this for every volume by
   using these queries:

   .. code-block:: mysql

      mysql> use cinder;
      mysql> update volumes set mountpoint=NULL;
      mysql> update volumes set status="available" where status <> "error_deleting";
      mysql> update volumes set attach_status="detached";
      mysql> update volumes set instance_id=0;

   Use the :command:`openstack volume list` command to list all volumes.

#. Restart the instances by using the
   :command:`openstack server reboot INSTANCE` command.

   .. important::

      Some instances completely reboot and become reachable, while some might
      stop at the plymouth stage. This is expected behavior. DO NOT reboot a
      second time.

      Instance state at this stage depends on whether you added an
      ``/etc/fstab`` entry for that volume. Images built with the cloud-init
      package remain in a ``pending`` state, while others skip the missing
      volume and start. You perform this step to ask Compute to reboot every
      instance so that the stored state is preserved. It does not matter if
      not all instances come up successfully. For more information about
      cloud-init, see
      `help.ubuntu.com/community/CloudInit/ <https://help.ubuntu.com/community/CloudInit/>`__.

#. If required, run the :command:`openstack server add volume` command to
   reattach the volumes to their respective instances. This example uses a
   file of listed volumes to reattach them:

   .. code-block:: bash

      #!/bin/bash

      while read line; do
          volume=`echo $line | $CUT -f 1 -d " "`
          instance=`echo $line | $CUT -f 2 -d " "`
          mount_point=`echo $line | $CUT -f 3 -d " "`
          echo "ATTACHING VOLUME FOR INSTANCE - $instance"
          openstack server add volume $instance $volume $mount_point
          sleep 2
      done < $volumes_tmp_file
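
   The script assumes that ``$CUT`` points at the :command:`cut` binary and
   that ``$volumes_tmp_file`` contains one whitespace-separated record per
   volume: volume ID, instance ID, and mount point. A sketch of the expected
   file format (the IDs are illustrative):

   .. code-block:: none

      e389f7b0-... d1df1b5a-... /dev/vdb
      86e6cb5a-... d693db9e-... /dev/vdc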

   Instances that were stopped at the plymouth stage now automatically
   continue booting and start normally. Instances that previously started
   successfully can now see the volume.

#. Log in to the instances with SSH and reboot them.

   If some services depend on the volume or if a volume has an entry in
   ``fstab``, you can now restart the instance. Restart directly from the
   instance itself and not through :command:`nova`:

   .. code-block:: console

      # shutdown -r now

When you plan for and complete a disaster recovery, follow these tips:

- Use the ``errors=remount`` option in the ``fstab`` file to prevent data
  corruption.

  In the event of an I/O error, this option prevents writes to the disk. Add
  this configuration option into the cinder-volume server that performs the
  iSCSI connection to the SAN and into the instances' ``fstab`` files.

- Do not add the entry for the SAN's disks to the cinder-volume's ``fstab``
  file.

  Some systems hang on that step, which means you could lose access to your
  cloud-controller. To re-run the session manually, run these commands before
  performing the mount:

  .. code-block:: console

     # iscsiadm -m discovery -t st -p $SAN_IP
     $ iscsiadm -m node --target-name $IQN -p $SAN_IP -l

- On your instances, if you have the whole ``/home/`` directory on the disk,
  leave a user's directory with the user's bash files and the
  ``authorized_keys`` file instead of emptying the ``/home/`` directory and
  mapping the disk on it.

  This action enables you to connect to the instance without the volume
  attached, if you allow only connections through public keys.

To script the disaster recovery plan (DRP), use the
`https://github.com/Razique <https://github.com/Razique/BashStuff/blob/master/SYSTEMS/OpenStack/SCR_5006_V00_NUAC-OPENSTACK-DRP-OpenStack.sh>`_ bash script.

This script completes these steps:

#. Creates an array for instances and their attached volumes.

#. Updates the MySQL database.

#. Restarts all instances with euca2ools.

#. Reattaches the volumes.

#. Uses Compute credentials to make an SSH connection into every instance.

The script includes a ``test mode``, which enables you to perform the sequence
for only one instance.

To reproduce the power loss, connect to the compute node that runs that
instance and close the iSCSI session. Do not detach the volume by using the
:command:`openstack server remove volume` command. You must manually close the
iSCSI session. This example closes an iSCSI session with the number ``15``:

.. code-block:: console

   # iscsiadm -m session -u -r 15

Do not forget the ``-r`` option. Otherwise, all sessions close.

.. warning::

   There is potential for data loss while running instances during this
   procedure. If you are using Liberty or earlier, ensure you have the correct
   patch and set the options appropriately.

@ -1,146 +0,0 @@

.. _section-compute-pci-passthrough:

========================================
Attaching physical PCI devices to guests
========================================

The PCI passthrough feature in OpenStack allows full access and direct control
of a physical PCI device in guests. This mechanism is generic for any kind of
PCI device, and runs with a Network Interface Card (NIC), Graphics Processing
Unit (GPU), or any other devices that can be attached to a PCI bus. Correct
driver installation is the only requirement for the guest to properly use the
devices.

Some PCI devices provide Single Root I/O Virtualization and Sharing (SR-IOV)
capabilities. When SR-IOV is used, a physical device is virtualized and
appears as multiple PCI devices. Virtual PCI devices are assigned to the same
or different guests. In the case of PCI passthrough, the full physical device
is assigned to only one guest and cannot be shared.

.. note::

   For information on attaching virtual SR-IOV devices to guests, refer to the
   `Networking Guide`_.

To enable PCI passthrough, follow the steps below:

#. Configure nova-scheduler (Controller)
#. Configure nova-api (Controller)
#. Configure a flavor (Controller)
#. Enable PCI passthrough (Compute)
#. Configure PCI devices in nova-compute (Compute)

.. note::

   The PCI device with address ``0000:41:00.0`` is used as an example. This
   will differ between environments.

Configure nova-scheduler (Controller)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

#. Configure ``nova-scheduler`` as specified in `Configure nova-scheduler`_.

#. Restart the ``nova-scheduler`` service.

Configure nova-api (Controller)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

#. Specify the PCI alias for the device.

   Configure a PCI alias ``a1`` to request a PCI device with a ``vendor_id``
   of ``0x8086`` and a ``product_id`` of ``0x154d``. The ``vendor_id`` and
   ``product_id`` correspond to the PCI device with address ``0000:41:00.0``.

   Edit ``/etc/nova/nova.conf``:

   .. code-block:: ini

      [DEFAULT]
      pci_alias = { "vendor_id":"8086", "product_id":"154d", "device_type":"type-PF", "name":"a1" }

   For more information about the syntax of ``pci_alias``, refer to `nova.conf
   configuration options`_.

#. Restart the ``nova-api`` service.

Configure a flavor (Controller)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Configure a flavor to request two PCI devices, each with ``vendor_id`` of
``0x8086`` and ``product_id`` of ``0x154d``:

.. code-block:: console

   # openstack flavor set m1.large --property "pci_passthrough:alias"="a1:2"

For more information about the syntax for ``pci_passthrough:alias``, refer to
`flavor`_.

Enable PCI passthrough (Compute)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Enable VT-d and IOMMU. For more information, refer to steps one and two in
`Create Virtual Functions`_.

Configure PCI devices (Compute)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

#. Configure ``nova-compute`` to allow the PCI device to pass through to VMs.
   Edit ``/etc/nova/nova.conf``:

   .. code-block:: ini

      [DEFAULT]
      pci_passthrough_whitelist = { "address": "0000:41:00.0" }

   Alternatively, specify multiple PCI devices using whitelisting:

   .. code-block:: ini

      [DEFAULT]
      pci_passthrough_whitelist = { "vendor_id": "8086", "product_id": "10fb" }

   All PCI devices matching the ``vendor_id`` and ``product_id`` are added to
   the pool of PCI devices available for passthrough to VMs.

   For more information about the syntax of ``pci_passthrough_whitelist``,
   refer to `nova.conf configuration options`_.
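
   To discover the ``vendor_id`` and ``product_id`` of a candidate device in
   the first place, :command:`lspci -nn` can be used (a sketch; the device and
   the bracketed IDs shown are illustrative):

   .. code-block:: console

      # lspci -nn | grep -i ethernet
      41:00.0 Ethernet controller [0200]: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection [8086:10fb]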

#. Specify the PCI alias for the device.

   Starting with the Newton release, resizing a guest with a PCI device
   requires the PCI alias to be configured on the compute node as well.

   Configure a PCI alias ``a1`` to request a PCI device with a ``vendor_id``
   of ``0x8086`` and a ``product_id`` of ``0x154d``. The ``vendor_id`` and
   ``product_id`` correspond to the PCI device with address ``0000:41:00.0``.

   Edit ``/etc/nova/nova.conf``:

   .. code-block:: ini

      [DEFAULT]
      pci_alias = { "vendor_id":"8086", "product_id":"154d", "device_type":"type-PF", "name":"a1" }

   For more information about the syntax of ``pci_alias``, refer to `nova.conf
   configuration options`_.

#. Restart the ``nova-compute`` service.

Create instances with PCI passthrough devices
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The ``nova-scheduler`` selects a destination host that has PCI devices
available with the specified ``vendor_id`` and ``product_id`` that matches the
``pci_alias`` from the flavor.

.. code-block:: console

   # openstack server create --flavor m1.large --image cirros-0.3.5-x86_64-uec --wait test-pci

.. Links
.. _`Create Virtual Functions`: https://docs.openstack.org/ocata/networking-guide/config-sriov.html#create-virtual-functions-compute
.. _`Configure nova-scheduler`: https://docs.openstack.org/ocata/networking-guide/config-sriov.html#configure-nova-scheduler-controller
.. _`nova.conf configuration options`: https://docs.openstack.org/ocata/config-reference/compute/config-options.html
.. _`flavor`: https://docs.openstack.org/admin-guide/compute-flavors.html
.. _`Networking Guide`: https://docs.openstack.org/ocata/networking-guide/config-sriov.html

@ -1,326 +0,0 @@

===============================
Configure remote console access
===============================

To provide remote console or remote desktop access to guest virtual machines,
use VNC or SPICE HTML5 through either the OpenStack dashboard or the command
line. Best practice is to select one or the other to run.

About nova-consoleauth
~~~~~~~~~~~~~~~~~~~~~~

Both client proxies leverage a shared service to manage token authentication
called ``nova-consoleauth``. This service must be running for either proxy to
work. Many proxies of either type can be run against a single
``nova-consoleauth`` service in a cluster configuration.

Do not confuse the ``nova-consoleauth`` shared service with ``nova-console``,
which is a XenAPI-specific service that most recent VNC proxy architectures do
not use.

SPICE console
~~~~~~~~~~~~~

OpenStack Compute supports VNC consoles to guests. The VNC protocol is fairly
limited, lacking support for multiple monitors, bi-directional audio, reliable
cut-and-paste, video streaming and more. SPICE is a new protocol that aims to
address the limitations in VNC and provide good remote desktop support.

SPICE support in OpenStack Compute shares a similar architecture to the VNC
implementation. The OpenStack dashboard uses a SPICE-HTML5 widget in its
console tab that communicates to the ``nova-spicehtml5proxy`` service by using
SPICE-over-websockets. The ``nova-spicehtml5proxy`` service communicates
directly with the hypervisor process by using SPICE.

VNC must be explicitly disabled to get access to the SPICE console. Set the
``vnc_enabled`` option to ``False`` in the ``[DEFAULT]`` section to disable
the VNC console.

Use the following options to configure SPICE as the console for OpenStack
Compute:

.. code-block:: ini

   [spice]
   agent_enabled = False
   enabled = True
   html5proxy_base_url = http://IP_ADDRESS:6082/spice_auto.html
   html5proxy_host = 0.0.0.0
   html5proxy_port = 6082
   keymap = en-us
   server_listen = 127.0.0.1
   server_proxyclient_address = 127.0.0.1

Replace ``IP_ADDRESS`` with the management interface IP address of the
controller or the VIP.

VNC console proxy
~~~~~~~~~~~~~~~~~

The VNC proxy is an OpenStack component that enables compute service users to
access their instances through VNC clients.

.. note::

   The web proxy console URLs do not support the websocket protocol scheme
   (ws://) on python versions less than 2.7.4.

The VNC console connection works as follows:

#. A user connects to the API and gets an ``access_url`` such as
   ``http://ip:port/?token=xyz``.

#. The user pastes the URL in a browser or uses it as a client parameter.

#. The browser or client connects to the proxy.

#. The proxy talks to ``nova-consoleauth`` to authorize the token for the
   user, and maps the token to the *private* host and port of the VNC server
   for an instance.

   The compute host specifies the address that the proxy should use to
   connect through the ``nova.conf`` file option,
   ``vncserver_proxyclient_address``. In this way, the VNC proxy works as a
   bridge between the public network and private host network.

#. The proxy initiates the connection to the VNC server and continues to
   proxy until the session ends.
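
For example, the ``access_url`` in step 1 can be requested from the command
line with the nova client (a sketch; the URL and token are illustrative):

.. code-block:: console

   $ nova get-vnc-console INSTANCE_NAME novnc
   +-------+-------------------------------------------------------+
   | Type  | Url                                                   |
   +-------+-------------------------------------------------------+
   | novnc | http://controller:6080/vnc_auto.html?token=TOKEN_UUID |
   +-------+-------------------------------------------------------+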

The proxy also tunnels the VNC protocol over WebSockets so that the ``noVNC``
client can talk to VNC servers. In general, the VNC proxy:

- Bridges between the public network where the clients live and the private
  network where VNC servers live.

- Mediates token authentication.

- Transparently deals with hypervisor-specific connection details to provide a
  uniform client experience.

.. figure:: figures/SCH_5009_V00_NUAC-VNC_OpenStack.png
   :alt: noVNC process
   :width: 95%

VNC configuration options
-------------------------

To customize the VNC console, use the following configuration options in your
``nova.conf`` file:

.. note::

   To support :ref:`live migration <section_configuring-compute-migrations>`,
   you cannot specify a specific IP address for ``vncserver_listen``, because
   that IP address does not exist on the destination host.

.. list-table:: **Description of VNC configuration options**
   :header-rows: 1
   :widths: 25 25

   * - Configuration option = Default value
     - Description
   * - **[DEFAULT]**
     -
   * - ``daemon = False``
     - (BoolOpt) Become a daemon (background process)
   * - ``key = None``
     - (StrOpt) SSL key file (if separate from cert)
   * - ``novncproxy_host = 0.0.0.0``
     - (StrOpt) Host on which to listen for incoming requests
   * - ``novncproxy_port = 6080``
     - (IntOpt) Port on which to listen for incoming requests
   * - ``record = False``
     - (BoolOpt) Record sessions to FILE.[session_number]
   * - ``source_is_ipv6 = False``
     - (BoolOpt) Source is ipv6
   * - ``ssl_only = False``
     - (BoolOpt) Disallow non-encrypted connections
   * - ``web = /usr/share/spice-html5``
     - (StrOpt) Run webserver on same port. Serve files from DIR.
   * - **[vmware]**
     -
   * - ``vnc_port = 5900``
     - (IntOpt) VNC starting port
   * - ``vnc_port_total = 10000``
     - (IntOpt) Total number of VNC ports
   * - **[vnc]**
     -
   * - ``enabled = True``
     - (BoolOpt) Enable VNC related features
   * - ``novncproxy_base_url = http://127.0.0.1:6080/vnc_auto.html``
     - (StrOpt) Location of VNC console proxy, in the form
       "http://127.0.0.1:6080/vnc_auto.html"
   * - ``vncserver_listen = 127.0.0.1``
     - (StrOpt) IP address on which instance vncservers should listen
   * - ``vncserver_proxyclient_address = 127.0.0.1``
     - (StrOpt) The address to which proxy clients (like nova-xvpvncproxy)
       should connect
   * - ``xvpvncproxy_base_url = http://127.0.0.1:6081/console``
     - (StrOpt) Location of nova xvp VNC console proxy, in the form
       "http://127.0.0.1:6081/console"
|
||||
|
||||
.. note::
|
||||
|
||||
- The ``vncserver_proxyclient_address`` defaults to ``127.0.0.1``,
|
||||
which is the address of the compute host that Compute instructs
|
||||
proxies to use when connecting to instance servers.
|
||||
|
||||
- For all-in-one XenServer domU deployments, set this to
|
||||
``169.254.0.1.``
|
||||
|
||||
- For multi-host XenServer domU deployments, set to a ``dom0
|
||||
management IP`` on the same network as the proxies.
|
||||
|
||||
- For multi-host libvirt deployments, set to a host management IP
|
||||
on the same network as the proxies.
|
||||
|
||||
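For instance, a compute host's ``nova.conf`` might carry the following
``[vnc]`` settings; this is a minimal sketch, and the addresses are
placeholders rather than recommendations:

.. code-block:: ini

   [vnc]
   enabled = True
   # Address on this host that the instance VNC servers bind to
   vncserver_listen = 0.0.0.0
   # Address the proxies use to reach the VNC servers on this host
   vncserver_proxyclient_address = 192.168.1.2
   # Public URL that is handed back to clients
   novncproxy_base_url = http://172.24.1.1:6080/vnc_auto.html
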
Typical deployment
------------------

A typical deployment has the following components:

- A ``nova-consoleauth`` process. Typically runs on the controller host.

- One or more ``nova-novncproxy`` services. Supports browser-based noVNC
  clients. For simple deployments, this service typically runs on the
  same machine as ``nova-api`` because it operates as a proxy between the
  public network and the private compute host network.

- One or more ``nova-xvpvncproxy`` services. Supports the special Java
  client discussed here. For simple deployments, this service typically
  runs on the same machine as ``nova-api`` because it acts as a proxy
  between the public network and the private compute host network.

- One or more compute hosts. These compute hosts must have correctly
  configured options, as follows.

nova-novncproxy (noVNC)
-----------------------

You must install the noVNC package, which contains the ``nova-novncproxy``
service. As root, run the following command:

.. code-block:: console

   # apt-get install nova-novncproxy

The service starts automatically on installation.

To restart the service, run:

.. code-block:: console

   # service nova-novncproxy restart

The configuration option parameter should point to your ``nova.conf``
file, which includes the message queue server address and credentials.

By default, ``nova-novncproxy`` binds on ``0.0.0.0:6080``.

To connect the service to your Compute deployment, add the following
configuration options to your ``nova.conf`` file:

- ``vncserver_listen=0.0.0.0``

  Specifies the address on which the VNC service should bind. Make sure
  it is assigned one of the compute node interfaces. This address is
  the one used by your domain file.

  .. code-block:: console

     <graphics type="vnc" autoport="yes" keymap="en-us" listen="0.0.0.0"/>

  .. note::

     To use live migration, use the 0.0.0.0 address.

- ``vncserver_proxyclient_address=127.0.0.1``

  The address of the compute host that Compute instructs proxies to use
  when connecting to instance ``vncservers``.

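As a quick sanity check (not part of the original procedure), you can
confirm that the proxy answers on its default port; replace the address
with that of your proxy host:

.. code-block:: console

   $ curl -I http://172.24.1.1:6080/vnc_auto.html
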
Frequently asked questions about VNC access to virtual machines
---------------------------------------------------------------

- **Q: What is the difference between ``nova-xvpvncproxy`` and
  ``nova-novncproxy``?**

  A: ``nova-xvpvncproxy``, which ships with OpenStack Compute, is a
  proxy that supports a simple Java client. ``nova-novncproxy`` uses noVNC
  to provide VNC support through a web browser.

- **Q: I want VNC support in the OpenStack dashboard. What services do
  I need?**

  A: You need ``nova-novncproxy``, ``nova-consoleauth``, and correctly
  configured compute hosts.

- **Q: When I use ``nova get-vnc-console`` or click on the VNC tab of
  the OpenStack dashboard, it hangs. Why?**

  A: Make sure you are running ``nova-consoleauth`` (in addition to
  ``nova-novncproxy``). The proxies rely on ``nova-consoleauth`` to validate
  tokens, and wait for a reply from it until a timeout is reached.

- **Q: My VNC proxy worked fine during my all-in-one test, but now it
  doesn't work on multi host. Why?**

  A: The default options work for an all-in-one install, but changes
  must be made on your compute hosts once you start to build a cluster.
  As an example, suppose you have two servers:

  .. code-block:: bash

     PROXYSERVER (public_ip=172.24.1.1, management_ip=192.168.1.1)
     COMPUTESERVER (management_ip=192.168.1.2)

  Your ``nova-compute`` configuration file must set the following values:

  .. code-block:: console

     # These flags help construct a connection data structure
     vncserver_proxyclient_address=192.168.1.2
     novncproxy_base_url=http://172.24.1.1:6080/vnc_auto.html
     xvpvncproxy_base_url=http://172.24.1.1:6081/console

     # This is the address where the underlying vncserver (not the proxy)
     # will listen for connections.
     vncserver_listen=192.168.1.2

  .. note::

     ``novncproxy_base_url`` and ``xvpvncproxy_base_url`` use a public
     IP; this is the URL that is ultimately returned to clients, which
     generally do not have access to your private network. Your
     PROXYSERVER must be able to reach ``vncserver_proxyclient_address``,
     because that is the address over which the VNC connection is proxied.

- **Q: My noVNC does not work with recent versions of web browsers. Why?**

  A: Make sure you have installed ``python-numpy``, which is required
  to support a newer version of the WebSocket protocol (HyBi-07+).

- **Q: How do I adjust the dimensions of the VNC window image in the
  OpenStack dashboard?**

  A: These values are hard-coded in a Django HTML template. To alter
  them, edit the ``_detail_vnc.html`` template file. The location of
  this file varies based on Linux distribution. On Ubuntu 14.04, the
  file is at
  ``/usr/share/pyshared/horizon/dashboards/nova/instances/templates/instances/_detail_vnc.html``.

  Modify the ``width`` and ``height`` options, as follows:

  .. code-block:: console

     <iframe src="{{ vnc_url }}" width="720" height="430"></iframe>

- **Q: My noVNC connections failed with ValidationError: Origin header
  protocol does not match. Why?**

  A: Make sure the ``base_url`` matches your TLS setting. If you are
  using https console connections, make sure that the value of
  ``novncproxy_base_url`` is set explicitly where the ``nova-novncproxy``
  service is running.

@ -1,118 +0,0 @@

.. _root-wrap-reference:

====================
Secure with rootwrap
====================

Rootwrap allows unprivileged users to safely run Compute actions as the
root user. Compute previously used :command:`sudo` for this purpose, but this
was difficult to maintain, and did not allow advanced filters. The
:command:`rootwrap` command replaces :command:`sudo` for Compute.

To use rootwrap, prefix the Compute command with :command:`nova-rootwrap`. For
example:

.. code-block:: console

   $ sudo nova-rootwrap /etc/nova/rootwrap.conf command

A generic ``sudoers`` entry lets the Compute user run :command:`nova-rootwrap`
as root. The :command:`nova-rootwrap` code looks for filter definition
directories in its configuration file, and loads command filters from
them. It then checks if the command requested by Compute matches one of
those filters and, if so, executes the command (as root). If no filter
matches, it denies the request.

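For illustration, such a ``sudoers`` entry commonly looks like the
following sketch; adjust the user name and paths to match your
installation:

.. code-block:: console

   nova ALL = (root) NOPASSWD: /usr/bin/nova-rootwrap /etc/nova/rootwrap.conf *
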
.. note::

   Be aware of issues with using NFS and root-owned files. The NFS
   share must be configured with the ``no_root_squash`` option enabled,
   in order for rootwrap to work correctly.

Rootwrap is fully controlled by the root user. The root user
owns the sudoers entry which allows Compute to run a specific
rootwrap executable as root, and only with a specific
configuration file (which should also be owned by root).
The :command:`nova-rootwrap` command imports the Python
modules it needs from a cleaned, system-default PYTHONPATH.
The root-owned configuration file points to root-owned
filter definition directories, which contain root-owned
filter definition files. This chain ensures that the Compute
user itself is not in control of the configuration or modules
used by the :command:`nova-rootwrap` executable.

Configure rootwrap
~~~~~~~~~~~~~~~~~~

Configure rootwrap in the ``rootwrap.conf`` file. Because
it is in the trusted security path, it must be owned and writable
by only the root user. Specify the file's location in both the
sudoers entry and the ``nova.conf`` configuration file with the
``rootwrap_config=entry`` parameter.

The ``rootwrap.conf`` file uses an INI file format with these
sections and parameters:

.. list-table:: **rootwrap.conf configuration options**
   :widths: 64 31

   * - Configuration option=Default value
     - (Type) Description
   * - [DEFAULT]
       filters\_path=/etc/nova/rootwrap.d,/usr/share/nova/rootwrap
     - (ListOpt) Comma-separated list of directories
       containing filter definition files.
       Defines where rootwrap filters are stored.
       Directories defined on this line should all
       exist, and be owned and writable only by the
       root user.

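A minimal ``rootwrap.conf`` therefore needs little more than the filter
path; a sketch using the default directories:

.. code-block:: ini

   [DEFAULT]
   # Root-owned directories that hold the .filters files
   filters_path=/etc/nova/rootwrap.d,/usr/share/nova/rootwrap
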
If the root wrapper is not performing correctly, you can add a
workaround option into the ``nova.conf`` configuration file. This
workaround re-configures the root wrapper to fall back to
running commands as ``sudo``, and is available since the Kilo release.

Including this workaround in your configuration file safeguards your
environment from issues that can impair root wrapper performance. Tool
changes that have impacted
`Python Build Reasonableness (PBR) <https://git.openstack.org/cgit/openstack-dev/pbr/>`__,
for example, are a known issue that affects root wrapper performance.

To set up this workaround, configure the ``disable_rootwrap`` option in
the ``[workarounds]`` section of the ``nova.conf`` configuration file.

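A sketch of the relevant ``nova.conf`` lines:

.. code-block:: ini

   [workarounds]
   # Fall back to plain sudo instead of the root wrapper
   disable_rootwrap = True
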
The filter definition files contain lists of filters that rootwrap will
use to allow or deny a specific command. They are generally suffixed by
``.filters``. Since they are in the trusted security path, they need to
be owned and writable only by the root user. Their location is specified
in the ``rootwrap.conf`` file.

Filter definition files use an INI file format with a ``[Filters]``
section and several lines, each with a unique parameter name, which
should be different for each filter you define:

.. list-table:: **Filters configuration options**
   :widths: 72 39

   * - Configuration option=Default value
     - (Type) Description
   * - [Filters]
       filter\_name=kpartx: CommandFilter, /sbin/kpartx, root
     - (ListOpt) Comma-separated list containing the filter class to
       use, followed by the Filter arguments (which vary depending
       on the Filter class selected).

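Putting that together, a filter file such as
``/etc/nova/rootwrap.d/compute.filters`` might contain an entry like
this sketch; the command shown is only an example:

.. code-block:: ini

   [Filters]
   # filter_name: FilterClass, command path, user to run the command as
   kpartx: CommandFilter, /sbin/kpartx, root
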
Configure the rootwrap daemon
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Administrators can use rootwrap daemon support instead of running
rootwrap with :command:`sudo`. The rootwrap daemon reduces the
overhead and performance loss that results from running
``oslo.rootwrap`` with :command:`sudo`. Each call that needs rootwrap
privileges requires a new instance of rootwrap; the daemon avoids the
overhead of these repeated invocations. The daemon does not support
long running processes, however.

To enable the rootwrap daemon, set ``use_rootwrap_daemon`` to ``True``
in the Compute service configuration file.

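A sketch of the corresponding ``nova.conf`` line:

.. code-block:: ini

   [DEFAULT]
   # Keep a long-lived rootwrap daemon instead of spawning one per call
   use_rootwrap_daemon = True
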
@ -1,175 +0,0 @@

.. _section-compute-security:

==================
Security hardening
==================

OpenStack Compute can be integrated with various third-party
technologies to increase security. For more information, see the
`OpenStack Security Guide <https://docs.openstack.org/security-guide/>`_.

Trusted compute pools
~~~~~~~~~~~~~~~~~~~~~

Administrators can designate a group of compute hosts as trusted using
trusted compute pools. The trusted hosts use hardware-based security
features, such as the Intel Trusted Execution Technology (TXT), to
provide an additional level of security. Combined with an external
stand-alone, web-based remote attestation server, cloud providers can
ensure that the compute node runs only software with verified
measurements and can ensure a secure cloud stack.

Trusted compute pools provide the ability for cloud subscribers to
request services run only on verified compute nodes.

The remote attestation server performs node verification like this:

1. Compute nodes boot with Intel TXT technology enabled.

2. The compute node BIOS, hypervisor, and operating system are measured.

3. When the attestation server challenges the compute node, the measured
   data is sent to the attestation server.

4. The attestation server verifies the measurements against a known good
   database to determine node trustworthiness.

A description of how to set up an attestation service is beyond the
scope of this document. For an open source project that you can use to
implement an attestation service, see the `Open
Attestation <https://github.com/OpenAttestation/OpenAttestation>`__
project.

.. figure:: figures/OpenStackTrustedComputePool1.png

**Configuring Compute to use trusted compute pools**

#. Enable scheduling support for trusted compute pools by adding these
   lines to the ``DEFAULT`` section of the ``/etc/nova/nova.conf`` file:

   .. code-block:: ini

      [DEFAULT]
      compute_scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler
      scheduler_available_filters=nova.scheduler.filters.all_filters
      scheduler_default_filters=AvailabilityZoneFilter,RamFilter,ComputeFilter,TrustedFilter

#. Specify the connection information for your attestation service by
   adding these lines to the ``trusted_computing`` section of the
   ``/etc/nova/nova.conf`` file:

   .. code-block:: ini

      [trusted_computing]
      attestation_server = 10.1.71.206
      attestation_port = 8443
      # If using OAT v2.0 or later, use this port:
      # attestation_port = 8181
      attestation_server_ca_file = /etc/nova/ssl.10.1.71.206.crt
      # If using OAT v1.5, use this api_url:
      attestation_api_url = /AttestationService/resources
      # If using OAT pre-v1.5, use this api_url:
      # attestation_api_url = /OpenAttestationWebServices/V1.0
      attestation_auth_blob = i-am-openstack

   In this example:

   server
     Host name or IP address of the host that runs the attestation
     service

   port
     HTTPS port for the attestation service

   server_ca_file
     Certificate file used to verify the attestation server's identity

   api_url
     The attestation service's URL path

   auth_blob
     An authentication blob, required by the attestation service.

#. Save the file, and restart the ``nova-compute`` and ``nova-scheduler``
   services to pick up the changes.

To customize the trusted compute pools, use these configuration option
settings:

.. list-table:: **Description of trusted computing configuration options**
   :header-rows: 2

   * - Configuration option = Default value
     - Description
   * - [trusted_computing]
     -
   * - attestation_api_url = /OpenAttestationWebServices/V1.0
     - (StrOpt) Attestation web API URL
   * - attestation_auth_blob = None
     - (StrOpt) Attestation authorization blob - must change
   * - attestation_auth_timeout = 60
     - (IntOpt) Attestation status cache valid period length
   * - attestation_insecure_ssl = False
     - (BoolOpt) Disable SSL cert verification for Attestation service
   * - attestation_port = 8443
     - (StrOpt) Attestation server port
   * - attestation_server = None
     - (StrOpt) Attestation server HTTP
   * - attestation_server_ca_file = None
     - (StrOpt) Attestation server Cert file for Identity verification

**Specifying trusted flavors**

#. Flavors can be designated as trusted using the
   :command:`openstack flavor set` command. In this example, the
   ``m1.tiny`` flavor is being set as trusted:

   .. code-block:: console

      $ openstack flavor set --property trusted_host=trusted m1.tiny

#. You can request that your instance is run on a trusted host by
   specifying a trusted flavor when booting the instance:

   .. code-block:: console

      $ openstack server create --flavor m1.tiny \
        --key-name myKeypairName --image myImageID newInstanceName

.. figure:: figures/OpenStackTrustedComputePool2.png

Encrypt Compute metadata traffic
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

**Enabling SSL encryption**

OpenStack supports encrypting Compute metadata traffic with HTTPS.
Enable SSL encryption in the ``metadata_agent.ini`` file.

#. Enable the HTTPS protocol.

   .. code-block:: ini

      nova_metadata_protocol = https

#. Determine whether insecure SSL connections are accepted for Compute
   metadata server requests. The default value is ``False``.

   .. code-block:: ini

      nova_metadata_insecure = False

#. Specify the path to the client certificate.

   .. code-block:: ini

      nova_client_cert = PATH_TO_CERT

#. Specify the path to the private key.

   .. code-block:: ini

      nova_client_priv_key = PATH_TO_KEY

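Taken together, the resulting ``metadata_agent.ini`` settings might look
like this sketch; these options typically live in the ``[DEFAULT]``
section, and the paths are placeholders:

.. code-block:: ini

   [DEFAULT]
   nova_metadata_protocol = https
   nova_metadata_insecure = False
   nova_client_cert = /etc/neutron/ssl/metadata-client.crt
   nova_client_priv_key = /etc/neutron/ssl/metadata-client.key
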
@ -1,71 +0,0 @@

.. _configuring-compute-service-groups:

================================
Configure Compute service groups
================================

The Compute service must know the status of each compute node to
effectively manage and use them. This can include events like a user
launching a new VM, the scheduler sending a request to a live node, or a
query to the ServiceGroup API to determine if a node is live.

When a compute worker running the ``nova-compute`` daemon starts, it calls
the join API to join the compute group. Any service (such as the
scheduler) can query the group's membership and the status of its nodes.
Internally, the ServiceGroup client driver automatically updates the
compute worker status.

.. _database-servicegroup-driver:

Database ServiceGroup driver
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

By default, Compute uses the database driver to track if a node is live.
In a compute worker, this driver periodically sends a ``db update``
command to the database, saying “I'm OK” with a timestamp. Compute uses
a pre-defined timeout (``service_down_time``) to determine if a node is
dead.

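The driver selection and the timeout both live in ``nova.conf``; a
sketch showing the defaults:

.. code-block:: ini

   [DEFAULT]
   # Track node liveness through the database (the default driver)
   servicegroup_driver = "db"
   # Seconds since last check-in before a node is considered dead
   service_down_time = 60
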
The driver has limitations, which can be problematic depending on your
environment. If a lot of compute worker nodes need to be checked, the
database can be put under heavy load, which can cause the timeout to
trigger, and a live node could incorrectly be considered dead. By
default, the timeout is 60 seconds. Reducing the timeout value can help
in this situation, but you must also make the database update more
frequently, which again increases the database workload.

The database contains data that is both transient (such as whether the
node is alive) and persistent (such as entries for VM owners). With the
ServiceGroup abstraction, Compute can treat each type separately.

.. _memcache-servicegroup-driver:

Memcache ServiceGroup driver
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The memcache ServiceGroup driver uses memcached, a distributed memory
object caching system that is used to increase site performance. For
more details, see `memcached.org <http://memcached.org/>`_.

To use the memcache driver, you must install memcached. You might
already have it installed, as the same driver is also used for the
OpenStack Object Storage and OpenStack dashboard. To install
memcached, see the *Environment -> Memcached* section in the
`Installation Tutorials and Guides <https://docs.openstack.org/project-install-guide/ocata>`_
depending on your distribution.

These values in the ``/etc/nova/nova.conf`` file are required on every
node for the memcache driver:

.. code-block:: ini

   # Driver for the ServiceGroup service
   servicegroup_driver = "mc"

   # Memcached servers. Use either a list of memcached servers to use for caching (list value),
   # or "<None>" for in-process caching (default).
   memcached_servers = <None>

   # Timeout; maximum time since last check-in for up service (integer value).
   # Helps to define whether a node is dead
   service_down_time = 60

@ -1,88 +0,0 @@

.. _compute-trusted-pools.rst:

=====================
System administration
=====================

.. toctree::
   :maxdepth: 2

   compute-manage-users.rst
   compute-manage-volumes.rst
   compute-flavors.rst
   compute-default-ports.rst
   compute-admin-password-injection.rst
   compute-manage-the-cloud.rst
   compute-manage-logs.rst
   compute-root-wrap-reference.rst
   compute-configuring-migrations.rst
   compute-live-migration-usage.rst
   compute-remote-console-access.rst
   compute-service-groups.rst
   compute-security.rst
   compute-node-down.rst
   compute-adv-config.rst

To effectively administer compute, you must understand how the different
installed nodes interact with each other. Compute can be installed in
many different ways using multiple servers, but generally multiple
compute nodes control the virtual servers and a cloud controller node
contains the remaining Compute services.

The Compute cloud works using a series of daemon processes named ``nova-*``
that exist persistently on the host machine. These binaries can all run
on the same machine or be spread out on multiple boxes in a large
deployment. The responsibilities of services and drivers are:

**Services**

``nova-api``
  receives API requests and sends them to the rest of the
  system. A WSGI app routes and authenticates requests. Supports the
  EC2 and OpenStack APIs. A ``nova.conf`` configuration file is created
  when Compute is installed.

``nova-cert``
  manages certificates.

``nova-compute``
  manages virtual machines. Loads a Service object, and
  exposes the public methods on ComputeManager through a Remote
  Procedure Call (RPC).

``nova-conductor``
  provides database-access support for compute nodes
  (thereby reducing security risks).

``nova-consoleauth``
  manages console authentication.

``nova-objectstore``
  a simple file-based storage system for images that
  replicates most of the S3 API. It can be replaced with OpenStack
  Image service and either a simple image manager or OpenStack Object
  Storage as the virtual machine image storage facility. It must exist
  on the same node as ``nova-compute``.

``nova-network``
  manages floating and fixed IPs, DHCP, bridging, and
  VLANs. Loads a Service object which exposes the public methods on one
  of the subclasses of NetworkManager. Different networking strategies
  are available by changing the ``network_manager`` configuration
  option to ``FlatManager``, ``FlatDHCPManager``, or ``VLANManager``
  (defaults to ``VLANManager`` if nothing is specified).

``nova-scheduler``
  dispatches requests for new virtual machines to the
  correct node.

``nova-novncproxy``
  provides a VNC proxy for browsers, allowing VNC
  consoles to access virtual machines.

.. note::

   Some services have drivers that change how the service implements
   its core functionality. For example, the ``nova-compute`` service
   supports drivers that let you choose which hypervisor type it can
   use. ``nova-network`` and ``nova-scheduler`` also have drivers.

@ -1,25 +0,0 @@

=======
Compute
=======

The OpenStack Compute service allows you to control an
:term:`Infrastructure-as-a-Service (IaaS)` cloud computing platform.
It gives you control over instances and networks, and allows you to manage
access to the cloud through users and projects.

Compute does not include virtualization software. Instead, it defines
drivers that interact with underlying virtualization mechanisms that run
on your host operating system, and exposes functionality over a
web-based API.

.. toctree::
   :maxdepth: 2

   compute-arch.rst
   compute-networking-nova.rst
   compute-system-admin.rst
   support-compute.rst

.. TODO (bmoss)
   ../common/section-compute-configure-console.xml

@ -79,7 +79,10 @@ release = '15.0.0'

 # List of patterns, relative to source directory, that match files and
 # directories to ignore when looking for source files.
-exclude_patterns = ['common/appendix.rst'
+exclude_patterns = [
+    'common/appendix.rst',
+    'common/cli-*.rst',
+    'common/nova-show-usage-statistics-for-hosts-instances.rst',
+]

 # The reST default role (used for this markup: `text`) to use for all

@ -1,59 +0,0 @@

=======================
Create and manage roles
=======================

A role is a personality that a user assumes to perform a specific set
of operations. A role includes a set of rights and privileges. A user
who assumes a role inherits those rights and privileges.

.. note::

   OpenStack Identity service defines a user's role on a
   project, but it is completely up to the individual service
   to define what that role means. This is referred to as the
   service's policy. To get details about what the privileges
   for each role are, refer to the ``policy.json`` file
   available for each service in the
   ``/etc/SERVICE/policy.json`` file. For example, the
   policy for the OpenStack Identity service is defined
   in the ``/etc/keystone/policy.json`` file.

Create a role
~~~~~~~~~~~~~

#. Log in to the dashboard and select the :guilabel:`admin` project from the
   drop-down list.
#. On the :guilabel:`Identity` tab, click the :guilabel:`Roles` category.
#. Click the :guilabel:`Create Role` button.

   In the :guilabel:`Create Role` window, enter a name for the role.
#. Click the :guilabel:`Create Role` button to confirm your changes.

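If you prefer the command line, the same task can be done with the
``openstack`` client; the role name here is only an example:

.. code-block:: console

   $ openstack role create myNewRole
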
Edit a role
~~~~~~~~~~~

#. Log in to the dashboard and select the :guilabel:`admin` project from the
   drop-down list.
#. On the :guilabel:`Identity` tab, click the :guilabel:`Roles` category.
#. Click the :guilabel:`Edit` button.

   In the :guilabel:`Update Role` window, enter a new name for the role.
#. Click the :guilabel:`Update Role` button to confirm your changes.

.. note::

   Using the dashboard, you can edit only the name assigned to
   a role.

Delete a role
~~~~~~~~~~~~~

#. Log in to the dashboard and select the :guilabel:`admin` project from the
   drop-down list.
#. On the :guilabel:`Identity` tab, click the :guilabel:`Roles` category.
#. Select the role you want to delete and click the :guilabel:`Delete
   Roles` button.
#. In the :guilabel:`Confirm Delete Roles` window, click :guilabel:`Delete
   Roles` to confirm the deletion.

You cannot undo this action.

@ -1,34 +0,0 @@

============================================
Launch and manage stacks using the Dashboard
============================================

The Orchestration service provides a template-based
orchestration engine for the OpenStack cloud. Orchestration
services create and manage cloud infrastructure
resources such as storage, networking, instances, and
applications as a repeatable running environment.

Administrators use templates to create stacks, which are
collections of resources. For example, a stack might
include instances, floating IPs, volumes,
security groups, or users. The Orchestration service
offers access to all OpenStack
core services via a single modular template, with additional
orchestration capabilities such as auto-scaling and basic
high availability.

For information about:

* administrative tasks on the command-line, see
  the `OpenStack Administrator Guide
  <https://docs.openstack.org/admin-guide/cli-admin-manage-stacks.html>`__.

  .. note::

     There are no administration-specific tasks that can be done through
     the Dashboard.

* the basic creation and deletion of Orchestration stacks, refer to
  the `OpenStack End User Guide
  <https://docs.openstack.org/user-guide/dashboard-stacks.html>`__.

@ -1,450 +0,0 @@

=====================================
Customize and configure the Dashboard
=====================================

Once you have the Dashboard installed, you can customize the way
it looks and feels to suit the needs of your environment, your
project, or your business.

You can also configure the Dashboard for a secure HTTPS deployment, or
an HTTP deployment. The standard OpenStack installation uses a non-encrypted
HTTP channel, but you can enable SSL support for the Dashboard.

For information on configuring HTTPS or HTTP, see :ref:`configure_dashboard`.

.. This content is out of date as of the Mitaka release, and needs an
.. update to reflect the most recent work on themeing - JR -.

Customize the Dashboard
~~~~~~~~~~~~~~~~~~~~~~~

The OpenStack Dashboard on Ubuntu installs the
``openstack-dashboard-ubuntu-theme`` package by default. If you do not
want to use this theme, remove it and its dependencies:

.. code-block:: console

   # apt-get remove --auto-remove openstack-dashboard-ubuntu-theme

.. note::

   This guide focuses on the ``local_settings.py`` file.

The following Dashboard content can be customized to suit your needs:

* Logo
* Site colors
* HTML title
* Logo link
* Help URL

Logo and site colors
--------------------

#. Create two PNG logo files with transparent backgrounds using
   the following sizes:

   - Login screen: 365 x 50
   - Logged in banner: 216 x 35

#. Upload your new images to
   ``/usr/share/openstack-dashboard/openstack_dashboard/static/dashboard/img/``.

#. Create a CSS style sheet in
   ``/usr/share/openstack-dashboard/openstack_dashboard/static/dashboard/scss/``.

#. Change the colors and image file names as appropriate. Ensure the
   relative directory paths are the same. The following example file
   shows you how to customize your CSS file:

   .. code-block:: css

      /*
       * New theme colors for dashboard that override the defaults:
       *  dark blue: #355796 / rgb(53, 87, 150)
       *  light blue: #BAD3E1 / rgb(186, 211, 225)
       *
       * By Preston Lee <plee@tgen.org>
       */
      h1.brand {
          background: #355796 repeat-x top left;
          border-bottom: 2px solid #BAD3E1;
      }
      h1.brand a {
          background: url(../img/my_cloud_logo_small.png) top left no-repeat;
      }
      #splash .login {
          background: #355796 url(../img/my_cloud_logo_medium.png) no-repeat center 35px;
      }
      #splash .login .modal-header {
          border-top: 1px solid #BAD3E1;
      }
      .btn-primary {
          background-image: none !important;
          background-color: #355796 !important;
          border: none !important;
          box-shadow: none;
      }
      .btn-primary:hover,
      .btn-primary:active {
          border: none;
          box-shadow: none;
          background-color: #BAD3E1 !important;
          text-decoration: none;
      }

#. Open the following HTML template in an editor of your choice:

   .. code-block:: console

      /usr/share/openstack-dashboard/openstack_dashboard/templates/_stylesheets.html

#. Add a line to include your newly created style sheet. For example, to
   include a ``custom.css`` file:

   .. code-block:: html

      <link href='{{ STATIC_URL }}bootstrap/css/bootstrap.min.css' media='screen' rel='stylesheet' />
      <link href='{{ STATIC_URL }}dashboard/css/{% choose_css %}' media='screen' rel='stylesheet' />
      <link href='{{ STATIC_URL }}dashboard/css/custom.css' media='screen' rel='stylesheet' />

#. Restart the Apache service.

#. To view your changes, reload your Dashboard. If necessary, go back
   and modify your CSS file as appropriate.

HTML title
----------

#. Set the HTML title, which appears at the top of the browser window, by
   adding the following line to ``local_settings.py``:

   .. code-block:: python

      SITE_BRANDING = "Example, Inc. Cloud"

#. Restart Apache for this change to take effect.

Logo link
---------

#. The logo also acts as a hyperlink. The default behavior is to redirect
   to ``horizon:user_home``. To change this, add the following attribute to
   ``local_settings.py``:

   .. code-block:: python

      SITE_BRANDING_LINK = "http://example.com"

#. Restart Apache for this change to take effect.

Help URL
--------

#. By default, the help URL points to https://docs.openstack.org. To change
   this, edit the following attribute in ``local_settings.py``:

   .. code-block:: python

      HORIZON_CONFIG["help_url"] = "http://openstack.mycompany.org"

#. Restart Apache for this change to take effect.

.. _configure_dashboard:

Configure the Dashboard
~~~~~~~~~~~~~~~~~~~~~~~

The following section on configuring the Dashboard for a
secure HTTPS deployment, or an HTTP deployment, uses concrete
examples to ensure the procedure is clear. The file path varies
by distribution, however. If needed, you can also configure
the VNC window size in the Dashboard.

Configure the Dashboard for HTTP
--------------------------------

You can configure the Dashboard for a simple HTTP deployment.
The standard installation uses a non-encrypted HTTP channel.

#. Specify the host for your Identity service endpoint in the
   ``local_settings.py`` file with the ``OPENSTACK_HOST`` setting.

   The following example shows this setting:

   .. code-block:: python

      import os

      from django.utils.translation import ugettext_lazy as _

      DEBUG = False
      TEMPLATE_DEBUG = DEBUG
      PROD = True
      USE_SSL = False

      SITE_BRANDING = 'OpenStack Dashboard'

      # Ubuntu-specific: Enables an extra panel in the 'Settings' section
      # that easily generates a Juju environments.yaml for download,
      # preconfigured with endpoints and credentials required for bootstrap
      # and service deployment.
      ENABLE_JUJU_PANEL = True

      # Note: You should change this value
      SECRET_KEY = 'elj1IWiLoWHgryYxFT6j7cM5fGOOxWY0'

      # Specify a regular expression to validate user passwords.
      # HORIZON_CONFIG = {
      #     "password_validator": {
      #         "regex": '.*',
      #         "help_text": _("Your password does not meet the requirements.")
      #     }
      # }

      LOCAL_PATH = os.path.dirname(os.path.abspath(__file__))

      CACHES = {
          'default': {
              'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
              'LOCATION': '127.0.0.1:11211'
          }
      }

      # Send email to the console by default
      EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'
      # Or send them to /dev/null
      # EMAIL_BACKEND = 'django.core.mail.backends.dummy.EmailBackend'

      # Configure these for your outgoing email host
      # EMAIL_HOST = 'smtp.my-company.com'
      # EMAIL_PORT = 25
      # EMAIL_HOST_USER = 'djangomail'
      # EMAIL_HOST_PASSWORD = 'top-secret!'

      # For multiple regions uncomment this configuration, and add (endpoint, title).
      # AVAILABLE_REGIONS = [
      #     ('http://cluster1.example.com:5000/v2.0', 'cluster1'),
      #     ('http://cluster2.example.com:5000/v2.0', 'cluster2'),
      # ]

      OPENSTACK_HOST = "127.0.0.1"
      OPENSTACK_KEYSTONE_URL = "http://%s:5000/v2.0" % OPENSTACK_HOST
      OPENSTACK_KEYSTONE_DEFAULT_ROLE = "Member"

      # The OPENSTACK_KEYSTONE_BACKEND settings can be used to identify the
      # capabilities of the auth backend for Keystone.
      # If Keystone has been configured to use LDAP as the auth backend then set
      # can_edit_user to False and name to 'ldap'.
      #
      # TODO(tres): Remove these once Keystone has an API to identify auth backend.
      OPENSTACK_KEYSTONE_BACKEND = {
          'name': 'native',
          'can_edit_user': True
      }

      # OPENSTACK_ENDPOINT_TYPE specifies the endpoint type to use for the endpoints
      # in the Keystone service catalog. Use this setting when Horizon is running
      # external to the OpenStack environment. The default is 'internalURL'.
      # OPENSTACK_ENDPOINT_TYPE = "publicURL"

      # The number of Swift containers and objects to display on a single page before
      # providing a paging element (a "more" link) to paginate results.
      API_RESULT_LIMIT = 1000

      # If you have external monitoring links, eg:
      # EXTERNAL_MONITORING = [
      #     ['Nagios', 'http://foo.com'],
      #     ['Ganglia', 'http://bar.com'],
      # ]

      LOGGING = {
          'version': 1,
          # When set to True this will disable all logging except
          # for loggers specified in this configuration dictionary. Note that
          # if nothing is specified here and disable_existing_loggers is True,
          # django.db.backends will still log unless it is disabled explicitly.
          'disable_existing_loggers': False,
          'handlers': {
              'null': {
                  'level': 'DEBUG',
                  'class': 'django.utils.log.NullHandler',
              },
              'console': {
                  # Set the level to "DEBUG" for verbose output logging.
                  'level': 'INFO',
                  'class': 'logging.StreamHandler',
              },
          },
          'loggers': {
              # Logging from django.db.backends is VERY verbose, send to null
              # by default.
              'django.db.backends': {
                  'handlers': ['null'],
                  'propagate': False,
              },
              'horizon': {
                  'handlers': ['console'],
                  'propagate': False,
              },
              'novaclient': {
                  'handlers': ['console'],
                  'propagate': False,
              },
              'keystoneclient': {
                  'handlers': ['console'],
                  'propagate': False,
              },
              'nose.plugins.manager': {
                  'handlers': ['console'],
                  'propagate': False,
              },
          },
      }

   The service catalog configuration in the Identity service determines
   whether a service appears in the Dashboard.
   For the full listing, see `Horizon Settings and Configuration
   <https://docs.openstack.org/developer/horizon/topics/settings.html>`_.

#. Restart the Apache HTTP Server.

#. Restart ``memcached``.

Configure the Dashboard for HTTPS
---------------------------------

You can configure the Dashboard for a secured HTTPS deployment.
While the standard installation uses a non-encrypted HTTP channel,
you can enable SSL support for the Dashboard.

This example uses the ``openstack.example.com`` domain.
Use a domain that fits your current setup.

#. In the ``local_settings.py`` file, update the following options:

   .. code-block:: python

      USE_SSL = True
      CSRF_COOKIE_SECURE = True
      SESSION_COOKIE_SECURE = True
      SESSION_COOKIE_HTTPONLY = True

   To enable HTTPS, the ``USE_SSL = True`` option is required.

   The other options require that HTTPS is enabled;
   these options defend against cross-site scripting.

#. Edit the ``openstack-dashboard.conf`` file as shown in the
   **Example After**:

   **Example Before**

   .. code-block:: apacheconf

      WSGIScriptAlias / /usr/share/openstack-dashboard/openstack_dashboard/wsgi/django.wsgi
      WSGIDaemonProcess horizon user=www-data group=www-data processes=3 threads=10
      Alias /static /usr/share/openstack-dashboard/openstack_dashboard/static/
      <Directory /usr/share/openstack-dashboard/openstack_dashboard/wsgi>
          # For Apache http server 2.2 and earlier:
          Order allow,deny
          Allow from all

          # For Apache http server 2.4 and later:
          # Require all granted
      </Directory>

   **Example After**

   .. code-block:: none

      <VirtualHost *:80>
          ServerName openstack.example.com
          <IfModule mod_rewrite.c>
              RewriteEngine On
              RewriteCond %{HTTPS} off
              RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
          </IfModule>
          <IfModule !mod_rewrite.c>
              RedirectPermanent / https://openstack.example.com
          </IfModule>
      </VirtualHost>
      <VirtualHost *:443>
          ServerName openstack.example.com

          SSLEngine On
          # Remember to replace certificates and keys with valid paths in your environment
          SSLCertificateFile /etc/apache2/SSL/openstack.example.com.crt
          SSLCACertificateFile /etc/apache2/SSL/openstack.example.com.crt
          SSLCertificateKeyFile /etc/apache2/SSL/openstack.example.com.key
          SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown

          # HTTP Strict Transport Security (HSTS) enforces that all communications
          # with a server go over SSL. This mitigates the threat from attacks such
          # as SSL-Strip which replaces links on the wire, stripping away https prefixes
          # and potentially allowing an attacker to view confidential information on the
          # wire
          Header add Strict-Transport-Security "max-age=15768000"

          WSGIScriptAlias / /usr/share/openstack-dashboard/openstack_dashboard/wsgi/django.wsgi
          WSGIDaemonProcess horizon user=www-data group=www-data processes=3 threads=10
          Alias /static /usr/share/openstack-dashboard/openstack_dashboard/static/
          <Directory /usr/share/openstack-dashboard/openstack_dashboard/wsgi>
              # For Apache http server 2.2 and earlier:
              <IfVersion < 2.4>
                  Order allow,deny
                  Allow from all
              </IfVersion>
              # For Apache http server 2.4 and later:
              <IfVersion >= 2.4>
                  # The following two lines have been added by bms for error "AH01630: client denied
                  # by server configuration:
                  # /usr/share/openstack-dashboard/openstack_dashboard/static/dashboard/cssa"
                  Options All
                  AllowOverride All
                  Require all granted
              </IfVersion>
          </Directory>
          <Directory /usr/share/openstack-dashboard/static>
              <IfVersion >= 2.4>
                  Options All
                  AllowOverride All
                  Require all granted
              </IfVersion>
          </Directory>
      </VirtualHost>

   In this configuration, the Apache HTTP Server listens on port 443 and
   redirects all non-secure requests to the HTTPS protocol. The secured
   section defines the private key, public key, and certificate to use.

#. Restart the Apache HTTP Server.

#. Restart ``memcached``.

If you try to access the Dashboard through HTTP, the browser redirects
you to the HTTPS page.

.. note::

   Configuring the Dashboard for HTTPS also requires enabling SSL for
   the noVNC proxy service. On the controller node, add the following
   additional options to the ``[DEFAULT]`` section of the
   ``/etc/nova/nova.conf`` file:

   .. code-block:: ini

      [DEFAULT]
      # ...
      ssl_only = true
      cert = /etc/apache2/SSL/openstack.example.com.crt
      key = /etc/apache2/SSL/openstack.example.com.key

   On the compute nodes, ensure the ``novncproxy_base_url`` option
   points to a URL with an HTTPS scheme:

   .. code-block:: ini

      [DEFAULT]
      # ...
      novncproxy_base_url = https://controller:6080/vnc_auto.html

@ -1,167 +0,0 @@

==============
Manage flavors
==============

In OpenStack, a flavor defines the compute, memory, and storage
capacity of a virtual server, also known as an instance. As an
administrative user, you can create, edit, and delete flavors.

As of Newton, there are no default flavors. The following table
lists the default flavors for Mitaka and earlier.

============ ========= =============== =============
Flavor       VCPUs     Disk (in GB)    RAM (in MB)
============ ========= =============== =============
m1.tiny      1         1               512
m1.small     1         20              2048
m1.medium    2         40              4096
m1.large     4         80              8192
m1.xlarge    8         160             16384
============ ========= =============== =============

Create flavors
~~~~~~~~~~~~~~

#. Log in to the Dashboard and select the :guilabel:`admin` project
   from the drop-down list.
#. In the :guilabel:`Admin` tab, open the :guilabel:`System`
   tab and click the :guilabel:`Flavors` category.
#. Click :guilabel:`Create Flavor`.
#. In the :guilabel:`Create Flavor` window, enter or select the
   parameters for the flavor in the :guilabel:`Flavor Information` tab.

   .. figure:: figures/create_flavor.png

   **Dashboard — Create Flavor**

   ========================= =======================================
   **Name**                  Enter the flavor name.
   **ID**                    Unique ID (integer or UUID) for the
                             new flavor. If specifying 'auto', a
                             UUID will be automatically generated.
   **VCPUs**                 Enter the number of virtual CPUs to
                             use.
   **RAM (MB)**              Enter the amount of RAM to use, in
                             megabytes.
   **Root Disk (GB)**        Enter the amount of disk space in
                             gigabytes to use for the root (/)
                             partition.
   **Ephemeral Disk (GB)**   Enter the amount of disk space in
                             gigabytes to use for the ephemeral
                             partition. If unspecified, the value
                             is 0 by default.

                             Ephemeral disks offer machine local
                             disk storage linked to the lifecycle
                             of a VM instance. When a VM is
                             terminated, all data on the ephemeral
                             disk is lost. Ephemeral disks are not
                             included in any snapshots.
   **Swap Disk (MB)**        Enter the amount of swap space (in
                             megabytes) to use. If unspecified,
                             the default is 0.
   **RX/TX Factor**          Optional property allows servers with
                             a different bandwidth to be created
                             with the RX/TX Factor. The default
                             value is 1. That is, the new bandwidth
                             is the same as that of the attached
                             network.
   ========================= =======================================

#. In the :guilabel:`Flavor Access` tab, you can control access to
   the flavor by moving projects from the :guilabel:`All Projects`
   column to the :guilabel:`Selected Projects` column.

   Only projects in the :guilabel:`Selected Projects` column can
   use the flavor. If there are no projects in the right column,
   all projects can use the flavor.
#. Click :guilabel:`Create Flavor`.

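The same flavor can also be created from the command line with the
``openstack`` client; the values mirror the ``m1.tiny`` defaults listed
earlier:

.. code-block:: console

   $ openstack flavor create --vcpus 1 --ram 512 --disk 1 m1.tiny
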
Update flavors
~~~~~~~~~~~~~~

#. Log in to the Dashboard and select the :guilabel:`admin` project
   from the drop-down list.
#. In the :guilabel:`Admin` tab, open the :guilabel:`System` tab
   and click the :guilabel:`Flavors` category.
#. Select the flavor that you want to edit. Click :guilabel:`Edit
   Flavor`.
#. In the :guilabel:`Edit Flavor` window, you can change the flavor
   name, VCPUs, RAM, root disk, ephemeral disk, and swap disk values.
#. Click :guilabel:`Save`.

Update Metadata
~~~~~~~~~~~~~~~

#. Log in to the Dashboard and select the :guilabel:`admin` project
   from the drop-down list.
#. In the :guilabel:`Admin` tab, open the :guilabel:`System` tab
   and click the :guilabel:`Flavors` category.
#. Select the flavor that you want to update. In the drop-down
   list, click :guilabel:`Update Metadata` or click :guilabel:`No` or
   :guilabel:`Yes` in the :guilabel:`Metadata` column.
#. In the :guilabel:`Update Flavor Metadata` window, you can choose
   metadata keys, add them to the flavor, and set their values.
#. Click :guilabel:`Save`.

**Optional metadata keys**

+-------------------------------+-------------------------------+
|                               | quota:cpu_shares              |
|                               +-------------------------------+
| **CPU limits**                | quota:cpu_period              |
|                               +-------------------------------+
|                               | quota:cpu_limit               |
|                               +-------------------------------+
|                               | quota:cpu_reservation         |
|                               +-------------------------------+
|                               | quota:cpu_quota               |
+-------------------------------+-------------------------------+
|                               | quota:disk_read_bytes_sec     |
|                               +-------------------------------+
| **Disk tuning**               | quota:disk_read_iops_sec      |
|                               +-------------------------------+
|                               | quota:disk_write_bytes_sec    |
|                               +-------------------------------+
|                               | quota:disk_write_iops_sec     |
|                               +-------------------------------+
|                               | quota:disk_total_bytes_sec    |
|                               +-------------------------------+
|                               | quota:disk_total_iops_sec     |
+-------------------------------+-------------------------------+
|                               | quota:vif_inbound_average     |
|                               +-------------------------------+
| **Bandwidth I/O**             | quota:vif_inbound_burst       |
|                               +-------------------------------+
|                               | quota:vif_inbound_peak        |
|                               +-------------------------------+
|                               | quota:vif_outbound_average    |
|                               +-------------------------------+
|                               | quota:vif_outbound_burst      |
|                               +-------------------------------+
|                               | quota:vif_outbound_peak       |
+-------------------------------+-------------------------------+
| **Watchdog behavior**         | hw:watchdog_action            |
+-------------------------------+-------------------------------+
|                               | hw_rng:allowed                |
|                               +-------------------------------+
| **Random-number generator**   | hw_rng:rate_bytes             |
|                               +-------------------------------+
|                               | hw_rng:rate_period            |
+-------------------------------+-------------------------------+

For information about supporting metadata keys, see
:ref:`compute-flavors`.

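On the command line, the same metadata is applied with
:command:`openstack flavor set`; the key and value here are only
illustrative:

.. code-block:: console

   $ openstack flavor set --property quota:disk_read_bytes_sec=10240000 m1.tiny
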
Delete flavors
~~~~~~~~~~~~~~

#. Log in to the Dashboard and select the :guilabel:`admin` project
   from the drop-down list.
#. In the :guilabel:`Admin` tab, open the :guilabel:`System` tab
   and click the :guilabel:`Flavors` category.
#. Select the flavors that you want to delete.
#. Click :guilabel:`Delete Flavors`.
#. In the :guilabel:`Confirm Delete Flavors` window, click
   :guilabel:`Delete Flavors` to confirm the deletion. You cannot
   undo this action.

@ -1,77 +0,0 @@

=================================
Create and manage host aggregates
=================================

Host aggregates enable administrative users to assign key-value pairs to
groups of machines.

Each node can have multiple aggregates and each aggregate can have
multiple key-value pairs. You can assign the same key-value pair to
multiple aggregates.

The scheduler uses this information to make scheduling decisions.
For information, see
`Scheduling <https://docs.openstack.org/ocata/config-reference/compute/schedulers.html>`__.

To create a host aggregate
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
#. Log in to the Dashboard and select the :guilabel:`admin` project
|
||||
from the drop-down list.
|
||||
|
||||
#. On the :guilabel:`Admin` tab, open the :guilabel:`System` tab and click
|
||||
the :guilabel:`Host Aggregates` category.
|
||||
|
||||
#. Click :guilabel:`Create Host Aggregate`.
|
||||
|
||||
#. In the :guilabel:`Create Host Aggregate` dialog box, enter or select the
|
||||
following values on the :guilabel:`Host Aggregate Information` tab:
|
||||
|
||||
- :guilabel:`Name`: The host aggregate name.
|
||||
|
||||
- :guilabel:`Availability Zone`: The cloud provider defines the default
|
||||
availability zone, such as ``us-west``, ``apac-south``, or
|
||||
``nova``. You can target the host aggregate, as follows:
|
||||
|
||||
- When the host aggregate is exposed as an availability zone,
|
||||
select the availability zone when you launch an instance.
|
||||
|
||||
- When the host aggregate is not exposed as an availability zone,
|
||||
select a flavor and its extra specs to target the host
|
||||
aggregate.
|
||||
|
||||
#. Assign hosts to the aggregate using the :guilabel:`Manage Hosts within
|
||||
Aggregate` tab in the same dialog box.
|
||||
|
||||
To assign a host to the aggregate, click **+** for the host. The host
|
||||
moves from the :guilabel:`All available hosts` list to the
|
||||
:guilabel:`Selected hosts` list.
|
||||
|
||||
You can add one host to one or more aggregates. To add a host to an
|
||||
existing aggregate, edit the aggregate.
|
||||
|
||||
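The same workflow is available from the command line. A minimal sketch,
assuming the ``openstack`` client is configured (the aggregate, zone,
property, and host names are illustrative):

.. code-block:: console

   $ openstack aggregate create --zone us-west rack-1
   $ openstack aggregate set --property ssd=true rack-1
   $ openstack aggregate add host rack-1 compute-01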
To manage host aggregates
~~~~~~~~~~~~~~~~~~~~~~~~~

#. Select the :guilabel:`admin` project from the drop-down list at the top
   of the page.

#. On the :guilabel:`Admin` tab, open the :guilabel:`System` tab and click
   the :guilabel:`Host Aggregates` category.

   - To edit host aggregates, select the host aggregate that you want
     to edit. Click :guilabel:`Edit Host Aggregate`.

     In the :guilabel:`Edit Host Aggregate` dialog box, you can change the
     name and availability zone for the aggregate.

   - To manage hosts, locate the host aggregate that you want to edit
     in the table. Click :guilabel:`More` and select :guilabel:`Manage Hosts`.

     In the :guilabel:`Add/Remove Hosts to Aggregate` dialog box,
     click **+** to assign a host to an aggregate. Click **-** to
     remove a host that is assigned to an aggregate.

   - To delete host aggregates, locate the host aggregate that you want
     to delete in the table. Click :guilabel:`More` and select
     :guilabel:`Delete Host Aggregate`.
@ -1,115 +0,0 @@

========================
Create and manage images
========================

As an administrative user, you can create and manage images
for the projects to which you belong. You can also create
and manage images for users in all projects to which you have
access.

To create and manage images in specified projects as an end
user, see `Upload and manage images with the Dashboard
<https://docs.openstack.org/user-guide/dashboard-manage-images.html>`_
and `Manage images with the CLI
<https://docs.openstack.org/user-guide/common/cli-manage-images.html>`_
in the OpenStack End User Guide.

To create and manage images as an administrator for other
users, use the following procedures.

Create images
~~~~~~~~~~~~~

For details about image creation, see the `Virtual Machine Image
Guide <https://docs.openstack.org/image-guide/>`_.

#. Log in to the Dashboard and select the :guilabel:`admin` project
   from the drop-down list.
#. On the :guilabel:`Admin` tab, open the :guilabel:`System` tab
   and click the :guilabel:`Images` category. The images that you
   can administer for cloud users appear on this page.
#. Click :guilabel:`Create Image`, which opens the
   :guilabel:`Create An Image` window.

   .. figure:: figures/create_image.png

      **Figure Dashboard — Create Image**

#. In the :guilabel:`Create An Image` window, enter or select the
   following values:

   +-------------------------------+---------------------------------+
   | :guilabel:`Name`              | Enter a name for the image.     |
   +-------------------------------+---------------------------------+
   | :guilabel:`Description`       | Enter a brief description of    |
   |                               | the image.                      |
   +-------------------------------+---------------------------------+
   | :guilabel:`Image Source`      | Choose the image source from    |
   |                               | the drop-down list. Your choices|
   |                               | are :guilabel:`Image Location`  |
   |                               | and :guilabel:`Image File`.     |
   +-------------------------------+---------------------------------+
   | :guilabel:`Image File` or     | Based on your selection, there  |
   | :guilabel:`Image Location`    | is an :guilabel:`Image File` or |
   |                               | :guilabel:`Image Location`      |
   |                               | field. You can include the      |
   |                               | location URL or browse for the  |
   |                               | image file on your file system  |
   |                               | and add it.                     |
   +-------------------------------+---------------------------------+
   | :guilabel:`Format`            | Select the image format.        |
   +-------------------------------+---------------------------------+
   | :guilabel:`Architecture`      | Specify the architecture. For   |
   |                               | example, ``i386`` for a 32-bit  |
   |                               | architecture or ``x86_64`` for  |
   |                               | a 64-bit architecture.          |
   +-------------------------------+---------------------------------+
   | :guilabel:`Minimum Disk (GB)` | Leave this field empty.         |
   +-------------------------------+---------------------------------+
   | :guilabel:`Minimum RAM (MB)`  | Leave this field empty.         |
   +-------------------------------+---------------------------------+
   | :guilabel:`Copy Data`         | Specify this option to copy     |
   |                               | image data to the Image service.|
   +-------------------------------+---------------------------------+
   | :guilabel:`Public`            | Select this option to make the  |
   |                               | image public to all users.      |
   +-------------------------------+---------------------------------+
   | :guilabel:`Protected`         | Select this option to ensure    |
   |                               | that only users with            |
   |                               | permissions can delete it.      |
   +-------------------------------+---------------------------------+

#. Click :guilabel:`Create Image`.

   The image is queued to be uploaded. It might take several minutes
   before the status changes from ``Queued`` to ``Active``.
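Images can also be registered from the command line. A minimal, hedged
sketch (the image name, file name, and architecture value are
illustrative):

.. code-block:: console

   $ openstack image create ubuntu-16.04 \
     --disk-format qcow2 --container-format bare \
     --property architecture=x86_64 \
     --public --file ubuntu-16.04.qcow2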
Update images
~~~~~~~~~~~~~

#. Log in to the Dashboard and select the :guilabel:`admin` project from the
   drop-down list.
#. On the :guilabel:`Admin` tab, open the :guilabel:`System` tab
   and click the :guilabel:`Images` category.
#. Select the image that you want to edit. Click :guilabel:`Edit Image`.
#. In the :guilabel:`Edit Image` window, you can change the image name.

   Select the :guilabel:`Public` check box to make the image public.
   Clear this check box to make the image private. You cannot change
   the :guilabel:`Kernel ID`, :guilabel:`Ramdisk ID`, or
   :guilabel:`Architecture` attributes for an image.
#. Click :guilabel:`Edit Image`.

Delete images
~~~~~~~~~~~~~

#. Log in to the Dashboard and select the :guilabel:`admin` project from the
   drop-down list.
#. On the :guilabel:`Admin` tab, open the :guilabel:`System` tab
   and click the :guilabel:`Images` category.
#. Select the images that you want to delete.
#. Click :guilabel:`Delete Images`.
#. In the :guilabel:`Confirm Delete Images` window, click :guilabel:`Delete
   Images` to confirm the deletion.

   You cannot undo this action.
@ -1,77 +0,0 @@

================
Manage instances
================

As an administrative user, you can manage instances for users in various
projects. You can view, terminate, edit, perform a soft or hard reboot,
create a snapshot from, and migrate instances. You can also view the
logs for instances or launch a VNC console for an instance.

For information about using the Dashboard to launch instances as an end
user, see the `OpenStack End User Guide <https://docs.openstack.org/user-guide/dashboard-launch-instances.html>`__.

Create instance snapshots
~~~~~~~~~~~~~~~~~~~~~~~~~

#. Log in to the Dashboard and select the :guilabel:`admin` project from the
   drop-down list.

#. On the :guilabel:`Admin` tab, open the :guilabel:`System` tab
   and click the :guilabel:`Instances` category.

#. Select the instance from which to create a snapshot. From the
   Actions drop-down list, select :guilabel:`Create Snapshot`.

#. In the :guilabel:`Create Snapshot` window, enter a name for the snapshot.

#. Click :guilabel:`Create Snapshot`. The Dashboard shows the instance snapshot
   in the :guilabel:`Images` category.

#. To launch an instance from the snapshot, select the snapshot and
   click :guilabel:`Launch`. For information about launching
   instances, see the
   `OpenStack End User Guide <https://docs.openstack.org/user-guide/dashboard-launch-instances.html>`__.
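An instance snapshot can also be taken from the command line. A hedged
sketch (the snapshot and instance names are illustrative):

.. code-block:: console

   $ openstack server image create --name demo-snapshot demo-instance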
Control the state of an instance
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

#. Log in to the Dashboard and select the :guilabel:`admin` project from the
   drop-down list.

#. On the :guilabel:`Admin` tab, open the :guilabel:`System` tab
   and click the :guilabel:`Instances` category.

#. Select the instance for which you want to change the state.

#. From the drop-down list in the Actions column,
   select the state.

   Depending on the current state of the instance, you can perform various
   actions on the instance. For example, pause, un-pause, suspend, resume,
   soft or hard reboot, or terminate (actions in red are dangerous).

   .. figure:: figures/change_instance_state.png
      :width: 100%

      **Figure Dashboard — Instance Actions**

Track usage
~~~~~~~~~~~

Use the :guilabel:`Overview` category to track usage of instances
for each project.

You can track costs per month by showing meters such as the number of
VCPUs, disks, RAM, and uptime of all your instances.

#. Log in to the Dashboard and select the :guilabel:`admin` project from the
   drop-down list.

#. On the :guilabel:`Admin` tab, open the :guilabel:`System` tab
   and click the :guilabel:`Overview` category.

#. Select a month and click :guilabel:`Submit` to query the instance usage for
   that month.

#. Click :guilabel:`Download CSV Summary` to download a CSV summary.
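If you prefer the command line, a hedged sketch that produces a similar
per-project usage summary (the date range is illustrative):

.. code-block:: console

   $ openstack usage list --start 2017-01-01 --end 2017-02-01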
@ -1,102 +0,0 @@

=========================
Manage projects and users
=========================

OpenStack administrators can create projects and accounts for new users
using the OpenStack Dashboard. Projects own specific resources in your
OpenStack environment. You can associate users with roles, projects, or both.

Add a new project
~~~~~~~~~~~~~~~~~

#. Log in to the OpenStack Dashboard as the Admin user.
#. Click the :guilabel:`Identity` label in the left column, and click
   :guilabel:`Projects`.
#. Click the :guilabel:`Create Project` button.
   The :guilabel:`Create Project` window opens.
#. Enter the project name and description. Leave the :guilabel:`Domain ID`
   field set at *default*.
#. Click :guilabel:`Create Project`.

.. note::

   Your new project appears in the list of projects displayed on the
   :guilabel:`Projects` page of the dashboard. Projects are listed in
   alphabetical order, and you can check the **Project ID**, **Domain
   name**, and status of the project in this section.

Delete a project
~~~~~~~~~~~~~~~~

#. Log in to the OpenStack Dashboard as the Admin user.
#. Click the :guilabel:`Identity` label in the left column, and click
   :guilabel:`Projects`.
#. Select the checkbox to the left of the project you would like to delete.
#. Click the :guilabel:`Delete Projects` button.

Update a project
~~~~~~~~~~~~~~~~

#. Log in to the OpenStack Dashboard as the Admin user.
#. Click the :guilabel:`Identity` label in the left column, and click
   :guilabel:`Projects`.
#. Locate the project you wish to update, and under the :guilabel:`Actions`
   column click the drop-down arrow next to the :guilabel:`Manage Members`
   button. The :guilabel:`Update Project` window opens.
#. Update the name of the project, enable the project, or disable the project
   as needed.

Add a new user
~~~~~~~~~~~~~~

#. Log in to the OpenStack Dashboard as the Admin user.
#. Click the :guilabel:`Identity` label in the left column, and click
   :guilabel:`Users`.
#. Click :guilabel:`Create User`.
#. Enter a :guilabel:`Domain Name`, the :guilabel:`Username`, and a
   :guilabel:`Password` for the new user. Enter an email for the new user,
   and specify which :guilabel:`Primary Project` they belong to. Leave the
   :guilabel:`Domain ID` field set at *default*. You can also enter a
   description for the new user.
#. Click the :guilabel:`Create User` button.

.. note::

   The new user appears in the list of users displayed on
   the :guilabel:`Users` page of the dashboard. You can check the
   **User Name**, **User ID**, **Domain name**, and the user status in this
   section.
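The equivalent operations are available through the ``openstack``
client. A hedged sketch (the project name, user name, and description
are illustrative):

.. code-block:: console

   $ openstack project create --description "Demo project" demo
   $ openstack user create --project demo --password-prompt alice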
Delete a user
~~~~~~~~~~~~~

#. Log in to the OpenStack Dashboard as the Admin user.
#. Click the :guilabel:`Identity` label in the left column, and click
   :guilabel:`Users`.
#. Select the checkbox to the left of the user you would like to delete.
#. Click the :guilabel:`Delete Users` button.

Update a user
~~~~~~~~~~~~~

#. Log in to the OpenStack Dashboard as the Admin user.
#. Click the :guilabel:`Identity` label in the left column, and click
   :guilabel:`Users`.
#. Locate the user you would like to update, and click the :guilabel:`Edit`
   button under the :guilabel:`Actions` column.
#. Adjust the :guilabel:`Domain Name`, :guilabel:`User Name`,
   :guilabel:`Description`, :guilabel:`Email`, and :guilabel:`Primary Project`.

Enable or disable a user
------------------------

#. Log in to the OpenStack Dashboard as the Admin user.
#. Click the :guilabel:`Identity` label in the left column, and click
   :guilabel:`Users`.
#. Locate the user you would like to update, and select the arrow to the right
   of the :guilabel:`Edit` button. This opens a drop-down menu.
#. Select :guilabel:`Disable User`.

.. note::

   To reactivate a disabled user, select :guilabel:`Enable User` from
   the drop-down menu.
@ -1,10 +0,0 @@

====================
View cloud resources
====================

.. toctree::
   :maxdepth: 2

   dashboard-manage-services.rst
   dashboard-view-cloud-resources.rst
@ -1,37 +0,0 @@

=========================
View services information
=========================

As an administrative user, you can view information for OpenStack services.

#. Log in to the Dashboard and select the
   :guilabel:`admin` project from the drop-down list.

#. On the :guilabel:`Admin` tab, open the :guilabel:`System` tab
   and click the :guilabel:`System Information` category.

   View the following information on these tabs:

   * :guilabel:`Services`:
     Displays the internal name and the public OpenStack name
     for each service, the host on which the service runs,
     and whether or not the service is enabled.

   * :guilabel:`Compute Services`:
     Displays information specific to the Compute service. Both host
     and zone are listed for each service, as well as its
     activation status.

   * :guilabel:`Block Storage Services`:
     Displays information specific to the Block Storage service. Both host
     and zone are listed for each service, as well as its
     activation status.

   * :guilabel:`Network Agents`:
     Displays the network agents active within the cluster, such as L3 and
     DHCP agents, and the status of each agent.

   * :guilabel:`Orchestration Services`:
     Displays information specific to the Orchestration service. Name,
     engine ID, host, and topic are listed for each service, as well as its
     activation status.
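A similar view is available from the command line. A hedged sketch,
assuming the respective service clients are configured:

.. code-block:: console

   $ openstack compute service list
   $ openstack volume service list
   $ openstack network agent list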
@ -1,149 +0,0 @@

=============================
Manage shares and share types
=============================

Shares are file storage that instances can access. Users can
allow or deny a running instance to have access to a share at any time.
For information about using the Dashboard to create and manage shares as
an end user, see the
`OpenStack End User Guide <https://docs.openstack.org/user-guide/dashboard-manage-shares.html>`_.

As an administrative user, you can manage shares and share types for users
in various projects. You can create and delete share types, and view
or delete shares.

.. _create-a-share-type:

Create a share type
~~~~~~~~~~~~~~~~~~~

#. Log in to the Dashboard and choose the :guilabel:`admin`
   project from the drop-down list.

#. On the :guilabel:`Admin` tab, open the :guilabel:`System` tab
   and click the :guilabel:`Shares` category.

#. Click the :guilabel:`Share Types` tab, and click the
   :guilabel:`Create Share Type` button. In the
   :guilabel:`Create Share Type` window, enter or select the
   following values.

   :guilabel:`Name`: Enter a name for the share type.

   :guilabel:`Driver handles share servers`: Choose True or False.

   :guilabel:`Extra specs`: To add extra specs, use key=value.

#. Click the :guilabel:`Create Share Type` button to confirm your changes.

.. note::

   A message indicates whether the action succeeded.
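With the ``manila`` client, a roughly equivalent sketch (the type name
is illustrative; the second positional argument sets
``driver_handles_share_servers``):

.. code-block:: console

   $ manila type-create my_share_type True
   $ manila type-key my_share_type set snapshot_support=True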
Update share type
~~~~~~~~~~~~~~~~~

#. Log in to the Dashboard and choose the :guilabel:`admin` project from
   the drop-down list.

#. On the :guilabel:`Admin` tab, open the :guilabel:`System` tab
   and click the :guilabel:`Shares` category.

#. Click the :guilabel:`Share Types` tab, and select the share type
   that you want to update.

#. Select :guilabel:`Update Share Type` from Actions.

#. In the :guilabel:`Update Share Type` window, update extra specs.

   :guilabel:`Extra specs`: To add extra specs, use key=value.
   To unset extra specs, use key.

#. Click the :guilabel:`Update Share Type` button to confirm your changes.

.. note::

   A message indicates whether the action succeeded.

Delete share types
~~~~~~~~~~~~~~~~~~

When you delete a share type, shares of that type are not deleted.

#. Log in to the Dashboard and choose the :guilabel:`admin` project from
   the drop-down list.

#. On the :guilabel:`Admin` tab, open the :guilabel:`System` tab
   and click the :guilabel:`Shares` category.

#. Click the :guilabel:`Share Types` tab, and select the share type
   or types that you want to delete.

#. Click the :guilabel:`Delete Share Types` button.

#. In the :guilabel:`Confirm Delete Share Types` window, click the
   :guilabel:`Delete Share Types` button to confirm the action.

.. note::

   A message indicates whether the action succeeded.

Delete shares
~~~~~~~~~~~~~

#. Log in to the Dashboard and choose the :guilabel:`admin` project
   from the drop-down list.

#. On the :guilabel:`Admin` tab, open the :guilabel:`System` tab
   and click the :guilabel:`Shares` category.

#. Select the share or shares that you want to delete.

#. Click the :guilabel:`Delete Shares` button.

#. In the :guilabel:`Confirm Delete Shares` window, click the
   :guilabel:`Delete Shares` button to confirm the action.

.. note::

   A message indicates whether the action succeeded.

Delete share server
~~~~~~~~~~~~~~~~~~~

#. Log in to the Dashboard and choose the :guilabel:`admin` project
   from the drop-down list.

#. On the :guilabel:`Admin` tab, open the :guilabel:`System` tab
   and click the :guilabel:`Share Servers` category.

#. Select the share server that you want to delete.

#. Click the :guilabel:`Delete Share Server` button.

#. In the :guilabel:`Confirm Delete Share Server` window, click the
   :guilabel:`Delete Share Server` button to confirm the action.

.. note::

   A message indicates whether the action succeeded.

Delete share networks
~~~~~~~~~~~~~~~~~~~~~

#. Log in to the Dashboard and choose the :guilabel:`admin` project
   from the drop-down list.

#. On the :guilabel:`Admin` tab, open the :guilabel:`System` tab
   and click the :guilabel:`Share Networks` category.

#. Select the share network or share networks that you want to delete.

#. Click the :guilabel:`Delete Share Networks` button.

#. In the :guilabel:`Confirm Delete Share Networks` window, click the
   :guilabel:`Delete Share Networks` button to confirm the action.

.. note::

   A message indicates whether the action succeeded.
@ -1,168 +0,0 @@

===============================
Manage volumes and volume types
===============================

Volumes are the Block Storage devices that you attach to instances to enable
persistent storage. Users can attach a volume to a running instance or detach
a volume and attach it to another instance at any time. For information about
using the dashboard to create and manage volumes as an end user, see the
`OpenStack End User Guide <https://docs.openstack.org/user-guide/dashboard-manage-volumes.html>`_.

As an administrative user, you can manage volumes and volume types for users
in various projects. You can create and delete volume types, and you can view
and delete volumes. Note that a volume can be encrypted by using the steps
outlined below.

.. _create-a-volume-type:

Create a volume type
~~~~~~~~~~~~~~~~~~~~

#. Log in to the dashboard and select the :guilabel:`admin`
   project from the drop-down list.

#. On the :guilabel:`Admin` tab, open the :guilabel:`System` tab
   and click the :guilabel:`Volumes` category.

#. Click the :guilabel:`Volume Types` tab, and click the
   :guilabel:`Create Volume Type` button. In the
   :guilabel:`Create Volume Type` window, enter a name for the volume type.

#. Click the :guilabel:`Create Volume Type` button to confirm your changes.

.. note::

   A message indicates whether the action succeeded.

Create an encrypted volume type
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

#. Create a volume type using the steps above for :ref:`create-a-volume-type`.

#. Click :guilabel:`Create Encryption` in the Actions column of the newly
   created volume type.

#. Configure the encrypted volume by setting the parameters below from
   available options (see table):

   Provider
      Specifies the class responsible for configuring the encryption.
   Control Location
      Specifies whether the encryption is from the front end (nova) or the
      back end (cinder).
   Cipher
      Specifies the encryption algorithm.
   Key Size (bits)
      Specifies the encryption key size.

#. Click :guilabel:`Create Volume Type Encryption`.

.. figure:: figures/create_volume_type_encryption.png

   **Encryption Options**

The table below provides a few alternatives available for creating encrypted
volumes.

+--------------------+-----------------------+----------------------------+
| Encryption         | Parameter             | Comments                   |
| parameters         | options               |                            |
+====================+=======================+============================+
| Provider           |nova.volume.encryptors.|Allows easier import and    |
|                    |luks.LuksEncryptor     |migration of imported       |
|                    |(Recommended)          |encrypted volumes, and      |
|                    |                       |allows access key to be     |
|                    |                       |changed without             |
|                    |                       |re-encrypting the volume    |
+                    +-----------------------+----------------------------+
|                    |nova.volume.encryptors.|Less disk overhead than     |
|                    |cryptsetup.            |LUKS                        |
|                    |CryptsetupEncryptor    |                            |
+--------------------+-----------------------+----------------------------+
| Control Location   | front-end             |The encryption occurs within|
|                    | (Recommended)         |nova so that the data       |
|                    |                       |transmitted over the network|
|                    |                       |is encrypted                |
+                    +-----------------------+----------------------------+
|                    | back-end              |This could be selected if a |
|                    |                       |cinder plug-in supporting   |
|                    |                       |an encrypted back-end block |
|                    |                       |storage device becomes      |
|                    |                       |available in the future.    |
|                    |                       |TLS or other network        |
|                    |                       |encryption would also be    |
|                    |                       |needed to protect data as it|
|                    |                       |traverses the network       |
+--------------------+-----------------------+----------------------------+
| Cipher             | aes-xts-plain64       |See the NIST reference below|
|                    | (Recommended)         |for advantages*             |
+                    +-----------------------+----------------------------+
|                    | aes-cbc-essiv         |Note: On the command line,  |
|                    |                       |run ``cryptsetup benchmark``|
|                    |                       |for additional options      |
+--------------------+-----------------------+----------------------------+
| Key Size (bits)    | 512 (Recommended for  |Using this selection for    |
|                    | aes-xts-plain64. 256  |aes-xts, the underlying key |
|                    | should be used for    |size would only be 256-bits*|
|                    | aes-cbc-essiv)        |                            |
+                    +-----------------------+----------------------------+
|                    | 256                   |Using this selection for    |
|                    |                       |aes-xts, the underlying key |
|                    |                       |size would only be 128-bits*|
+--------------------+-----------------------+----------------------------+

`*` Source `NIST SP 800-38E <http://nvlpubs.nist.gov/nistpubs/Legacy/SP/nistspecialpublication800-38e.pdf>`_

.. note::

   To see further information and CLI instructions, see
   `Create an encrypted volume type
   <https://docs.openstack.org/ocata/config-reference/block-storage/volume-encryption.html>`_
   in the OpenStack Configuration Reference.
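As a hedged command-line equivalent using the ``cinder`` client (the
volume type name ``LUKS`` is illustrative):

.. code-block:: console

   $ cinder type-create LUKS
   $ cinder encryption-type-create --cipher aes-xts-plain64 \
     --key_size 512 --control_location front-end \
     LUKS nova.volume.encryptors.luks.LuksEncryptor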
Delete volume types
~~~~~~~~~~~~~~~~~~~

When you delete a volume type, volumes of that type are not deleted.

#. Log in to the dashboard and select the :guilabel:`admin` project from
   the drop-down list.

#. On the :guilabel:`Admin` tab, open the :guilabel:`System` tab
   and click the :guilabel:`Volumes` category.

#. Click the :guilabel:`Volume Types` tab, and select the volume type
   or types that you want to delete.

#. Click the :guilabel:`Delete Volume Types` button.

#. In the :guilabel:`Confirm Delete Volume Types` window, click the
   :guilabel:`Delete Volume Types` button to confirm the action.

.. note::

   A message indicates whether the action succeeded.

Delete volumes
~~~~~~~~~~~~~~

When you delete an instance, the data of its attached volumes is not
destroyed.

#. Log in to the dashboard and select the :guilabel:`admin` project
   from the drop-down list.

#. On the :guilabel:`Admin` tab, open the :guilabel:`System` tab
   and click the :guilabel:`Volumes` category.

#. Select the volume or volumes that you want to delete.

#. Click the :guilabel:`Delete Volumes` button.

#. In the :guilabel:`Confirm Delete Volumes` window, click the
   :guilabel:`Delete Volumes` button to confirm the action.

.. note::

   A message indicates whether the action succeeded.
@ -1,216 +0,0 @@

========================================
Set up session storage for the Dashboard
========================================

The Dashboard uses the `Django sessions
framework <https://docs.djangoproject.com/en/dev/topics/http/sessions/>`__
to handle user session data. However, you can use any available session
back end. You customize the session back end through the
``SESSION_ENGINE`` setting in your ``local_settings.py`` file.

After you architect and implement the core OpenStack services, the other
required services, and the Dashboard service steps below, users and
administrators can use the OpenStack Dashboard. Refer to the `OpenStack
Dashboard <https://docs.openstack.org/user-guide/dashboard.html>`__
chapter of the OpenStack End User Guide for
further instructions on logging in to the Dashboard.

The following sections describe the pros and cons of each option as it
pertains to deploying the Dashboard.

Local memory cache
~~~~~~~~~~~~~~~~~~

Local memory storage is the quickest and easiest session back end to set
up, as it has no external dependencies whatsoever. It has the following
significant drawbacks:

- No shared storage across processes or workers.
- No persistence after a process terminates.

The local memory back end is enabled as the default for Horizon solely
because it has no dependencies. It is not recommended for production
use, or even for serious development work.

.. code-block:: python

   SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
   CACHES = {
       'default': {
           'BACKEND': 'django.core.cache.backends.locmem.LocMemCache'
       }
   }

You can use applications such as ``Memcached`` or ``Redis`` for external
caching. These applications offer persistence and shared storage and are
useful for small-scale deployments and development.

Memcached
---------

Memcached is a high-performance, distributed memory object caching
system that provides an in-memory key-value store for small chunks of
arbitrary data.

Requirements:

- Memcached service running and accessible.
- Python module ``python-memcached`` installed.

.. code-block:: python

   SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
   CACHES = {
       'default': {
           'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
           'LOCATION': 'my_memcached_host:11211',
       }
   }

Redis
-----

Redis is an open source, BSD licensed, advanced key-value store. It is
often referred to as a data structure server.

Requirements:

- Redis service running and accessible.
- Python modules ``redis`` and ``django-redis`` installed.

.. code-block:: python

   SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
   CACHES = {
       "default": {
           "BACKEND": "redis_cache.cache.RedisCache",
           "LOCATION": "127.0.0.1:6379:1",
           "OPTIONS": {
               "CLIENT_CLASS": "redis_cache.client.DefaultClient",
           }
       }
   }

Initialize and configure the database
-------------------------------------

Database-backed sessions are scalable, persistent, and can be made
highly concurrent and highly available.

However, database-backed sessions are one of the slower session storages
and incur a high overhead under heavy usage. Proper configuration of
your database deployment can also be a substantial undertaking and is
far beyond the scope of this documentation.

#. Start the MySQL command-line client.

   .. code-block:: console

      $ mysql -u root -p

#. Enter the MySQL root user's password when prompted.
#. To configure the MySQL database, create the dash database.

   .. code-block:: console

      mysql> CREATE DATABASE dash;

#. Create a MySQL user for the newly created dash database that has full
   control of the database. Replace DASH_DBPASS with a password for the
   new user.

   .. code-block:: console

      mysql> GRANT ALL PRIVILEGES ON dash.* TO 'dash'@'%' IDENTIFIED BY 'DASH_DBPASS';
      mysql> GRANT ALL PRIVILEGES ON dash.* TO 'dash'@'localhost' IDENTIFIED BY 'DASH_DBPASS';

#. Enter ``quit`` at the ``mysql>`` prompt to exit MySQL.

#. In the ``local_settings.py`` file, change these options:

   .. code-block:: python

      SESSION_ENGINE = 'django.contrib.sessions.backends.db'
      DATABASES = {
          'default': {
              # Database configuration here
              'ENGINE': 'django.db.backends.mysql',
              'NAME': 'dash',
              'USER': 'dash',
              'PASSWORD': 'DASH_DBPASS',
              'HOST': 'localhost',
              'OPTIONS': {'charset': 'utf8'},
          }
      }

#. After configuring the ``local_settings.py`` file as shown, you can run the
   :command:`manage.py syncdb` command to populate this newly created
   database.

   .. code-block:: console

      # /usr/share/openstack-dashboard/manage.py syncdb

#. The following output is returned:

   .. code-block:: console

      Installing custom SQL ...
      Installing indexes ...
      DEBUG:django.db.backends:(0.008) CREATE INDEX `django_session_c25c2c28` ON `django_session` (`expire_date`);; args=()
      No fixtures found.

#. To avoid a warning when you restart Apache on Ubuntu, create a
   ``blackhole`` directory in the Dashboard directory, as follows.

   .. code-block:: console

      # mkdir -p /var/lib/dash/.blackhole

#. Restart the Apache service.

#. On Ubuntu, restart the ``nova-api`` service to ensure that the API server
   can connect to the Dashboard without error.

   .. code-block:: console

      # service nova-api restart

Cached database
~~~~~~~~~~~~~~~

To mitigate the performance issues of database queries, you can use the
Django ``cached_db`` session back end, which utilizes both your database
and caching infrastructure to perform write-through caching and
efficient retrieval.

Enable this hybrid setting by configuring both your database and cache,
as discussed previously. Then, set the following value:

.. code-block:: python

   SESSION_ENGINE = "django.contrib.sessions.backends.cached_db"

Cookies
~~~~~~~

If you use Django 1.4 or later, the ``signed_cookies`` back end avoids
server load and scaling problems.

This back end stores session data in a cookie, which is stored by the
user's browser. The back end uses a cryptographic signing technique to
ensure session data is not tampered with during transport. This is not
the same as encryption; session data is still readable by an attacker.

The pros of this engine are that it requires no additional dependencies
or infrastructure overhead, and it scales indefinitely as long as the
quantity of session data being stored fits into a normal cookie.

The biggest downside is that it places session data into storage on the
user's machine and transports it over the wire. It also limits the
quantity of session data that can be stored.

See the Django `cookie-based
sessions <https://docs.djangoproject.com/en/dev/topics/http/sessions/#using-cookie-based-sessions>`__
documentation.
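For completeness, a minimal configuration sketch for this back end,
using the standard Django engine path:

.. code-block:: python

   SESSION_ENGINE = 'django.contrib.sessions.backends.signed_cookies'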
@ -1,117 +0,0 @@

.. _dashboard-set-quotas:

======================
View and manage quotas
======================

.. |nbsp| unicode:: 0xA0 .. nbsp
   :trim:

To prevent system capacities from being exhausted without notification,
you can set up quotas. Quotas are operational limits. For example, the
number of gigabytes allowed for each project can be controlled so that
cloud resources are optimized. Quotas can be enforced at both the project
and the project-user level.

Typically, you change quotas when a project needs more than ten
volumes or 1 |nbsp| TB on a compute node.

Using the Dashboard, you can view default Compute and Block Storage
quotas for new projects, as well as update quotas for existing projects.

.. note::

   Using the command-line interface, you can manage quotas for the
   OpenStack Compute service, the OpenStack Block Storage service, and
   the OpenStack Networking service (see `OpenStack Administrator Guide
   <https://docs.openstack.org/admin-guide/cli-set-quotas.html>`_).
   Additionally, you can update Compute service quotas for
   project users.

The following table describes the Compute and Block Storage service quotas:

.. _compute_quotas:

**Quota Descriptions**

+--------------------+------------------------------------+---------------+
| Quota Name         | Defines the number of              | Service       |
+====================+====================================+===============+
| Gigabytes          | Volume gigabytes allowed for       | Block Storage |
|                    | each project.                      |               |
+--------------------+------------------------------------+---------------+
| Instances          | Instances allowed for each         | Compute       |
|                    | project.                           |               |
+--------------------+------------------------------------+---------------+
| Injected Files     | Injected files allowed for each    | Compute       |
|                    | project.                           |               |
+--------------------+------------------------------------+---------------+
| Injected File      | Content bytes allowed for each     | Compute       |
| Content Bytes      | injected file.                     |               |
+--------------------+------------------------------------+---------------+
| Keypairs           | Number of keypairs.                | Compute       |
+--------------------+------------------------------------+---------------+
| Metadata Items     | Metadata items allowed for each    | Compute       |
|                    | instance.                          |               |
+--------------------+------------------------------------+---------------+
| RAM (MB)           | RAM megabytes allowed for          | Compute       |
|                    | each instance.                     |               |
+--------------------+------------------------------------+---------------+
| Security Groups    | Security groups allowed for each   | Compute       |
|                    | project.                           |               |
+--------------------+------------------------------------+---------------+
| Security Group     | Security group rules allowed for   | Compute       |
| Rules              | each project.                      |               |
+--------------------+------------------------------------+---------------+
| Snapshots          | Volume snapshots allowed for       | Block Storage |
|                    | each project.                      |               |
+--------------------+------------------------------------+---------------+
| VCPUs              | Instance cores allowed for each    | Compute       |
|                    | project.                           |               |
+--------------------+------------------------------------+---------------+
| Volumes            | Volumes allowed for each           | Block Storage |
|                    | project.                           |               |
+--------------------+------------------------------------+---------------+

.. _dashboard_view_quotas_procedure:

View default project quotas
~~~~~~~~~~~~~~~~~~~~~~~~~~~

#. Log in to the dashboard and select the :guilabel:`admin` project
   from the drop-down list.

#. On the :guilabel:`Admin` tab, open the :guilabel:`System` tab
   and click the :guilabel:`Defaults` category.

#. The default quota values are displayed.

.. note::

   You can sort the table by clicking on either the
   :guilabel:`Quota Name` or :guilabel:`Limit` column headers.

.. _dashboard_update_project_quotas:

Update project quotas
~~~~~~~~~~~~~~~~~~~~~

#. Log in to the dashboard and select the :guilabel:`admin` project
   from the drop-down list.

#. On the :guilabel:`Admin` tab, open the :guilabel:`System` tab
   and click the :guilabel:`Defaults` category.

#. Click the :guilabel:`Update Defaults` button.

#. In the :guilabel:`Update Default Quotas` window,
   you can edit the default quota values.

#. Click the :guilabel:`Update Defaults` button.

.. note::

   The dashboard does not show all possible project quotas.
   To view and update the quotas for a service, use its
   command-line client. See `OpenStack Administrator Guide
   <https://docs.openstack.org/admin-guide/cli-set-quotas.html>`_.
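A hedged CLI sketch of the equivalent workflow (the project name and
limits are illustrative):

.. code-block:: console

   $ openstack quota show demo
   $ openstack quota set --instances 20 --cores 40 demo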
@ -1,41 +0,0 @@

===========================
View cloud usage statistics
===========================

The Telemetry service provides user-level usage data for
OpenStack-based clouds, which can be used for customer billing, system
monitoring, or alerts. Data can be collected by notifications sent by
existing OpenStack components (for example, usage events emitted from
Compute) or by polling the infrastructure (for example, libvirt).

.. note::

   Only administrators can view metering statistics on the dashboard.
   The Telemetry service must be set up and administered through the
   :command:`ceilometer` command-line interface (CLI).

   For basic administration information, refer to the `Measure Cloud
   Resources <https://docs.openstack.org/user-guide/cli-ceilometer.html>`_
   chapter in the OpenStack End User Guide.
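For example, a hedged sketch of querying hourly CPU utilization
statistics with the ceilometer client (the meter name and period are
illustrative):

.. code-block:: console

   $ ceilometer statistics -m cpu_util --period 3600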
.. _dashboard-view-resource-stats:

View resource statistics
~~~~~~~~~~~~~~~~~~~~~~~~

#. Log in to the dashboard and select the :guilabel:`admin` project
   from the drop-down list.

#. On the :guilabel:`Admin` tab, click the :guilabel:`Resource Usage` category.

#. Click one of the following tabs:

   * :guilabel:`Usage Report`: View a usage report per project
     by specifying the time period (or use a calendar to define
     a date range).

   * :guilabel:`Stats`: View a multi-series line chart with
     user-defined meters. You group by project, define the value type
     (min, max, avg, or sum), and specify the time period (or use
     a calendar to define a date range).
@ -1,38 +0,0 @@

=========
Dashboard
=========

The OpenStack Dashboard is a web-based interface that allows you to
manage OpenStack resources and services. The Dashboard allows you to
interact with the OpenStack Compute cloud controller using the OpenStack
APIs. For more information about installing and configuring the
Dashboard, see the `Installation Tutorials and Guides
<https://docs.openstack.org/project-install-guide/ocata/>`__
for your operating system.

.. toctree::
   :maxdepth: 2

   dashboard-customize-configure.rst
   dashboard-sessions.rst
   dashboard-manage-images.rst
   dashboard-admin-manage-roles.rst
   dashboard-manage-projects-and-users.rst
   dashboard-manage-instances.rst
   dashboard-manage-flavors.rst
   dashboard-manage-volumes.rst
   dashboard-manage-shares.rst
   dashboard-set-quotas.rst
   dashboard-manage-resources.rst
   dashboard-manage-host-aggregates.rst
   dashboard-admin-manage-stacks.rst

- To deploy the dashboard, see the `OpenStack dashboard documentation
  <https://docs.openstack.org/developer/horizon/topics/deployment.html>`__.
- To launch instances with the dashboard as an end user, see the
  `Launch and manage instances
  <https://docs.openstack.org/user-guide/dashboard-launch-instances.html>`__
  section of the OpenStack End User Guide.
- To create and manage ports, see the `Create and manage networks
  <https://docs.openstack.org/user-guide/dashboard-create-networks.html#create-a-port>`__
  section of the OpenStack End User Guide.
@ -1,495 +0,0 @@
|
||||
.. _database:
|
||||
|
||||
========
|
||||
Database
|
||||
========
|
||||
|
||||
The Database service provides database management features.
|
||||
|
||||
Introduction
|
||||
~~~~~~~~~~~~
|
||||
|
||||
The Database service provides scalable and reliable cloud
|
||||
provisioning functionality for both relational and non-relational
|
||||
database engines. Users can quickly and easily use database features
|
||||
without the burden of handling complex administrative tasks. Cloud
|
||||
users and database administrators can provision and manage multiple
|
||||
database instances as needed.
|
||||
|
||||
The Database service provides resource isolation at high performance
|
||||
levels, and automates complex administrative tasks such as deployment,
|
||||
configuration, patching, backups, restores, and monitoring.
|
||||
|
||||
You can modify various cluster characteristics by editing the
|
||||
``/etc/trove/trove.conf`` file. A comprehensive list of the Database
|
||||
service configuration options is described in the `Database service
|
||||
<https://docs.openstack.org/ocata/config-reference/database.html>`_
|
||||
chapter in the *Configuration Reference*.
|
||||
|
||||
Create a data store
|
||||
~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
An administrative user can create data stores for a variety of
|
||||
databases.
|
||||
|
||||
This section assumes you do not yet have a MySQL data store, and shows
|
||||
you how to create a MySQL data store and populate it with a MySQL 5.5
|
||||
data store version.
|
||||
|
||||
|
||||
**To create a data store**
|
||||
|
||||
#. **Create a trove image**
|
||||
|
||||
Create an image for the type of database you want to use, for
|
||||
example, MySQL, MongoDB, Cassandra.
|
||||
|
||||
This image must have the trove guest agent installed, and it must
|
||||
have the ``trove-guestagent.conf`` file configured to connect to
|
||||
your OpenStack environment. To configure ``trove-guestagent.conf``,
|
||||
add the following lines to ``trove-guestagent.conf`` on the guest
|
||||
instance you are using to build your image:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
rabbit_host = controller
|
||||
rabbit_password = RABBIT_PASS
|
||||
nova_proxy_admin_user = admin
|
||||
nova_proxy_admin_pass = ADMIN_PASS
|
||||
nova_proxy_admin_tenant_name = service
|
||||
trove_auth_url = http://controller:35357/v2.0
|
||||
|
||||
This example assumes you have created a MySQL 5.5 image called
|
||||
``mysql-5.5.qcow2``.
|
||||
|
||||
.. important::
|
||||
|
||||
If you have a guest image that was created with an OpenStack version
|
||||
before Kilo, modify the guest agent init script for the guest image to
|
||||
read the configuration files from the directory ``/etc/trove/conf.d``.
|
||||
|
||||
For a backwards compatibility with pre-Kilo guest instances, set the
|
||||
database service configuration options ``injected_config_location`` to
|
||||
``/etc/trove`` and ``guest_info`` to ``/etc/guest_info``.
|
||||
|
||||
#. **Register image with Image service**
|
||||
|
||||
You need to register your guest image with the Image service.
|
||||
|
||||
In this example, you use the :command:`openstack image create`
|
||||
command to register a ``mysql-5.5.qcow2`` image.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack image create mysql-5.5 --disk-format qcow2 --container-format bare --public < mysql-5.5.qcow2
|
||||
+------------------+------------------------------------------------------+
|
||||
| Field | Value |
|
||||
+------------------+------------------------------------------------------+
|
||||
| checksum | 133eae9fb1c98f45894a4e60d8736619 |
|
||||
| container_format | bare |
|
||||
| created_at | 2016-12-21T12:10:02Z |
|
||||
| disk_format | qcow2 |
|
||||
| file | /v2/images/d1afb4f0-2360-4400-8d97-846b1ab6af52/file |
|
||||
| id | d1afb4f0-2360-4400-8d97-846b1ab6af52 |
|
||||
| min_disk | 0 |
|
||||
| min_ram | 0 |
|
||||
| name | mysql-5.5 |
|
||||
| owner | 5669caad86a04256994cdf755df4d3c1 |
|
||||
| protected | False |
|
||||
| schema | /v2/schemas/image |
|
||||
| size | 13200896 |
|
||||
| status | active |
|
||||
| tags | |
|
||||
| updated_at | 2016-12-21T12:10:03Z |
|
||||
| virtual_size | None |
|
||||
| visibility | public |
|
||||
+------------------+------------------------------------------------------+
|
||||
|
||||
#. **Create the data store**
|
||||
|
||||
Create the data store that will house the new image. To do this, use
|
||||
the :command:`trove-manage` :command:`datastore_update` command.
|
||||
|
||||
This example uses the following arguments:
|
||||
|
||||
.. list-table::
|
||||
:header-rows: 1
|
||||
:widths: 20 20 20
|
||||
|
||||
* - Argument
|
||||
- Description
|
||||
- In this example:
|
||||
* - config file
|
||||
- The configuration file to use.
|
||||
- ``--config-file=/etc/trove/trove.conf``
|
||||
* - name
|
||||
- Name you want to use for this data store.
|
||||
- ``mysql``
|
||||
* - default version
|
||||
- You can attach multiple versions/images to a data store. For
|
||||
example, you might have a MySQL 5.5 version and a MySQL 5.6
|
||||
version. You can designate one version as the default, which
|
||||
the system uses if a user does not explicitly request a
|
||||
specific version.
|
||||
- ``""``
|
||||
|
||||
At this point, you do not yet have a default version, so pass
|
||||
in an empty string.
|
||||
|
||||
|
|
||||
|
||||
Example:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ trove-manage --config-file=/etc/trove/trove.conf datastore_update mysql ""
|
||||
|
||||
#. **Add a version to the new data store**
|
||||
|
||||
Now that you have a MySQL data store, you can add a version to it,
|
||||
using the :command:`trove-manage` :command:`datastore_version_update`
|
||||
command. The version indicates which guest image to use.
|
||||
|
||||
This example uses the following arguments:
|
||||
|
||||
.. list-table::
|
||||
:header-rows: 1
|
||||
:widths: 20 20 20
|
||||
|
||||
* - Argument
|
||||
- Description
|
||||
- In this example:
|
||||
|
||||
* - config file
|
||||
- The configuration file to use.
|
||||
- ``--config-file=/etc/trove/trove.conf``
|
||||
|
||||
* - data store
|
||||
- The name of the data store you just created via
|
||||
``trove-manage`` :command:`datastore_update`.
|
||||
- ``mysql``
|
||||
|
||||
* - version name
|
||||
- The name of the version you are adding to the data store.
|
||||
- ``mysql-5.5``
|
||||
|
||||
* - data store manager
|
||||
- Which data store manager to use for this version. Typically,
|
||||
the data store manager is identified by one of the following
|
||||
strings, depending on the database:
|
||||
|
||||
* cassandra
|
||||
* couchbase
|
||||
* couchdb
|
||||
* db2
|
||||
* mariadb
|
||||
* mongodb
|
||||
* mysql
|
||||
* percona
|
||||
* postgresql
|
||||
* pxc
|
||||
* redis
|
||||
* vertica
|
||||
- ``mysql``
|
||||
|
||||
* - glance ID
|
||||
- The ID of the guest image you just added to the Image
|
||||
service. You can get this ID by using the glance
|
||||
:command:`image-show` IMAGE_NAME command.
|
||||
- bb75f870-0c33-4907-8467-1367f8cb15b6
|
||||
|
||||
* - packages
|
||||
- If you want to put additional packages on each guest that
|
||||
you create with this data store version, you can list the
|
||||
package names here.
|
||||
- ``""``
|
||||
|
||||
In this example, the guest image already contains all the
|
||||
required packages, so leave this argument empty.
|
||||
|
||||
* - active
|
||||
- Set this to either 1 or 0:
|
||||
* ``1`` = active
|
||||
* ``0`` = disabled
|
||||
- 1
|
||||
|
||||
|
|
||||
|
||||
Example:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ trove-manage --config-file=/etc/trove/trove.conf datastore_version_update mysql mysql-5.5 mysql GLANCE_ID "" 1
|
||||
|
||||
**Optional.** Set your new version as the default version. To do
|
||||
this, use the :command:`trove-manage` :command:`datastore_update`
|
||||
command again, this time specifying the version you just created.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ trove-manage --config-file=/etc/trove/trove.conf datastore_update mysql mysql-5.5
|
||||
|
||||
#. **Load validation rules for configuration groups**
|
||||
|
||||
.. note::
|
||||
|
||||
**Applies only to MySQL and Percona data stores**
|
||||
|
||||
* If you just created a MySQL or Percona data store, then you need
|
||||
to load the appropriate validation rules, as described in this
|
||||
step.
|
||||
* If you just created a different data store, skip this step.
|
||||
|
||||
**Background.** You can manage database configuration tasks by using
|
||||
configuration groups. Configuration groups let you set configuration
|
||||
parameters, in bulk, on one or more databases.
|
||||
|
||||
When you set up a configuration group using the trove
|
||||
:command:`configuration-create` command, this command compares the configuration
|
||||
values you are setting against a list of valid configuration values
|
||||
that are stored in the ``validation-rules.json`` file.
|
||||
|
||||
.. list-table::
|
||||
:header-rows: 1
|
||||
:widths: 20 20 20
|
||||
|
||||
* - Operating System
|
||||
- Location of :file:`validation-rules.json`
|
||||
- Notes
|
||||
|
||||
* - Ubuntu 14.04
|
||||
- :file:`/usr/lib/python2.7/dist-packages/trove/templates/DATASTORE_NAME`
|
||||
- DATASTORE_NAME is the name of either the MySQL data store or
|
||||
the Percona data store. This is typically either ``mysql``
|
||||
or ``percona``.
|
||||
|
||||
* - RHEL 7, CentOS 7, Fedora 20, and Fedora 21
|
||||
- :file:`/usr/lib/python2.7/site-packages/trove/templates/DATASTORE_NAME`
|
||||
- DATASTORE_NAME is the name of either the MySQL data store or
|
||||
the Percona data store. This is typically either ``mysql`` or ``percona``.
|
||||
|
||||
|
|
||||
|
||||
Therefore, as part of creating a data store, you need to load the
|
||||
``validation-rules.json`` file, using the :command:`trove-manage`
|
||||
:command:`db_load_datastore_config_parameters` command. This command
|
||||
takes the following arguments:
|
||||
|
||||
* Data store name
|
||||
* Data store version
|
||||
* Full path to the ``validation-rules.json`` file
|
||||
|
||||
|
|
||||
|
||||
This example loads the ``validation-rules.json`` file for a MySQL
|
||||
database on Ubuntu 14.04:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ trove-manage db_load_datastore_config_parameters mysql mysql-5.5 /usr/lib/python2.7/dist-packages/trove/templates/mysql/validation-rules.json
|
||||
|
||||
#. **Validate data store**
|
||||
|
||||
To validate your new data store and version, start by listing the
|
||||
data stores on your system:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ trove datastore-list
|
||||
+--------------------------------------+--------------+
|
||||
| id | name |
|
||||
+--------------------------------------+--------------+
|
||||
| 10000000-0000-0000-0000-000000000001 | Legacy MySQL |
|
||||
| e5dc1da3-f080-4589-a4c2-eff7928f969a | mysql |
|
||||
+--------------------------------------+--------------+
|
||||
|
||||
Take the ID of the MySQL data store and pass it in with the
|
||||
:command:`datastore-version-list` command:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ trove datastore-version-list DATASTORE_ID
|
||||
+--------------------------------------+-----------+
|
||||
| id | name |
|
||||
+--------------------------------------+-----------+
|
||||
| 36a6306b-efd8-4d83-9b75-8b30dd756381 | mysql-5.5 |
|
||||
+--------------------------------------+-----------+
|
||||
|
||||
Data store classifications
--------------------------

The Database service supports a variety of both relational and
non-relational database engines, but with a varying degree of support
for each :term:`data store`. The Database service project has defined
several classifications that indicate the quality of support for each
data store. Data stores also implement different extensions. An
extension is called a :term:`strategy` and is classified similarly to
data stores.

Valid classifications for a data store and a strategy are:

* Experimental

* Technical preview

* Stable

Each classification builds on the previous one. This means that a data
store that meets the ``technical preview`` requirements must also meet
all the requirements for ``experimental``, and a data store that meets
the ``stable`` requirements must also meet all the requirements for
``technical preview``.

**Requirements**

* Experimental

  A data store is considered to be ``experimental`` if it meets these
  criteria:

  * It implements a basic subset of the Database service API including
    ``create`` and ``delete``.

  * It has guest agent elements that allow guest agent creation.

  * It has a definition of supported operating systems.

  * It meets the other `Documented Technical Requirements
    <https://specs.openstack.org/openstack/trove-specs/specs/kilo/experimental-datastores.html#requirements>`_.

  A strategy is considered ``experimental`` if:

  * It meets the `Documented Technical Requirements
    <https://specs.openstack.org/openstack/trove-specs/specs/kilo/experimental-datastores.html#requirements>`_.

* Technical preview

  A data store is considered to be a ``technical preview`` if it meets
  the requirements of ``experimental`` and, in addition:

  * It implements the APIs required to plant and start the
    capabilities of the data store as defined in the
    `Datastore Compatibility Matrix
    <https://wiki.openstack.org/wiki/Trove/DatastoreCompatibilityMatrix>`_.

    .. note::

       The data store is not required to implement all features, such
       as resize, backup, replication, or clustering, to meet this
       classification.

  * It provides a mechanism for building a guest image that allows you
    to exercise its capabilities.

  * It meets the other `Documented Technical Requirements
    <https://specs.openstack.org/openstack/trove-specs/specs/kilo/experimental-datastores.html#requirements>`_.

  .. important::

     A strategy is not normally considered to be ``technical
     preview``.

* Stable

  A data store or a strategy is considered ``stable`` if:

  * It meets the requirements of ``technical preview``.

  * It meets the other `Documented Technical Requirements
    <https://specs.openstack.org/openstack/trove-specs/specs/kilo/experimental-datastores.html#requirements>`_.

**Initial Classifications**

The following table shows the current classification assignments for
the different data stores.

.. list-table::
   :header-rows: 1
   :widths: 30 30

   * - Classification
     - Data store
   * - Stable
     - MySQL
   * - Technical Preview
     - Cassandra, MongoDB
   * - Experimental
     - All others

Redis data store replication
----------------------------

Replication strategies are available for Redis through several
commands located in the Redis data store manager:

- :command:`create`
- :command:`detach-replica`
- :command:`eject-replica-source`
- :command:`promote-to-replica-source`

Additional arguments for the :command:`create` command include
``--replica_of`` and ``--replica_count``.

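As an illustration, creating a replica set with the trove CLI might
look like the following sketch. The instance name, flavor ID, volume
size, and master instance ID are placeholders, not values from this
guide:

.. code-block:: console

   $ trove create redis-replica-1 FLAVOR_ID --size 5 \
     --datastore redis --replica_of MASTER_INSTANCE_ID --replica_count 2
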
Redis integration and unit tests
--------------------------------

Unit tests and integration tests are also available for Redis.

#. Install trovestack:

   .. code-block:: console

      $ ./trovestack install

   .. note::

      Trovestack is a development script used for integration testing
      and Database service development installations. Do not use
      trovestack in a production environment. For more information,
      see `the Database service developer docs
      <https://docs.openstack.org/developer/trove/dev/install.html#running-trovestack-to-setup-trove>`_.

#. Start Redis:

   .. code-block:: console

      $ ./trovestack kick-start redis

#. Run integration tests:

   .. code-block:: console

      $ ./trovestack int-tests --group=replication

   You can run ``--group=redis_supported`` instead of
   ``--group=replication`` if needed.

Configure a cluster
~~~~~~~~~~~~~~~~~~~

An administrative user can configure various characteristics of a
MongoDB cluster.

**Query routers and config servers**

**Background.** Each cluster includes at least one query router and
one config server. Query routers and config servers count against your
quota. When you delete a cluster, the system deletes the associated
query router(s) and config server(s).

**Configuration.** By default, the system creates one query router and
one config server per cluster. You can change this by editing the
``/etc/trove/trove.conf`` file. These settings are in the ``mongodb``
section of the file:

.. list-table::
   :header-rows: 1
   :widths: 30 30

   * - Setting
     - Valid values
   * - num_config_servers_per_cluster
     - 1 or 3
   * - num_query_routers_per_cluster
     - 1 or 3

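For example, a minimal sketch of a ``mongodb`` section that runs three
config servers and three query routers per cluster (the values shown
are illustrative choices, not defaults):

.. code-block:: ini

   [mongodb]
   num_config_servers_per_cluster = 3
   num_query_routers_per_cluster = 3
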
Authentication middleware with user name and password
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

You can also configure Identity authentication middleware using the
``admin_user`` and ``admin_password`` options.

.. note::

   The ``admin_token`` option is deprecated and no longer used for
   configuring auth_token middleware.

For services that have a separate paste-deploy ``.ini`` file, you can
configure the authentication middleware in the ``[keystone_authtoken]``
section of the main configuration file, such as ``nova.conf``. In
Compute, for example, you can remove the middleware parameters from
``api-paste.ini``, as follows:

.. code-block:: ini

   [filter:authtoken]
   paste.filter_factory = keystonemiddleware.auth_token:filter_factory

Then set the following values in ``nova.conf``:

.. code-block:: ini

   [DEFAULT]
   # ...
   auth_strategy=keystone

   [keystone_authtoken]
   auth_uri = http://controller:5000/v2.0
   identity_uri = http://controller:35357
   admin_user = admin
   admin_password = SuperSekretPassword
   admin_tenant_name = service

.. note::

   The middleware parameters in the paste config take priority. You
   must remove them to use the values in the ``[keystone_authtoken]``
   section.

.. note::

   Comment out any ``auth_host``, ``auth_port``, and ``auth_protocol``
   options because the ``identity_uri`` option replaces them.

This sample paste config filter makes use of the ``admin_user`` and
``admin_password`` options:

.. code-block:: ini

   [filter:authtoken]
   paste.filter_factory = keystonemiddleware.auth_token:filter_factory
   auth_uri = http://controller:5000/v2.0
   identity_uri = http://controller:35357
   auth_token = 012345SECRET99TOKEN012345
   admin_user = admin
   admin_password = keystone123

.. note::

   Using this option requires an admin project/role relationship. The
   admin user is granted access to the admin role on the admin project.

.. note::

   Comment out any ``auth_host``, ``auth_port``, and ``auth_protocol``
   options because the ``identity_uri`` option replaces them.

Caching layer
~~~~~~~~~~~~~

OpenStack Identity supports a caching layer that sits above the
configurable subsystems (for example, token). OpenStack Identity uses
the `oslo.cache <https://docs.openstack.org/developer/oslo.cache/>`__
library, which allows flexible cache back ends. The majority of the
caching configuration options are set in the ``[cache]`` section of
the ``/etc/keystone/keystone.conf`` file. However, each section that
has the capability to be cached usually has a ``caching`` boolean
value that toggles caching for it.

So, to enable only the token back end caching, set the values as
follows:

.. code-block:: ini

   [cache]
   enabled=true

   [catalog]
   caching=false

   [domain_config]
   caching=false

   [federation]
   caching=false

   [resource]
   caching=false

   [revoke]
   caching=false

   [role]
   caching=false

   [token]
   caching=true

.. note::

   Since the Newton release, both the global toggle and subsystem
   caching are enabled by default. As a result, all subsystems that
   support caching do so by default.

Caching for tokens and tokens validation
----------------------------------------

All types of tokens benefit from caching, including Fernet tokens.
Although Fernet tokens do not need to be persisted, they should still
be cached for optimal token validation performance.

The token system has a separate ``cache_time`` configuration option
that can be set to a value above or below the global
``expiration_time`` default, allowing for different caching behavior
from the other systems in OpenStack Identity. This option is set in
the ``[token]`` section of the configuration file.

The token revocation list cache time is handled by the
``revocation_cache_time`` configuration option in the ``[token]``
section. The revocation list is refreshed whenever a token is revoked.
It typically sees significantly more requests than specific token
retrievals or token validation calls.

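A minimal sketch of these options in ``keystone.conf`` follows; the
values shown are illustrative, not recommendations:

.. code-block:: ini

   [token]
   caching = true
   cache_time = 3600
   revocation_cache_time = 1800
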
Here is a list of actions that are affected by the cache time: getting
a new token, revoking tokens, validating tokens, checking v2 tokens,
and checking v3 tokens.

The delete token API calls invalidate the cache for the tokens being
acted upon, as well as invalidating the cache for the revoked token
list and the validate/check token calls.

Token caching is configurable independently of the ``revocation_list``
caching. Expiration checks have been lifted from the token drivers to
the token manager, which ensures that cached tokens still raise a
``TokenNotFound`` exception when expired.

For cache consistency, all token IDs are transformed into the short
token hash at the provider and token driver level. Some methods have
access to the full ID (PKI tokens), and some methods do not. Cache
invalidation is inconsistent without token ID normalization.

Caching for non-token resources
-------------------------------

Various other keystone components have a separate ``cache_time``
configuration option that can be set to a value above or below the
global ``expiration_time`` default, allowing for different caching
behavior from the other systems in the Identity service. This option
can be set in various sections (for example, ``[role]`` and
``[resource]``) of the configuration file.

The create, update, and delete actions for domains, projects, and
roles perform proper invalidation of the cached methods listed above.

For more information about the different back ends (and configuration
options), see:

- `dogpile.cache.memory <https://dogpilecache.readthedocs.io/en/latest/api.html#memory-backend>`__

  .. note::

     The memory back end is not suitable for use in a production
     environment.

- `dogpile.cache.memcached <https://dogpilecache.readthedocs.io/en/latest/api.html#memcached-backends>`__

- `dogpile.cache.redis <https://dogpilecache.readthedocs.io/en/latest/api.html#redis-backends>`__

- `dogpile.cache.dbm <https://dogpilecache.readthedocs.io/en/latest/api.html#file-backends>`__

Configure the Memcached back end example
----------------------------------------

The following example shows how to configure the memcached back end:

.. code-block:: ini

   [cache]
   enabled = true
   backend = dogpile.cache.memcached
   backend_argument = url:127.0.0.1:11211

You need to specify the URL to reach the ``memcached`` instance with
the ``backend_argument`` parameter.

====================
Certificates for PKI
====================

PKI stands for Public Key Infrastructure. Tokens are documents,
cryptographically signed using the X509 standard. In order to work
correctly, token generation requires a public/private key pair. The
public key must be signed in an X509 certificate, and the certificate
used to sign it must be available as a
:term:`Certificate Authority (CA)` certificate. These files can be
generated either using the :command:`keystone-manage` utility, or
externally. The files need to be in the locations specified by the
top-level Identity service configuration file
``/etc/keystone/keystone.conf``, as described in the section above.
Additionally, the private key should only be readable by the system
user that will run the Identity service.

.. warning::

   The certificates can be world readable, but the private key cannot
   be. The private key should only be readable by the account that is
   going to sign tokens. When generating files with the
   :command:`keystone-manage pki_setup` command, your best option is
   to run as the pki user. If you run :command:`keystone-manage` as
   root, you can append the ``--keystone-user`` and
   ``--keystone-group`` parameters to set the user name and group that
   keystone is going to run under.

The values that specify where to read the certificates are under the
``[signing]`` section of the configuration file. The configuration
values are:

- ``certfile``
  Location of the certificate used to verify tokens. Default is
  ``/etc/keystone/ssl/certs/signing_cert.pem``.

- ``keyfile``
  Location of the private key used to sign tokens. Default is
  ``/etc/keystone/ssl/private/signing_key.pem``.

- ``ca_certs``
  Location of the certificate for the authority that issued the above
  certificate. Default is ``/etc/keystone/ssl/certs/ca.pem``.

- ``ca_key``
  Location of the private key used by the CA. Default is
  ``/etc/keystone/ssl/private/cakey.pem``.

- ``key_size``
  Default is ``2048``.

- ``valid_days``
  Default is ``3650``.

- ``cert_subject``
  Certificate subject (auto generated certificate) for token signing.
  Default is ``/C=US/ST=Unset/L=Unset/O=Unset/CN=www.example.com``.

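Put together, a ``[signing]`` section that simply restates the
defaults listed above would look like this:

.. code-block:: ini

   [signing]
   certfile = /etc/keystone/ssl/certs/signing_cert.pem
   keyfile = /etc/keystone/ssl/private/signing_key.pem
   ca_certs = /etc/keystone/ssl/certs/ca.pem
   ca_key = /etc/keystone/ssl/private/cakey.pem
   key_size = 2048
   valid_days = 3650
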
When generating certificates with the
:command:`keystone-manage pki_setup` command, the ``ca_key``,
``key_size``, and ``valid_days`` configuration options are used.

If the :command:`keystone-manage pki_setup` command is not used to
generate certificates, or you are providing your own certificates,
these values do not need to be set.

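The token format itself is selected with the ``provider`` option in
the ``[token]`` section of the configuration file. A minimal sketch,
using the provider class named in the paragraph that follows:

.. code-block:: ini

   [token]
   provider = keystone.token.providers.pki.Provider
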
If ``provider=keystone.token.providers.uuid.Provider`` in the
``[token]`` section of the keystone configuration file, a typical
token looks like ``53f7f6ef0cc344b5be706bcc8b1479e1``. If
``provider=keystone.token.providers.pki.Provider``, a typical token is
a much longer string, such as::

   MIIKtgYJKoZIhvcNAQcCoIIKpzCCCqMCAQExCTAHBgUrDgMCGjCCCY8GCSqGSIb3DQEHAaCCCYAEggl8eyJhY2Nlc3MiOiB7InRva2VuIjogeyJpc3N1ZWRfYXQiOiAiMjAxMy0wNS0z
   MFQxNTo1MjowNi43MzMxOTgiLCAiZXhwaXJlcyI6ICIyMDEzLTA1LTMxVDE1OjUyOjA2WiIsICJpZCI6ICJwbGFjZWhvbGRlciIsICJ0ZW5hbnQiOiB7ImRlc2NyaXB0aW9uIjogbnVs
   bCwgImVuYWJsZWQiOiB0cnVlLCAiaWQiOiAiYzJjNTliNGQzZDI4NGQ4ZmEwOWYxNjljYjE4MDBlMDYiLCAibmFtZSI6ICJkZW1vIn19LCAic2VydmljZUNhdGFsb2ciOiBbeyJlbmRw
   b2ludHMiOiBbeyJhZG1pblVSTCI6ICJodHRwOi8vMTkyLjE2OC4yNy4xMDA6ODc3NC92Mi9jMmM1OWI0ZDNkMjg0ZDhmYTA5ZjE2OWNiMTgwMGUwNiIsICJyZWdpb24iOiAiUmVnaW9u
   T25lIiwgImludGVybmFsVVJMIjogImh0dHA6Ly8xOTIuMTY4LjI3LjEwMDo4Nzc0L3YyL2MyYzU5YjRkM2QyODRkOGZhMDlmMTY5Y2IxODAwZTA2IiwgImlkIjogIjFmYjMzYmM5M2Y5
   ODRhNGNhZTk3MmViNzcwOTgzZTJlIiwgInB1YmxpY1VSTCI6ICJodHRwOi8vMTkyLjE2OC4yNy4xMDA6ODc3NC92Mi9jMmM1OWI0ZDNkMjg0ZDhmYTA5ZjE2OWNiMTgwMGUwNiJ9XSwg
   ImVuZHBvaW50c19saW5rcyI6IFtdLCAidHlwZSI6ICJjb21wdXRlIiwgIm5hbWUiOiAibm92YSJ9LCB7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHA6Ly8xOTIuMTY4LjI3
   LjEwMDozMzMzIiwgInJlZ2lvbiI6ICJSZWdpb25PbmUiLCAiaW50ZXJuYWxVUkwiOiAiaHR0cDovLzE5Mi4xNjguMjcuMTAwOjMzMzMiLCAiaWQiOiAiN2JjMThjYzk1NWFiNDNkYjhm
   MGU2YWNlNDU4NjZmMzAiLCAicHVibGljVVJMIjogImh0dHA6Ly8xOTIuMTY4LjI3LjEwMDozMzMzIn1dLCAiZW5kcG9pbnRzX2xpbmtzIjogW10sICJ0eXBlIjogInMzIiwgIm5hbWUi
   OiAiczMifSwgeyJlbmRwb2ludHMiOiBbeyJhZG1pblVSTCI6ICJodHRwOi8vMTkyLjE2OC4yNy4xMDA6OTI5MiIsICJyZWdpb24iOiAiUmVnaW9uT25lIiwgImludGVybmFsVVJMIjog
   Imh0dHA6Ly8xOTIuMTY4LjI3LjEwMDo5MjkyIiwgImlkIjogIjczODQzNTJhNTQ0MjQ1NzVhM2NkOTVkN2E0YzNjZGY1IiwgInB1YmxpY1VSTCI6ICJodHRwOi8vMTkyLjE2OC4yNy4x
   MDA6OTI5MiJ9XSwgImVuZHBvaW50c19saW5rcyI6IFtdLCAidHlwZSI6ICJpbWFnZSIsICJuYW1lIjogImdsYW5jZSJ9LCB7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHA6
   Ly8xOTIuMTY4LjI3LjEwMDo4Nzc2L3YxL2MyYzU5YjRkM2QyODRkOGZhMDlmMTY5Y2IxODAwZTA2IiwgInJlZ2lvbiI6ICJSZWdpb25PbmUiLCAiaW50ZXJuYWxVUkwiOiAiaHR0cDov
   LzE5Mi4xNjguMjcuMTAwOjg3NzYvdjEvYzJjNTliNGQzZDI4NGQ4ZmEwOWYxNjljYjE4MDBlMDYiLCAiaWQiOiAiMzQ3ZWQ2ZThjMjkxNGU1MGFlMmJiNjA2YWQxNDdjNTQiLCAicHVi
   bGljVVJMIjogImh0dHA6Ly8xOTIuMTY4LjI3LjEwMDo4Nzc2L3YxL2MyYzU5YjRkM2QyODRkOGZhMDlmMTY5Y2IxODAwZTA2In1dLCAiZW5kcG9pbnRzX2xpbmtzIjogW10sICJ0eXBl
   IjogInZvbHVtZSIsICJuYW1lIjogImNpbmRlciJ9LCB7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHA6Ly8xOTIuMTY4LjI3LjEwMDo4NzczL3NlcnZpY2VzL0FkbWluIiwg
   InJlZ2lvbiI6ICJSZWdpb25PbmUiLCAiaW50ZXJuYWxVUkwiOiAiaHR0cDovLzE5Mi4xNjguMjcuMTAwOjg3NzMvc2VydmljZXMvQ2xvdWQiLCAiaWQiOiAiMmIwZGMyYjNlY2U4NGJj
   YWE1NDAzMDMzNzI5YzY3MjIiLCAicHVibGljVVJMIjogImh0dHA6Ly8xOTIuMTY4LjI3LjEwMDo4NzczL3NlcnZpY2VzL0Nsb3VkIn1dLCAiZW5kcG9pbnRzX2xpbmtzIjogW10sICJ0
   eXBlIjogImVjMiIsICJuYW1lIjogImVjMiJ9LCB7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHA6Ly8xOTIuMTY4LjI3LjEwMDozNTM1Ny92Mi4wIiwgInJlZ2lvbiI6ICJS
   ZWdpb25PbmUiLCAiaW50ZXJuYWxVUkwiOiAiaHR0cDovLzE5Mi4xNjguMjcuMTAwOjUwMDAvdjIuMCIsICJpZCI6ICJiNTY2Y2JlZjA2NjQ0ZmY2OWMyOTMxNzY2Yjc5MTIyOSIsICJw
   dWJsaWNVUkwiOiAiaHR0cDovLzE5Mi4xNjguMjcuMTAwOjUwMDAvdjIuMCJ9XSwgImVuZHBvaW50c19saW5rcyI6IFtdLCAidHlwZSI6ICJpZGVudGl0eSIsICJuYW1lIjogImtleXN0
   b25lIn1dLCAidXNlciI6IHsidXNlcm5hbWUiOiAiZGVtbyIsICJyb2xlc19saW5rcyI6IFtdLCAiaWQiOiAiZTVhMTM3NGE4YTRmNDI4NWIzYWQ3MzQ1MWU2MDY4YjEiLCAicm9sZXMi
   OiBbeyJuYW1lIjogImFub3RoZXJyb2xlIn0sIHsibmFtZSI6ICJNZW1iZXIifV0sICJuYW1lIjogImRlbW8ifSwgIm1ldGFkYXRhIjogeyJpc19hZG1pbiI6IDAsICJyb2xlcyI6IFsi
   YWRiODM3NDVkYzQzNGJhMzk5ODllNjBjOTIzYWZhMjgiLCAiMzM2ZTFiNjE1N2Y3NGFmZGJhNWUwYTYwMWUwNjM5MmYiXX19fTGB-zCB-AIBATBcMFcxCzAJBgNVBAYTAlVTMQ4wDAYD
   VQQIEwVVbnNldDEOMAwGA1UEBxMFVW5zZXQxDjAMBgNVBAoTBVVuc2V0MRgwFgYDVQQDEw93d3cuZXhhbXBsZS5jb20CAQEwBwYFKw4DAhowDQYJKoZIhvcNAQEBBQAEgYCAHLpsEs2R
   nouriuiCgFayIqCssK3SVdhOMINiuJtqv0sE-wBDFiEj-Prcudqlz-n+6q7VgV4mwMPszz39-rwp+P5l4AjrJasUm7FrO-4l02tPLaaZXU1gBQ1jUG5e5aL5jPDP08HbCWuX6wr-QQQB
   SrWY8lF3HrTcJT23sZIleg==

Sign certificate issued by external CA
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

You can use a signing certificate issued by an external CA instead of
one generated by :command:`keystone-manage`. However, a certificate
issued by an external CA must satisfy the following conditions:

- All certificate and key files must be in Privacy Enhanced Mail (PEM)
  format

- Private key files must not be protected by a password

When using a signing certificate issued by an external CA, you do not
need to specify ``key_size``, ``valid_days``, and ``ca_password``, as
they will be ignored.

The basic workflow for using a signing certificate issued by an
external CA involves:

#. Request a signing certificate from the external CA

#. Convert the certificate and private key to PEM, if needed

#. Install the external signing certificate

Request a signing certificate from an external CA
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

One way to request a signing certificate from an external CA is to
first generate a PKCS #10 certificate signing request (CSR) using the
OpenSSL CLI.

Create a certificate request configuration file. For example, create
the ``cert_req.conf`` file, as follows:

.. code-block:: ini

   [ req ]
   default_bits = 4096
   default_keyfile = keystonekey.pem
   default_md = sha256

   prompt = no
   distinguished_name = distinguished_name

   [ distinguished_name ]
   countryName = US
   stateOrProvinceName = CA
   localityName = Sunnyvale
   organizationName = OpenStack
   organizationalUnitName = Keystone
   commonName = Keystone Signing
   emailAddress = keystone@openstack.org

Then generate the CSR with the OpenSSL CLI. **Do not encrypt the
generated private key. You must use the -nodes option.**

For example:

.. code-block:: console

   $ openssl req -newkey rsa:1024 -keyout signing_key.pem -keyform PEM \
     -out signing_cert_req.pem -outform PEM -config cert_req.conf -nodes

If everything is successful, you should end up with
``signing_cert_req.pem`` and ``signing_key.pem``. Send
``signing_cert_req.pem`` to your CA to request a token signing
certificate, and ask for the certificate in PEM format. Also, make
sure your trusted CA certificate chain is in PEM format.

Install an external signing certificate
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Assuming you already have the following:

- ``signing_cert.pem``

  (Keystone token) signing certificate in PEM format

- ``signing_key.pem``

  Corresponding (non-encrypted) private key in PEM format

- ``cacert.pem``

  Trusted CA certificate chain in PEM format

Copy these files to your certificate directory. For example:

.. code-block:: console

   # mkdir -p /etc/keystone/ssl/certs
   # cp signing_cert.pem /etc/keystone/ssl/certs/
   # cp signing_key.pem /etc/keystone/ssl/certs/
   # cp cacert.pem /etc/keystone/ssl/certs/
   # chmod -R 700 /etc/keystone/ssl/certs

.. note::

   Make sure the certificate directory is only accessible by root.

.. note::

   The procedure of copying the key and cert files may be improved if
   done after first running :command:`keystone-manage pki_setup`,
   since this command also creates other needed files, such as the
   ``index.txt`` and ``serial`` files.

   Also, when copying the necessary files to a different server for
   replicating the functionality, the entire directory of files is
   needed, not just the key and cert files.

If your certificate directory path is different from the default
``/etc/keystone/ssl/certs``, make sure it is reflected in the
``[signing]`` section of the configuration file.

Switching out expired signing certificates
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The following procedure details how to switch out expired signing
certificates with no cloud outages.

#. Generate a new signing key.

#. Generate a new certificate request.

#. Sign the new certificate with the existing CA to generate a new
   ``signing_cert``.

#. Append the new ``signing_cert`` to the old ``signing_cert``. Ensure
   the old certificate is in the file first. (A sketch of this step
   follows the procedure.)

#. Remove all signing certificates from all your hosts to force
   OpenStack Compute to download the new ``signing_cert``.

#. Replace the old signing key with the new signing key. Move the new
   signing certificate above the old certificate in the
   ``signing_cert`` file.

#. After the old certificate expires, you can safely remove the old
   signing certificate from the file.

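As an illustration of step 4, appending the new certificate after the
old one can be as simple as the following; ``new_signing_cert.pem`` is
a placeholder name for your newly signed certificate:

.. code-block:: console

   # cat new_signing_cert.pem >> /etc/keystone/ssl/certs/signing_cert.pem
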
=================
Identity concepts
=================

Authentication
   The process of confirming the identity of a user. To confirm an
   incoming request, OpenStack Identity validates a set of credentials
   that users supply. Initially, these credentials are a user name and
   password, or a user name and API key. When OpenStack Identity
   validates user credentials, it issues an authentication token.
   Users provide the token in subsequent requests.

Credentials
   Data that confirms the identity of the user. For example, user name
   and password, user name and API key, or an authentication token
   that the Identity service provides.

Domain
   An Identity service API v3 entity. Domains are a collection of
   projects and users that define administrative boundaries for
   managing Identity entities. Domains can represent an individual,
   company, or operator-owned space. They expose administrative
   activities directly to system users. Users can be granted the
   administrator role for a domain. A domain administrator can create
   projects, users, and groups in a domain and assign roles to users
   and groups in a domain.

Endpoint
   A network-accessible address, usually a URL, through which you can
   access a service. If you are using an extension for templates, you
   can create an endpoint template that represents the templates of
   all consumable services that are available across the regions.

Group
   An Identity service API v3 entity. Groups are a collection of users
   owned by a domain. A group role, granted to a domain or project,
   applies to all users in the group. Adding or removing users to or
   from a group grants or revokes their role and authentication to the
   associated domain or project.

OpenStackClient
   A command-line interface for several OpenStack services including
   the Identity API. For example, a user can run the
   :command:`openstack service create` and
   :command:`openstack endpoint create` commands to register services
   in their OpenStack installation.

Project
   A container that groups or isolates resources or identity objects.
   Depending on the service operator, a project might map to a
   customer, account, organization, or tenant.

Region
   An Identity service API v3 entity. Represents a general division in
   an OpenStack deployment. You can associate zero or more sub-regions
   with a region to make a tree-like structured hierarchy. Although a
   region does not have a geographical connotation, a deployment can
   use a geographical name for a region, such as ``us-east``.

Role
   A personality with a defined set of user rights and privileges to
   perform a specific set of operations. The Identity service issues a
   token to a user that includes a list of roles. When a user calls a
   service, that service interprets the user role set, and determines
   to which operations or resources each role grants access.

Service
   An OpenStack service, such as Compute (nova), Object Storage
   (swift), or Image service (glance), that provides one or more
   endpoints through which users can access resources and perform
   operations.

Token
   An alpha-numeric text string that enables access to OpenStack APIs
   and resources. A token may be revoked at any time and is valid for
   a finite duration. While OpenStack Identity supports token-based
   authentication in this release, it intends to support additional
   protocols in the future. OpenStack Identity is an integration
   service that does not aspire to be a full-fledged identity store
   and management solution.

User
   A digital representation of a person, system, or service that uses
   OpenStack cloud services. The Identity service validates that
   incoming requests are made by the user who claims to be making the
   call. Users have a login and can access resources by using assigned
   tokens. Users can be directly assigned to a particular project and
   behave as if they are contained in that project.

User management
~~~~~~~~~~~~~~~

Identity user management examples:

* Create a user named ``alice``:

  .. code-block:: console

     $ openstack user create --password-prompt --email alice@example.com alice

* Create a project named ``acme``:

  .. code-block:: console

     $ openstack project create acme --domain default

* Create a domain named ``emea``:

  .. code-block:: console

     $ openstack --os-identity-api-version=3 domain create emea

* Create a role named ``compute-user``:

  .. code-block:: console

     $ openstack role create compute-user

  .. note::

     Individual services assign meaning to roles, typically through
     limiting or granting access to users with the role to the
     operations that the service supports. Role access is typically
     configured in the service's ``policy.json`` file. For example, to
     limit Compute access to the ``compute-user`` role, edit the
     Compute service's ``policy.json`` file to require this role for
     Compute operations.

The Identity service assigns a project and a role to a user. You might
assign the ``compute-user`` role to the ``alice`` user in the ``acme``
project:

.. code-block:: console

   $ openstack role add --project acme --user alice compute-user

A user can have different roles in different projects. For example,
Alice might also have the ``admin`` role in the ``Cyberdyne`` project.
A user can also have multiple roles in the same project.

The ``/etc/[SERVICE_CODENAME]/policy.json`` file controls the tasks
that users can perform for a given service. For example, the
``/etc/nova/policy.json`` file specifies the access policy for the
Compute service, the ``/etc/glance/policy.json`` file specifies the
access policy for the Image service, and the
``/etc/keystone/policy.json`` file specifies the access policy for the
Identity service.

The default ``policy.json`` files in the Compute, Identity, and Image
services recognize only the ``admin`` role. Any user with any role in
a project can access all operations that do not require the ``admin``
role.

To restrict users from performing operations in, for example, the
Compute service, you must create a role in the Identity service and
then modify the ``/etc/nova/policy.json`` file so that this role is
required for Compute operations.

For example, the following line in the ``/etc/cinder/policy.json``
file does not restrict which users can create volumes:

.. code-block:: none

   "volume:create": "",

If a user has any role in a project, they can create volumes in that
project.

To restrict the creation of volumes to users who have the
``compute-user`` role in a particular project, you add
``"role:compute-user"``:

.. code-block:: none

   "volume:create": "role:compute-user",

To restrict all Compute service requests to require this role, the
resulting file looks like:

.. code-block:: json

   {
       "admin_or_owner": "role:admin or project_id:%(project_id)s",
       "default": "rule:admin_or_owner",
       "compute:create": "role:compute-user",
       "compute:create:attach_network": "role:compute-user",
       "compute:create:attach_volume": "role:compute-user",
       "compute:get_all": "role:compute-user",
       "compute:unlock_override": "rule:admin_api",
       "admin_api": "role:admin",
       "compute_extension:accounts": "rule:admin_api",
       "compute_extension:admin_actions": "rule:admin_api",
       "compute_extension:admin_actions:pause": "rule:admin_or_owner",
       "compute_extension:admin_actions:unpause": "rule:admin_or_owner",
       "compute_extension:admin_actions:suspend": "rule:admin_or_owner",
       "compute_extension:admin_actions:resume": "rule:admin_or_owner",
       "compute_extension:admin_actions:lock": "rule:admin_or_owner",
       "compute_extension:admin_actions:unlock": "rule:admin_or_owner",
       "compute_extension:admin_actions:resetNetwork": "rule:admin_api",
       "compute_extension:admin_actions:injectNetworkInfo": "rule:admin_api",
       "compute_extension:admin_actions:createBackup": "rule:admin_or_owner",
       "compute_extension:admin_actions:migrateLive": "rule:admin_api",
       "compute_extension:admin_actions:migrate": "rule:admin_api",
       "compute_extension:aggregates": "rule:admin_api",
       "compute_extension:certificates": "role:compute-user",
       "compute_extension:cloudpipe": "rule:admin_api",
       "compute_extension:console_output": "role:compute-user",
       "compute_extension:consoles": "role:compute-user",
       "compute_extension:createserverext": "role:compute-user",
       "compute_extension:deferred_delete": "role:compute-user",
       "compute_extension:disk_config": "role:compute-user",
       "compute_extension:evacuate": "rule:admin_api",
       "compute_extension:extended_server_attributes": "rule:admin_api",
       "compute_extension:extended_status": "role:compute-user",
       "compute_extension:flavorextradata": "role:compute-user",
       "compute_extension:flavorextraspecs": "role:compute-user",
       "compute_extension:flavormanage": "rule:admin_api",
       "compute_extension:floating_ip_dns": "role:compute-user",
       "compute_extension:floating_ip_pools": "role:compute-user",
       "compute_extension:floating_ips": "role:compute-user",
       "compute_extension:hosts": "rule:admin_api",
       "compute_extension:keypairs": "role:compute-user",
       "compute_extension:multinic": "role:compute-user",
       "compute_extension:networks": "rule:admin_api",
       "compute_extension:quotas": "role:compute-user",
       "compute_extension:rescue": "role:compute-user",
       "compute_extension:security_groups": "role:compute-user",
       "compute_extension:server_action_list": "rule:admin_api",
       "compute_extension:server_diagnostics": "rule:admin_api",
       "compute_extension:simple_tenant_usage:show": "rule:admin_or_owner",
       "compute_extension:simple_tenant_usage:list": "rule:admin_api",
       "compute_extension:users": "rule:admin_api",
       "compute_extension:virtual_interfaces": "role:compute-user",
       "compute_extension:virtual_storage_arrays": "role:compute-user",
       "compute_extension:volumes": "role:compute-user",
       "compute_extension:volume_attachments:index": "role:compute-user",
       "compute_extension:volume_attachments:show": "role:compute-user",
       "compute_extension:volume_attachments:create": "role:compute-user",
       "compute_extension:volume_attachments:delete": "role:compute-user",
       "compute_extension:volumetypes": "role:compute-user",
       "volume:create": "role:compute-user",
       "volume:get_all": "role:compute-user",
       "volume:get_volume_metadata": "role:compute-user",
       "volume:get_snapshot": "role:compute-user",
       "volume:get_all_snapshots": "role:compute-user",
       "network:get_all_networks": "role:compute-user",
       "network:get_network": "role:compute-user",
       "network:delete_network": "role:compute-user",
       "network:disassociate_network": "role:compute-user",
       "network:get_vifs_by_instance": "role:compute-user",
       "network:allocate_for_instance": "role:compute-user",
       "network:deallocate_for_instance": "role:compute-user",
       "network:validate_networks": "role:compute-user",
       "network:get_instance_uuids_by_ip_filter": "role:compute-user",
       "network:get_floating_ip": "role:compute-user",
       "network:get_floating_ip_pools": "role:compute-user",
       "network:get_floating_ip_by_address": "role:compute-user",
       "network:get_floating_ips_by_project": "role:compute-user",
       "network:get_floating_ips_by_fixed_address": "role:compute-user",
       "network:allocate_floating_ip": "role:compute-user",
       "network:deallocate_floating_ip": "role:compute-user",
       "network:associate_floating_ip": "role:compute-user",
       "network:disassociate_floating_ip": "role:compute-user",
       "network:get_fixed_ip": "role:compute-user",
       "network:add_fixed_ip_to_instance": "role:compute-user",
       "network:remove_fixed_ip_from_instance": "role:compute-user",
       "network:add_network_to_project": "role:compute-user",
       "network:get_instance_nw_info": "role:compute-user",
       "network:get_dns_domains": "role:compute-user",
       "network:add_dns_entry": "role:compute-user",
       "network:modify_dns_entry": "role:compute-user",
       "network:delete_dns_entry": "role:compute-user",
       "network:get_dns_entries_by_address": "role:compute-user",
       "network:get_dns_entries_by_name": "role:compute-user",
       "network:create_private_dns_domain": "role:compute-user",
       "network:create_public_dns_domain": "role:compute-user",
       "network:delete_dns_domain": "role:compute-user"
   }

Service management
~~~~~~~~~~~~~~~~~~

The Identity service provides identity, token, catalog, and policy
services. It consists of:

* keystone Web Server Gateway Interface (WSGI) service

  Can be run in a WSGI-capable web server such as Apache httpd to
  provide the Identity service. The service and administrative APIs
  are run as separate instances of the WSGI service.

* Identity service functions

  Each has a pluggable back end that allows different ways to use the
  particular service. Most support standard back ends like LDAP or
  SQL.

* keystone-all

  Starts both the service and administrative APIs in a single process.
  Using federation with keystone-all is not supported. keystone-all is
  deprecated in favor of the WSGI service and will be removed in the
  Newton release.

The Identity service also maintains a user that corresponds to each
service, such as a user named ``nova`` for the Compute service, and a
special service project called ``service``.

For information about how to create services and endpoints, see the
`OpenStack Administrator Guide
<https://docs.openstack.org/admin-guide/cli-manage-services.html>`__.

Groups
~~~~~~

A group is a collection of users in a domain. Administrators can
create groups and add users to them. A role can then be assigned to
the group, rather than to individual users. Groups were introduced
with the Identity API v3.

Identity API V3 provides the following group-related operations:

* Create a group

* Delete a group

* Update a group (change its name or description)

* Add a user to a group

* Remove a user from a group

* List group members

* List groups for a user

* Assign a role on a project to a group

* Assign a role on a domain to a group

* Query role assignments to groups

.. note::

   The Identity service server might not allow all operations. For
   example, if you use the Identity server with the LDAP Identity back
   end and group updates are disabled, a request to create, delete, or
   update a group fails.

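Several of these operations are available through OpenStackClient. A
minimal sketch follows; the group, user, project, and role names are
illustrative, and depending on your client configuration you may need
to pass ``--os-identity-api-version=3`` explicitly:

.. code-block:: console

   $ openstack group create developers
   $ openstack group add user developers alice
   $ openstack role add --group developers --project acme compute-user
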
Here are a couple of examples:

* Group A is granted Role A on Project A. If User A is a member of
  Group A, when User A gets a token scoped to Project A, the token
  also includes Role A.

* Group B is granted Role B on Domain B. If User B is a member of
  Group B, when User B gets a token scoped to Domain B, the token also
  includes Role B.

=============================
Domain-specific configuration
=============================

The Identity service supports domain-specific Identity drivers. The
drivers allow a domain to have its own LDAP or SQL back end. By
default, domain-specific drivers are disabled.

Domain-specific Identity configuration options can be stored in
domain-specific configuration files, or in the Identity SQL database
using REST API calls.

.. note::

   Storing and managing configuration options in an SQL database was
   experimental in Kilo, and added to the Identity service in the
   Liberty release.

Enable drivers for domain-specific configuration files
-------------------------------------------------------

To enable domain-specific drivers, set these options in the
``/etc/keystone/keystone.conf`` file:

.. code-block:: ini

   [identity]
   domain_specific_drivers_enabled = True
   domain_config_dir = /etc/keystone/domains

When you enable domain-specific drivers, Identity looks in the
``domain_config_dir`` directory for configuration files that are named
``keystone.DOMAIN_NAME.conf``. A domain without a domain-specific
configuration file uses options in the primary configuration file.

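For example, a domain named ``emea`` could be given its own LDAP back
end with a file named ``/etc/keystone/domains/keystone.emea.conf``.
This is only a sketch; the LDAP URL and suffix below are illustrative
placeholders:

.. code-block:: ini

   [identity]
   driver = ldap

   [ldap]
   url = ldap://ldap.emea.example.com
   suffix = dc=emea,dc=example,dc=com
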
Enable drivers for storing configuration options in SQL database
-----------------------------------------------------------------

To enable domain-specific drivers backed by the SQL database, set
these options in the ``/etc/keystone/keystone.conf`` file:

.. code-block:: ini

   [identity]
   domain_specific_drivers_enabled = True
   domain_configurations_from_database = True

Any domain-specific configuration options specified through the
Identity v3 API override domain-specific configuration files in the
``/etc/keystone/domains`` directory.

Migrate domain-specific configuration files to the SQL database
----------------------------------------------------------------

You can use the ``keystone-manage`` command to migrate configuration
options in domain-specific configuration files to the SQL database:

.. code-block:: console

   # keystone-manage domain_config_upload --all

To upload options from a specific domain-configuration file, specify
the domain name:

.. code-block:: console

   # keystone-manage domain_config_upload --domain-name DOMAIN_NAME

=====================================
External authentication with Identity
=====================================

When Identity runs in ``apache-httpd``, you can use external
authentication methods that differ from the authentication provided by
the identity store back end. For example, you can use an SQL identity
back end together with X.509 authentication and Kerberos, instead of
using the user name and password combination.

Use HTTPD authentication
~~~~~~~~~~~~~~~~~~~~~~~~

Web servers, like Apache HTTP, support many methods of authentication.
Identity can allow the web server to perform the authentication. The
web server then passes the authenticated user to Identity by using the
``REMOTE_USER`` environment variable. This user must already exist in
the Identity back end to get a token from the controller. To use this
method, Identity should run on ``apache-httpd``.

Use X.509
~~~~~~~~~

The following Apache configuration snippet authenticates the user
based on a valid X.509 certificate from a known CA:

.. code-block:: none

   <VirtualHost _default_:5000>
       SSLEngine on
       SSLCertificateFile    /etc/ssl/certs/ssl.cert
       SSLCertificateKeyFile /etc/ssl/private/ssl.key

       SSLCACertificatePath /etc/ssl/allowed_cas
       SSLCARevocationPath  /etc/ssl/allowed_cas
       SSLUserName          SSL_CLIENT_S_DN_CN
       SSLVerifyClient      require
       SSLVerifyDepth       10

       (...)
   </VirtualHost>

===================================
Fernet - Frequently Asked Questions
===================================

The following questions have been asked periodically since the initial
release of the fernet token format in Kilo.

What are the different types of keys?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

A key repository is required by keystone in order to create fernet
tokens. These keys are used to encrypt and decrypt the information
that makes up the payload of the token. Each key in the repository can
have one of three states. The state of a key determines how keystone
uses it with fernet tokens. The different types are as follows:

Primary key:
   There is only ever one primary key in a key repository. The primary
   key is allowed to encrypt and decrypt tokens. This key is always
   named as the highest index in the repository.
Secondary key:
   A secondary key was at one point a primary key, but has been
   demoted in place of another primary key. It is only allowed to
   decrypt tokens. Since it was the primary key at some point in time,
   its existence in the key repository is justified. Keystone needs to
   be able to decrypt tokens that were created with old primary keys.
Staged key:
   The staged key is a special key that shares some similarities with
   secondary keys. There can only ever be one staged key in a
   repository, and it must exist. Just like secondary keys, staged
   keys have the ability to decrypt tokens. Unlike secondary keys,
   staged keys have never been a primary key. In fact, they are
   opposites, since the staged key will always be the next primary
   key. This clarifies the name: it is the next key staged to be the
   primary key. This key is always named ``0`` in the key repository.

So, how does a staged key help me and why do I care about it?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The fernet keys have a natural lifecycle. Each key starts as a staged
key, is promoted to be the primary key, and is then demoted to be a
secondary key. New tokens can only be encrypted with a primary key.
Secondary and staged keys are never used to encrypt tokens. The staged
key is a special key given the order of events and the attributes of
each type of key. The staged key is the only key in the repository
that has not had a chance to encrypt any tokens yet, but it is still
allowed to decrypt tokens. As an operator, this gives you the chance
to perform a key rotation on one keystone node, and distribute the new
key set over a span of time. This does not require the distribution to
take place in an ultra short period of time. Tokens encrypted with a
primary key can be decrypted, and validated, on other nodes where that
key is still staged.

Where do I put my key repository?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The key repository is specified using the ``key_repository`` option in
the keystone configuration file. The keystone process should be able
to read and write to this location, but it should be kept secret
otherwise. Currently, keystone only supports file-backed key
repositories.

.. code-block:: ini

   [fernet_tokens]
   key_repository = /etc/keystone/fernet-keys/

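The repository is typically initialized once with
:command:`keystone-manage fernet_setup`, for example as follows. The
user and group names are illustrative and depend on how keystone is
deployed:

.. code-block:: console

   # keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
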
What is the recommended way to rotate and distribute keys?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The :command:`keystone-manage` command line utility includes a key
rotation mechanism. This mechanism will initialize and rotate keys but
does not make an effort to distribute keys across keystone nodes. The
distribution of keys across a keystone deployment is best handled
through configuration management tooling. Use
:command:`keystone-manage fernet_rotate` to rotate the key repository.

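For example, a rotation might look like this; as with setup, the user
and group flags are illustrative:

.. code-block:: console

   # keystone-manage fernet_rotate --keystone-user keystone --keystone-group keystone
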
Do fernet tokens still expire?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Yes, fernet tokens can expire just like any other keystone token
format.

Why should I choose fernet tokens over UUID tokens?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Even though fernet tokens operate very similarly to UUID tokens, they
do not require persistence. The keystone token database no longer
suffers bloat as a side effect of authentication. Pruning expired
tokens from the token database is no longer required when using fernet
tokens. Because fernet tokens do not require persistence, they do not
have to be replicated. As long as each keystone node shares the same
key repository, fernet tokens can be created and validated instantly
across nodes.

Why should I choose fernet tokens over PKI or PKIZ tokens?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The arguments for using fernet over PKI and PKIZ remain the same as
for UUID, in addition to the fact that fernet tokens are much smaller
than PKI and PKIZ tokens. PKI and PKIZ tokens still require persistent
storage and can sometimes cause issues due to their size. This issue
is mitigated when switching to fernet because fernet tokens are kept
under a 250 byte limit. PKI and PKIZ tokens typically exceed 1600
bytes in length. The length of a PKI or PKIZ token is dependent on the
size of the deployment: bigger service catalogs result in longer
tokens. This pattern does not exist with fernet tokens because the
contents of the encrypted payload are kept to a minimum.

Should I rotate and distribute keys from the same keystone node every rotation?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

No, but the relationship between rotation and distribution should be
lock-step. Once you rotate keys on one keystone node, the key
repository from that node should be distributed to the rest of the
cluster. Once you confirm that each node has the same key repository
state, you can rotate and distribute from any other node in the
cluster.

If the rotation and distribution are not lock-step, a single keystone
node in the deployment will create tokens with a primary key that no
other node has as a staged key. This causes tokens generated from one
keystone node to fail validation on other keystone nodes.

How do I add new keystone nodes to a deployment?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The keys used to create fernet tokens should be treated like super
secret configuration files, similar to an SSL secret key. Before a
node is allowed to join an existing cluster, issuing and validating
tokens, it should have the same key repository as the rest of the
nodes in the cluster.

How should I approach key distribution?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Remember that key distribution is only required in multi-node keystone
deployments. If you only have one keystone node serving requests in
your deployment, key distribution is unnecessary.

Key distribution is a problem best approached from the deployment's
current configuration management system. Since not all deployments use
the same configuration management systems, it makes sense to explore
options around what is already available for managing keys, while
keeping the secrecy of the keys in mind. Many configuration management
tools can leverage something like ``rsync`` to manage key
distribution.

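As an illustration only, a distribution step could be as simple as the
following; the remote host name is a placeholder, and real tooling
should also handle file ownership, permissions, and service reloads:

.. code-block:: console

   # rsync -a --delete /etc/keystone/fernet-keys/ keystone2:/etc/keystone/fernet-keys/
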
|
||||
|
||||
Key rotation is a single operation that promotes the current staged key to
primary, creates a new staged key, and prunes old secondary keys. It is easiest
to do this on a single node and verify the rotation took place properly before
distributing the key repository to the rest of the cluster. The concept behind
the staged key breaks the expectation that key rotation and key distribution
have to be done in a single step. With the staged key, we have time to inspect
the new key repository before syncing state with the rest of the cluster. Key
distribution should be an operation that can run in succession until it
succeeds. The following might help illustrate the isolation between key
rotation and key distribution.

#. Ensure all keystone nodes in the deployment have the same key repository.
#. Pick a keystone node in the cluster to rotate from.
#. Rotate keys (a sketch of the rotation command appears after this list).

   #. Was it successful?

      #. If no, investigate issues with the particular keystone node you
         rotated keys on. Fernet keys are small and the operation for
         rotation is trivial. There should not be much room for error in key
         rotation. It is possible that the user does not have the ability to
         write new keys to the key repository. Log output from
         ``keystone-manage fernet_rotate`` should give more information into
         specific failures.
      #. If yes, you should see a new staged key. The old staged key should
         be the new primary. Depending on the ``max_active_keys`` limit you
         might have secondary keys that were pruned. At this point, the node
         that you rotated on will be creating fernet tokens with a primary
         key that all other nodes should have as the staged key. This is why
         we checked the state of all key repositories in step 1. All other
         nodes in the cluster should be able to decrypt tokens created with
         the new primary key. At this point, we are ready to distribute the
         new key set.

#. Distribute the new key repository.

   #. Was it successful?

      #. If yes, you should be able to confirm that all nodes in the cluster
         have the same key repository that was introduced in step 3. All
         nodes in the cluster will be creating tokens with the primary key
         that was promoted in step 3. No further action is required until the
         next scheduled key rotation.
      #. If no, try distributing again. Remember that we already rotated the
         repository and performing another rotation at this point will
         result in tokens that cannot be validated across certain hosts.
         Specifically, the hosts that did not get the latest key set. You
         should be able to distribute keys until it is successful. If certain
         nodes have issues syncing, it could be permission or network issues
         and those should be resolved before subsequent rotations.

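The rotation in step 3 is a single ``keystone-manage`` command on the chosen
node. As a sketch, assuming the default service user and group names:

.. code-block:: console

   # keystone-manage fernet_rotate --keystone-user keystone --keystone-group keystone
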
How long should I keep my keys around?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The fernet tokens that keystone creates are only as secure as the keys
creating them. With staged keys the penalty of key rotation is low, allowing
you to err on the side of security and rotate weekly, daily, or even hourly.
Ultimately, this should be less time than it takes an attacker to break an
``AES256`` key and a ``SHA256 HMAC``.

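If you automate rotation, a hypothetical ``/etc/cron.d`` entry for hourly
rotation on the designated rotation node might look like the following; the
schedule, file name, and service account are assumptions to adapt to your
deployment:

.. code-block:: none

   # hypothetical /etc/cron.d/keystone-rotate entry
   0 * * * * root /usr/bin/keystone-manage fernet_rotate --keystone-user keystone --keystone-group keystone
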
Is a fernet token still a bearer token?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Yes, and they follow exactly the same validation path as UUID tokens, with the
exception of being written to, and read from, a back end. If someone
compromises your fernet token, they have the power to do all the operations you
are allowed to do.

What if I need to revoke all my tokens?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To invalidate every token issued from keystone and start fresh, remove the
current key repository, create a new key set, and redistribute it to all nodes
in the cluster. This will render every token issued from keystone as invalid
regardless of whether the token has actually expired. When a client goes to
re-authenticate, the new token will have been created with a new fernet key.

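As a sketch, assuming the default repository location and service account
names, this could look like:

.. code-block:: console

   # rm -rf /etc/keystone/fernet-keys
   # keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone

The new repository would then be distributed to the rest of the cluster, as
described above.
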
What can an attacker do if they compromise a fernet key in my deployment?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If any key used in the key repository is compromised, an attacker will be able
to build their own tokens. If they know the ID of an administrator on a
project, they could generate administrator tokens for the project. They will be
able to generate their own tokens until the compromised key has been removed
from the repository.

I rotated keys and now tokens are invalidating early, what did I do?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Using fernet tokens requires some awareness around token expiration and the key
lifecycle. You do not want to rotate so often that secondary keys are removed
that might still be needed to decrypt unexpired tokens. If this happens, you
will not be able to decrypt the token because the key that was used to encrypt
it is now gone. Only remove keys that you know are not being used to encrypt or
decrypt tokens.

For example, your token is valid for 24 hours and we want to rotate keys every
six hours. We will need to make sure tokens that were created at 08:00 AM on
Monday are still valid at 07:00 AM on Tuesday, assuming they were not
prematurely revoked. To accomplish this, we will want to make sure we set
``max_active_keys=6`` in our keystone configuration file. This will allow us to
hold all keys that might still be required to validate a previous token, but
keeps the key repository limited to only the keys that are needed.

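In ``keystone.conf`` this option lives under the ``[fernet_tokens]`` section:

.. code-block:: ini

   [fernet_tokens]
   max_active_keys = 6
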
The number of ``max_active_keys`` for a deployment can be determined by
dividing the token lifetime, in hours, by the frequency of rotation in hours
and adding two. Better illustrated as::

    token_expiration = 24
    rotation_frequency = 6
    max_active_keys = (token_expiration / rotation_frequency) + 2

The reason for adding two additional keys to the count is to include the staged
key and a buffer key. This can be shown based on the previous example. We
initially set up the key repository at 6:00 AM on Monday, and the initial state
looks like:

.. code-block:: console

   $ ls -la /etc/keystone/fernet-keys/
   drwx------ 2 keystone keystone 4096 .
   drwxr-xr-x 3 keystone keystone 4096 ..
   -rw------- 1 keystone keystone 44 0 (staged key)
   -rw------- 1 keystone keystone 44 1 (primary key)

All tokens created after 6:00 AM are encrypted with key ``1``. At 12:00 PM we
will rotate keys again, resulting in:

.. code-block:: console

   $ ls -la /etc/keystone/fernet-keys/
   drwx------ 2 keystone keystone 4096 .
   drwxr-xr-x 3 keystone keystone 4096 ..
   -rw------- 1 keystone keystone 44 0 (staged key)
   -rw------- 1 keystone keystone 44 1 (secondary key)
   -rw------- 1 keystone keystone 44 2 (primary key)

We are still able to validate tokens created between 6:00 - 11:59 AM because
the ``1`` key still exists as a secondary key. All tokens issued after 12:00 PM
will be encrypted with key ``2``. At 6:00 PM we do our next rotation, resulting
in:

.. code-block:: console

   $ ls -la /etc/keystone/fernet-keys/
   drwx------ 2 keystone keystone 4096 .
   drwxr-xr-x 3 keystone keystone 4096 ..
   -rw------- 1 keystone keystone 44 0 (staged key)
   -rw------- 1 keystone keystone 44 1 (secondary key)
   -rw------- 1 keystone keystone 44 2 (secondary key)
   -rw------- 1 keystone keystone 44 3 (primary key)

It is still possible to validate tokens issued from 6:00 AM - 5:59 PM because
keys ``1`` and ``2`` exist as secondary keys. Every token issued until 11:59 PM
will be encrypted with key ``3``, and at 12:00 AM we do our next rotation:

.. code-block:: console

   $ ls -la /etc/keystone/fernet-keys/
   drwx------ 2 keystone keystone 4096 .
   drwxr-xr-x 3 keystone keystone 4096 ..
   -rw------- 1 keystone keystone 44 0 (staged key)
   -rw------- 1 keystone keystone 44 1 (secondary key)
   -rw------- 1 keystone keystone 44 2 (secondary key)
   -rw------- 1 keystone keystone 44 3 (secondary key)
   -rw------- 1 keystone keystone 44 4 (primary key)

Just like before, we can still validate tokens issued from 6:00 AM the previous
day until 5:59 AM today because keys ``1`` - ``4`` are present. At 6:00 AM,
tokens issued from the previous day will start to expire and we do our next
scheduled rotation:

.. code-block:: console

   $ ls -la /etc/keystone/fernet-keys/
   drwx------ 2 keystone keystone 4096 .
   drwxr-xr-x 3 keystone keystone 4096 ..
   -rw------- 1 keystone keystone 44 0 (staged key)
   -rw------- 1 keystone keystone 44 1 (secondary key)
   -rw------- 1 keystone keystone 44 2 (secondary key)
   -rw------- 1 keystone keystone 44 3 (secondary key)
   -rw------- 1 keystone keystone 44 4 (secondary key)
   -rw------- 1 keystone keystone 44 5 (primary key)

Tokens will naturally expire after 6:00 AM, but we will not be able to remove
key ``1`` until the next rotation because it encrypted all tokens from 6:00 AM
to 12:00 PM the day before. Once we do our next rotation, which is at 12:00 PM,
the ``1`` key will be pruned from the repository:

.. code-block:: console

   $ ls -la /etc/keystone/fernet-keys/
   drwx------ 2 keystone keystone 4096 .
   drwxr-xr-x 3 keystone keystone 4096 ..
   -rw------- 1 keystone keystone 44 0 (staged key)
   -rw------- 1 keystone keystone 44 2 (secondary key)
   -rw------- 1 keystone keystone 44 3 (secondary key)
   -rw------- 1 keystone keystone 44 4 (secondary key)
   -rw------- 1 keystone keystone 44 5 (secondary key)
   -rw------- 1 keystone keystone 44 6 (primary key)

If keystone were to receive a token that was created between 6:00 AM and 12:00
PM the day before, encrypted with the ``1`` key, it would not be valid because
it was already expired. This makes it possible for us to remove the ``1`` key
from the repository without negative validation side-effects.

.. _integrate-identity-with-ldap:

============================
Integrate Identity with LDAP
============================

The OpenStack Identity service supports integration with existing LDAP
directories for authentication and authorization services. LDAP back
ends require initialization before configuring the OpenStack Identity
service to work with them. For more information, see `Setting up LDAP
for use with Keystone <https://wiki.openstack.org/wiki/OpenLDAP>`__.

When the OpenStack Identity service is configured to use LDAP back ends,
you can split authentication (using the *identity* feature) and
authorization (using the *assignment* feature).

The *identity* feature enables administrators to manage users and groups
per domain or for the OpenStack Identity service as a whole.

The *assignment* feature enables administrators to manage project role
authorization using the OpenStack Identity service SQL database, while
providing user authentication through the LDAP directory.

.. _identity_ldap_server_setup:

Identity LDAP server set up
~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. important::

   For the OpenStack Identity service to access LDAP servers, you must
   enable the ``authlogin_nsswitch_use_ldap`` boolean value for SELinux
   on the server running the OpenStack Identity service. To enable and
   make the option persistent across reboots, set the following boolean
   value as the root user:

   .. code-block:: console

      # setsebool -P authlogin_nsswitch_use_ldap on

The Identity configuration is split into two separate back ends: identity
(back end for users and groups), and assignment (back end for domains,
projects, roles, role assignments). To configure Identity, set options
in the ``/etc/keystone/keystone.conf`` file. See
:ref:`integrate-identity-backend-ldap` for Identity back end configuration
examples. Modify these examples as needed.

**To define the destination LDAP server**

#. Define the destination LDAP server in the
   ``/etc/keystone/keystone.conf`` file:

   .. code-block:: ini

      [ldap]
      url = ldap://localhost
      user = dc=Manager,dc=example,dc=org
      password = samplepassword
      suffix = dc=example,dc=org

**Additional LDAP integration settings**

Set these options in the ``/etc/keystone/keystone.conf`` file for a
single LDAP server, or ``/etc/keystone/domains/keystone.DOMAIN_NAME.conf``
files for multiple back ends. Example configurations appear below each
setting summary:

**Query option**

.. hlist::
   :columns: 1

   * Use ``query_scope`` to control the scope level of data presented
     (search only the first level or search an entire sub-tree)
     through LDAP.
   * Use ``page_size`` to control the maximum results per page. A value
     of zero disables paging.
   * Use ``alias_dereferencing`` to control the LDAP dereferencing
     option for queries.
   * Use ``chase_referrals`` to override the system's default referral
     chasing behavior for queries.

.. code-block:: ini

   [ldap]
   query_scope = sub
   page_size = 0
   alias_dereferencing = default
   chase_referrals =

**Debug**

Use ``debug_level`` to set the LDAP debugging level for LDAP calls.
A value of zero means that debugging is not enabled.

.. code-block:: ini

   [ldap]
   debug_level = 0

.. warning::

   This value is a bitmask; consult your LDAP documentation for
   possible values.

**Connection pooling**

Use ``use_pool`` to enable LDAP connection pooling. Configure the
connection pool size, maximum retry, reconnect trials, timeout (-1
indicates indefinite wait) and lifetime in seconds.

.. code-block:: ini

   [ldap]
   use_pool = true
   pool_size = 10
   pool_retry_max = 3
   pool_retry_delay = 0.1
   pool_connection_timeout = -1
   pool_connection_lifetime = 600

**Connection pooling for end user authentication**

Use ``use_auth_pool`` to enable LDAP connection pooling for end user
authentication. Configure the connection pool size and lifetime in
seconds.

.. code-block:: ini

   [ldap]
   use_auth_pool = false
   auth_pool_size = 100
   auth_pool_connection_lifetime = 60

When you have finished the configuration, restart the OpenStack Identity
service.

.. warning::

   During the service restart, authentication and authorization are
   unavailable.

.. _integrate-identity-backend-ldap:

Integrate Identity back end with LDAP
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The Identity back end contains information for users, groups, and group
member lists. Integrating the Identity back end with LDAP allows
administrators to use users and groups in LDAP.

.. important::

   For the OpenStack Identity service to access LDAP servers, you must
   define the destination LDAP server in the
   ``/etc/keystone/keystone.conf`` file. For more information,
   see :ref:`identity_ldap_server_setup`.

**To integrate one Identity back end with LDAP**

#. Enable the LDAP Identity driver in the ``/etc/keystone/keystone.conf``
   file. This allows LDAP as an identity back end:

   .. code-block:: ini

      [identity]
      #driver = sql
      driver = ldap

#. Create the organizational units (OU) in the LDAP directory, and define
   the corresponding location in the ``/etc/keystone/keystone.conf``
   file:

   .. code-block:: ini

      [ldap]
      user_tree_dn = ou=Users,dc=example,dc=org
      user_objectclass = inetOrgPerson

      group_tree_dn = ou=Groups,dc=example,dc=org
      group_objectclass = groupOfNames

   .. note::

      These schema attributes are extensible for compatibility with
      various schemas. For example, this entry maps to the person
      attribute in Active Directory:

      .. code-block:: ini

         user_objectclass = person

#. A read-only implementation is recommended for LDAP integration. These
   permissions are applied to object types in the
   ``/etc/keystone/keystone.conf`` file:

   .. code-block:: ini

      [ldap]
      user_allow_create = False
      user_allow_update = False
      user_allow_delete = False

      group_allow_create = False
      group_allow_update = False
      group_allow_delete = False

#. Restart the OpenStack Identity service.

   .. warning::

      During service restart, authentication and authorization are
      unavailable.

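Once the service is back up, you can sanity-check the integration by listing
users through the Identity API; if the LDAP back end is wired up correctly,
users from your LDAP tree should appear. This sketch assumes admin credentials
are loaded in your environment and that the users live in the ``default``
domain:

.. code-block:: console

   $ openstack user list --domain default
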
**To integrate multiple Identity back ends with LDAP**

#. Set the following options in the ``/etc/keystone/keystone.conf``
   file:

   #. Enable the LDAP driver:

      .. code-block:: ini

         [identity]
         #driver = sql
         driver = ldap

   #. Enable domain-specific drivers:

      .. code-block:: ini

         [identity]
         domain_specific_drivers_enabled = True
         domain_config_dir = /etc/keystone/domains

#. Restart the OpenStack Identity service.

   .. warning::

      During service restart, authentication and authorization are
      unavailable.

#. List the domains using the dashboard, or the OpenStackClient CLI. Refer
   to the `Command List
   <https://docs.openstack.org/developer/python-openstackclient/command-list.html>`__
   for a list of OpenStackClient commands.

#. Create domains using the OpenStack dashboard, or the OpenStackClient CLI
   (a sketch of the CLI commands appears after this procedure).

#. For each domain, create a domain-specific configuration file in the
   ``/etc/keystone/domains`` directory. Use the file naming convention
   ``keystone.DOMAIN_NAME.conf``, where DOMAIN_NAME is the domain name
   assigned in the previous step.

   .. note::

      The options set in the
      ``/etc/keystone/domains/keystone.DOMAIN_NAME.conf`` file will
      override options in the ``/etc/keystone/keystone.conf`` file.

#. Define the destination LDAP server in the
   ``/etc/keystone/domains/keystone.DOMAIN_NAME.conf`` file. For example:

   .. code-block:: ini

      [ldap]
      url = ldap://localhost
      user = dc=Manager,dc=example,dc=org
      password = samplepassword
      suffix = dc=example,dc=org

#. Create the organizational units (OU) in the LDAP directories, and define
   their corresponding locations in the
   ``/etc/keystone/domains/keystone.DOMAIN_NAME.conf`` file. For example:

   .. code-block:: ini

      [ldap]
      user_tree_dn = ou=Users,dc=example,dc=org
      user_objectclass = inetOrgPerson

      group_tree_dn = ou=Groups,dc=example,dc=org
      group_objectclass = groupOfNames

   .. note::

      These schema attributes are extensible for compatibility with
      various schemas. For example, this entry maps to the person
      attribute in Active Directory:

      .. code-block:: ini

         user_objectclass = person

#. A read-only implementation is recommended for LDAP integration. These
   permissions are applied to object types in the
   ``/etc/keystone/domains/keystone.DOMAIN_NAME.conf`` file:

   .. code-block:: ini

      [ldap]
      user_allow_create = False
      user_allow_update = False
      user_allow_delete = False

      group_allow_create = False
      group_allow_update = False
      group_allow_delete = False

#. Restart the OpenStack Identity service.

   .. warning::

      During service restart, authentication and authorization are
      unavailable.

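For the domain steps in the procedure above, the OpenStackClient equivalents
are, as a sketch (``DOMAIN_NAME`` and the description are placeholders):

.. code-block:: console

   $ openstack domain list
   $ openstack domain create --description "LDAP-backed domain" DOMAIN_NAME
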
**Additional LDAP integration settings**

Set these options in the ``/etc/keystone/keystone.conf`` file for a
single LDAP server, or ``/etc/keystone/domains/keystone.DOMAIN_NAME.conf``
files for multiple back ends. Example configurations appear below each
setting summary:

Filters
   Use filters to control the scope of data presented through LDAP.

   .. code-block:: ini

      [ldap]
      user_filter = (memberof=cn=openstack-users,ou=workgroups,dc=example,dc=org)
      group_filter =

Identity attribute mapping
   Mask account status values (include any additional attribute
   mappings) for compatibility with various directory services.
   Superfluous accounts are filtered with ``user_filter``.

   Set the attribute ignore options to a list of attributes that are
   stripped off on update.

   For example, you can mask Active Directory account status attributes
   in the ``/etc/keystone/keystone.conf`` file:

   .. code-block:: ini

      [ldap]
      user_id_attribute = cn
      user_name_attribute = sn
      user_mail_attribute = mail
      user_pass_attribute = userPassword
      user_enabled_attribute = userAccountControl
      user_enabled_mask = 2
      user_enabled_invert = false
      user_enabled_default = 512
      user_default_project_id_attribute =
      user_additional_attribute_mapping =

      group_id_attribute = cn
      group_name_attribute = ou
      group_member_attribute = member
      group_desc_attribute = description
      group_additional_attribute_mapping =

Enabled emulation
   An alternative method to determine if a user is enabled or not is by
   checking if that user is a member of the emulation group.

   Use the DN of the group entry that holds enabled users when using
   enabled emulation.

   .. code-block:: ini

      [ldap]
      user_enabled_emulation = false
      user_enabled_emulation_dn = false

When you have finished configuration, restart the OpenStack Identity
service.

.. warning::

   During service restart, authentication and authorization are
   unavailable.

Secure the OpenStack Identity service connection to an LDAP back end
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The Identity service supports the use of TLS to encrypt LDAP traffic.
Before configuring this, you must first verify where your certificate
authority file is located. For more information, see the
`OpenStack Security Guide SSL introduction <https://docs.openstack.org/
security-guide/secure-communication/introduction-to-ssl-and-tls.html>`_.

Once you verify the location of your certificate authority file:

**To configure TLS encryption on LDAP traffic**

#. Open the ``/etc/keystone/keystone.conf`` configuration file.

#. Find the ``[ldap]`` section.

#. In the ``[ldap]`` section, set the ``use_tls`` configuration key to
   ``True``. Doing so will enable TLS.

#. Configure the Identity service to use your certificate authorities file.
   To do so, set the ``tls_cacertfile`` configuration key in the ``ldap``
   section to the certificate authorities file's path.

   .. note::

      You can also set the ``tls_cacertdir`` (also in the ``ldap``
      section) to the directory where all certificate authorities files
      are kept. If both ``tls_cacertfile`` and ``tls_cacertdir`` are set,
      then the latter will be ignored.

#. Specify what client certificate checks to perform on incoming TLS
   sessions from the LDAP server. To do so, set the ``tls_req_cert``
   configuration key in the ``[ldap]`` section to ``demand``, ``allow``, or
   ``never``:

   .. hlist::
      :columns: 1

      * ``demand`` - The LDAP server always receives certificate
        requests. The session terminates if no certificate
        is provided, or if the certificate provided cannot be verified
        against the existing certificate authorities file.
      * ``allow`` - The LDAP server always receives certificate
        requests. The session will proceed as normal even if a certificate
        is not provided. If a certificate is provided but it cannot be
        verified against the existing certificate authorities file, the
        certificate will be ignored and the session will proceed as
        normal.
      * ``never`` - A certificate will never be requested.

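Putting the steps above together, a sketch of the resulting ``[ldap]``
section might look like this (the certificate path is a placeholder for
your own CA file):

.. code-block:: ini

   [ldap]
   use_tls = True
   # Placeholder path; point this at your own CA bundle.
   tls_cacertfile = /etc/ssl/certs/ca-bundle.crt
   tls_req_cert = demand
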
On distributions that include openstack-config, you can configure TLS
encryption on LDAP traffic by running the following commands instead.

.. code-block:: console

   # openstack-config --set /etc/keystone/keystone.conf \
     ldap use_tls True
   # openstack-config --set /etc/keystone/keystone.conf \
     ldap tls_cacertfile CA_FILE
   # openstack-config --set /etc/keystone/keystone.conf \
     ldap tls_req_cert CERT_BEHAVIOR

Where:

- ``CA_FILE`` is the absolute path to the certificate authorities file
  that should be used to encrypt LDAP traffic.

- ``CERT_BEHAVIOR`` specifies what client certificate checks to perform
  on an incoming TLS session from the LDAP server (``demand``,
  ``allow``, or ``never``).

Example usage and Identity features
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The ``openstack`` CLI is used to interact with the Identity service.
It is set up to expect commands in the general
form of ``openstack command argument``, followed by flag-like keyword
arguments to provide additional (often optional) information. For
example, the :command:`openstack user list` and
:command:`openstack project create` commands can be invoked as follows:

.. code-block:: bash

   # Using token auth env variables
   export OS_SERVICE_ENDPOINT=http://127.0.0.1:5000/v2.0/
   export OS_SERVICE_TOKEN=secrete_token
   openstack user list
   openstack project create demo --domain default

   # Using token auth flags
   openstack --os-token secrete --os-endpoint http://127.0.0.1:5000/v2.0/ user list
   openstack --os-token secrete --os-endpoint http://127.0.0.1:5000/v2.0/ project create demo

   # Using user + password + project_name env variables
   export OS_USERNAME=admin
   export OS_PASSWORD=secrete
   export OS_PROJECT_NAME=admin
   openstack user list
   openstack project create demo --domain default

   # Using user + password + project-name flags
   openstack --os-username admin --os-password secrete --os-project-name admin user list
   openstack --os-username admin --os-password secrete --os-project-name admin project create demo

Logging
-------

You configure logging externally to the rest of Identity. The name of
the file specifying the logging configuration is set using the
``log_config`` option in the ``[DEFAULT]`` section of the
``/etc/keystone/keystone.conf`` file. To route logging through syslog,
set ``use_syslog=true`` in the ``[DEFAULT]`` section.

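For example, in ``/etc/keystone/keystone.conf`` (the logging configuration
file path is a placeholder):

.. code-block:: ini

   [DEFAULT]
   # Placeholder path to an external logging configuration file.
   log_config = /etc/keystone/logging.conf
   use_syslog = true
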
A sample logging configuration file is available with the project in
``etc/logging.conf.sample``. Like other OpenStack projects, Identity
uses the Python logging module, which provides extensive configuration
options that let you define the output levels and formats.

User CRUD
---------

Identity provides a user CRUD (Create, Read, Update, and Delete) filter that
administrators can add to the ``public_api`` pipeline. The user CRUD filter
enables users to use an HTTP PATCH to change their own password. To enable
this extension you should define a ``user_crud_extension`` filter, insert
it after the ``*_body`` middleware and before the ``public_service``
application in the ``public_api`` WSGI pipeline in
``keystone-paste.ini``. For example:

.. code-block:: ini

   [filter:user_crud_extension]
   paste.filter_factory = keystone.contrib.user_crud:CrudExtension.factory

   [pipeline:public_api]
   pipeline = sizelimit url_normalize request_id build_auth_context token_auth admin_token_auth json_body ec2_extension user_crud_extension public_service

Each user can then change their own password with an HTTP PATCH.

.. code-block:: console

   $ curl -X PATCH http://localhost:5000/v2.0/OS-KSCRUD/users/USERID -H "Content-type: application/json" \
     -H "X_Auth_Token: AUTHTOKENID" -d '{"user": {"password": "ABCD", "original_password": "DCBA"}}'

In addition to changing their password, all current tokens for the user
are invalidated.

.. note::

   Only use a KVS back end for tokens when testing.

.. _identity_management:

===================
Identity management
===================

OpenStack Identity, code-named keystone, is the default Identity
management system for OpenStack. After you install Identity, you
configure it through the ``/etc/keystone/keystone.conf``
configuration file and, possibly, a separate logging configuration
file. You initialize data into Identity by using the ``keystone``
command-line client.

.. toctree::
   :maxdepth: 1

   identity-concepts.rst
   identity-certificates-for-pki.rst
   identity-domain-specific-config.rst
   identity-external-authentication.rst
   identity-integrate-with-ldap.rst
   identity-tokens.rst
   identity-token-binding.rst
   identity-fernet-token-faq.rst
   identity-use-trusts.rst
   identity-caching-layer.rst
   identity-security-compliance.rst
   identity-keystone-usage-and-features.rst
   identity-auth-token-middleware.rst
   identity-service-api-protection.rst
   identity-troubleshoot.rst

.. _identity_security_compliance:

===============================
Security compliance and PCI-DSS
===============================

As of the Newton release, the Identity service contains additional security
compliance features, specifically to satisfy Payment Card Industry -
Data Security Standard (PCI-DSS) v3.1 requirements. See
`Security Hardening PCI-DSS`_ for more information on PCI-DSS.

Security compliance features are disabled by default and most of the features
only apply to the SQL back end for the identity driver. Other identity back
ends, such as LDAP, should implement their own security controls.

Enable these features by changing the configuration settings under the
``[security_compliance]`` section in ``keystone.conf``.

Setting the account lockout threshold
-------------------------------------

The account lockout feature limits the number of incorrect password attempts.
If a user fails to authenticate after the maximum number of attempts, the
service disables the user. Re-enable the user by explicitly setting the
``enabled`` user attribute with the update user API call, either
`v2.0`_ or `v3`_.

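With the OpenStackClient this looks like the following sketch, where
``USER_ID`` is a placeholder for the locked-out user:

.. code-block:: console

   $ openstack user set --enable USER_ID
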
You set the maximum number of failed authentication attempts by setting
the ``lockout_failure_attempts``:

.. code-block:: ini

   [security_compliance]
   lockout_failure_attempts = 6

You set the length of time (in seconds) that a user is locked out by setting
the ``lockout_duration``:

.. code-block:: ini

   [security_compliance]
   lockout_duration = 1800

If you do not set the ``lockout_duration``, users may be locked out
indefinitely until the user is explicitly enabled via the API.

Disabling inactive users
------------------------

PCI-DSS 8.1.4 requires that inactive user accounts be removed or disabled
within 90 days. You can achieve this by setting the
``disable_user_account_days_inactive``:

.. code-block:: ini

   [security_compliance]
   disable_user_account_days_inactive = 90

The above example means that users who have not authenticated (inactive) for
the past 90 days are automatically disabled. Users can be re-enabled by
explicitly setting the ``enabled`` user attribute via the API.

Configuring password expiration
-------------------------------

Passwords can be configured to expire within a certain number of days by
setting the ``password_expires_days``:

.. code-block:: ini

   [security_compliance]
   password_expires_days = 90

Once set, any new password changes have an expiration date based on the
date/time of the password change plus the number of days defined here. Existing
passwords will not be impacted. If you want existing passwords to have an
expiration date, you would need to run a SQL script against the password table
in the database to update the ``expires_at`` column.

In addition, you can set it so that passwords never expire for some users by
adding their user ID to the ``password_expires_ignore_user_ids`` list:

.. code-block:: ini

   [security_compliance]
   password_expires_ignore_user_ids = [3a54353c9dcc44f690975ea768512f6a]

In this example, the password for user ID ``3a54353c9dcc44f690975ea768512f6a``
would never expire.

Indicating password strength requirements
-----------------------------------------

You set password strength requirements, such as requiring numbers in passwords
or setting a minimum password length, by adding a regular expression to the
``password_regex``:

.. code-block:: ini

   [security_compliance]
   password_regex = ^(?=.*\d)(?=.*[a-zA-Z]).{7,}$

The above example is a regular expression that requires a password to have
one letter, one digit, and a minimum length of seven characters.

If you do set the ``password_regex``, you should provide text that
describes your password strength requirements. You can do this by setting the
``password_regex_description``:

.. code-block:: ini

   [security_compliance]
   password_regex_description = Passwords must contain at least 1 letter, 1
                                digit, and be a minimum length of 7
                                characters.

The service returns that description to users to explain why their requested
password did not meet requirements.

.. note::

   You must ensure the ``password_regex_description`` accurately and
   completely describes the ``password_regex``. If the two options are out of
   sync, the help text could inaccurately describe the password requirements
   being applied to the password. This would lead to poor user experience.

Requiring a unique password history
-----------------------------------

The password history requirement controls the number of passwords for a user
that must be unique before an old password can be reused. You can enforce this
by setting the ``unique_last_password_count``:

.. code-block:: ini

   [security_compliance]
   unique_last_password_count = 5

The above example does not allow a user to create a new password that is the
same as any of their last four previous passwords.

Similarly, you can set the number of days that a password must be used before
the user can change it by setting the ``minimum_password_age``:

.. code-block:: ini

   [security_compliance]
   minimum_password_age = 1

In the above example, once a user changes their password, they would not be
able to change it again for one day. This prevents users from changing their
passwords immediately in order to wipe out their password history and reuse an
old password.

.. note::

   When you set ``password_expires_days``, the value for the
   ``minimum_password_age`` should be less than the ``password_expires_days``.
   Otherwise, users would not be able to change their passwords before they
   expire.

.. _Security Hardening PCI-DSS: https://specs.openstack.org/openstack/keystone-specs/specs/keystone/newton/pci-dss.html

.. _v2.0: https://developer.openstack.org/api-ref/identity/v2-admin/index.html?expanded=update-user-admin-endpoint-detail#update-user-admin-endpoint

.. _v3: https://developer.openstack.org/api-ref/identity/v3/index.html#update-user

=============================================================
Identity API protection with role-based access control (RBAC)
=============================================================

Like most OpenStack projects, Identity supports the protection of its
APIs by defining policy rules based on an RBAC approach. Identity stores
a reference to a policy JSON file in the main Identity configuration
file, ``/etc/keystone/keystone.conf``. Typically this file is named
``policy.json``, and contains the rules for which roles have access to
certain actions in defined services.

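As a trimmed sketch of what such a file contains, here are two entries from a
typical default ``policy.json``:

.. code-block:: json

   {
       "admin_required": "role:admin or is_admin:1",
       "identity:list_users": "rule:admin_required"
   }
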
Each Identity API v3 call has a line in the policy file that dictates
which level of access governance applies.

.. code-block:: none

   API_NAME: RULE_STATEMENT or MATCH_STATEMENT

Where:

``RULE_STATEMENT`` can contain ``RULE_STATEMENT`` or
``MATCH_STATEMENT``.

``MATCH_STATEMENT`` is a set of identifiers that must match between the
token provided by the caller of the API and the parameters or target
entities of the API call in question. For example:

.. code-block:: none

   "identity:create_user": "role:admin and domain_id:%(user.domain_id)s"

This example indicates that to create a user, you must have the admin role in
your token. The ``domain_id`` in your token must match the
``domain_id`` in the user object that you are trying
to create, which implies this must be a domain-scoped token.
In other words, you must have the admin role on the domain
in which you are creating the user, and the token that you use
must be scoped to that domain.

Each component of a match statement uses this format:

.. code-block:: none

   ATTRIB_FROM_TOKEN:CONSTANT or ATTRIB_RELATED_TO_API_CALL

The Identity service expects these attributes:

Attributes from token:

- ``user_id``
- ``domain_id``
- ``project_id``

The ``project_id`` attribute requirement depends on the scope, and the
list of roles you have within that scope.

Attributes related to API call:

- ``user.domain_id``
- Any parameters passed into the API call
- Any filters specified in the query string

You reference attributes of objects passed with an object.attribute
syntax (such as ``user.domain_id``). The target objects of an API are
also available using a target.object.attribute syntax. For instance:

.. code-block:: none

   "identity:delete_user": "role:admin and domain_id:%(target.user.domain_id)s"

would ensure that Identity only deletes the user object in the same
domain as the provided token.

Every target object has an ``id`` and a ``name`` available as
``target.OBJECT.id`` and ``target.OBJECT.name``. Identity retrieves
other attributes from the database, and the attributes vary between
object types. The Identity service filters out some database fields,
such as user passwords.

List of object attributes:

.. code-block:: yaml

   role:
       target.role.id
       target.role.name

   user:
       target.user.default_project_id
       target.user.description
       target.user.domain_id
       target.user.enabled
       target.user.id
       target.user.name

   group:
       target.group.description
       target.group.domain_id
       target.group.id
       target.group.name

   domain:
       target.domain.enabled
       target.domain.id
       target.domain.name

   project:
       target.project.description
       target.project.domain_id
       target.project.enabled
       target.project.id
       target.project.name

The default ``policy.json`` file supplied provides a somewhat
basic example of API protection, and does not assume any particular
use of domains. Refer to ``policy.v3cloudsample.json`` as an
example of multi-domain configuration installations where a cloud
provider wants to delegate administration of the contents of a domain
to a particular ``admin domain``. This example policy file also
shows the use of an ``admin_domain`` to allow a cloud provider to
enable administrators to have wider access across the APIs.

A clean installation could start with the standard policy file, to
allow creation of the ``admin_domain`` with the first users within
it. You could then obtain the ``domain_id`` of the admin domain,
paste the ID into a modified version of
``policy.v3cloudsample.json``, and then enable it as the main
policy file.