More install doc changes; data and sriov interfaces.

Lots of changes, mostly around configuration of data and
pci-sriov interfaces.

Change-Id: Ib1ca186bd150c4c58d7ff3320d04dd4af25826c1
Greg Waines 2021-05-16 18:19:16 -04:00
parent 04dd295ee9
commit c8e69817aa
5 changed files with 962 additions and 689 deletions

View File

@ -77,81 +77,7 @@ Configure worker nodes
system interface-network-assign $NODE mgmt0 cluster-host
done
#. Configure data interfaces for worker nodes. Use the DATA port names, for
example eth0, that are applicable to your deployment environment.
This step is optional for Kubernetes. Do this step if using |SRIOV| network
attachments in hosted application containers.
.. only:: starlingx
.. important::
This step is **required** for OpenStack.
* Configure the data interfaces
.. code-block:: bash
# Execute the following lines with
export NODE=worker-0
# and then repeat with
export NODE=worker-1
# List the inventoried host's ports and identify ports to be used as data interfaces,
# based on the displayed Linux port name, PCI address and device type.
system host-port-list ${NODE}
# List the host's auto-configured ethernet interfaces,
# find the interfaces corresponding to the ports identified in the previous step, and
# take note of their UUIDs
system host-if-list -a ${NODE}
# Modify the configuration of these interfaces,
# configuring them as data class interfaces, with an MTU of 1500 and named data#
system host-if-modify -m 1500 -n data0 -c data ${NODE} <data0-if-uuid>
system host-if-modify -m 1500 -n data1 -c data ${NODE} <data1-if-uuid>
# Previously configured Data Networks
PHYSNET0='physnet0'
PHYSNET1='physnet1'
system datanetwork-add ${PHYSNET0} vlan
system datanetwork-add ${PHYSNET1} vlan
# Assign Data Networks to Data Interfaces
system interface-datanetwork-assign ${NODE} <data0-if-uuid> ${PHYSNET0}
system interface-datanetwork-assign ${NODE} <data1-if-uuid> ${PHYSNET1}
* To enable using |SRIOV| network attachments for the above interfaces in
Kubernetes hosted application containers:
* Configure the |SRIOV| device plugin:
::
for NODE in worker-0 worker-1; do
system host-label-assign $NODE sriovdp=enabled
done
* If planning on running |DPDK| in containers on this host, configure the
number of 1G Huge pages required on both |NUMA| nodes:
::
for NODE in worker-0 worker-1; do
# assign 10x 1G huge page on processor/numa-node 0 on worker-node to applications
system host-memory-modify -f application $NODE 0 -1G 10
# assign 10x 1G huge page on processor/numa-node 1 on worker-node to applications
system host-memory-modify -f application $NODE 1 -1G 10
done
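# A hedged verification: list per-NUMA-node memory on each worker and
# confirm the 1G huge page assignments (output columns vary by release)
for NODE in worker-0 worker-1; do
system host-memory-list $NODE
done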
.. only:: starlingx
.. only:: openstack
*************************************
OpenStack-specific host configuration
@ -159,7 +85,7 @@ Configure worker nodes
.. important::
**This step is required only if the StarlingX OpenStack application
**These steps are required only if the StarlingX OpenStack application
(stx-openstack) will be installed.**
#. **For OpenStack only:** Assign OpenStack host labels to the worker nodes in
@ -248,6 +174,125 @@ Configure worker nodes
system host-pv-add ${NODE} nova-local ${NOVA_PARTITION_UUID}
done
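# A hedged verification: confirm the nova-local volume group and its
# physical volume were created on each worker
for NODE in worker-0 worker-1; do
system host-lvg-list ${NODE}
system host-pv-list ${NODE}
done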
#. **For OpenStack only:** Configure data interfaces for worker nodes.
Data class interfaces are vswitch interfaces used by vswitch to provide
VM virtio vNIC connectivity to OpenStack Neutron Tenant Networks on the
underlying assigned Data Network.
.. important::
A compute-labeled worker host **MUST** have at least one Data class interface.
* Configure the data interfaces for worker nodes.
::
# Execute the following lines with
export NODE=worker-0
# and then repeat with
export NODE=worker-1
# List the inventoried host's ports and identify ports to be used as data interfaces,
# based on the displayed Linux port name, PCI address and device type.
system host-port-list ${NODE}
# List the host's auto-configured ethernet interfaces,
# find the interfaces corresponding to the ports identified in the previous step, and
# take note of their UUIDs
system host-if-list -a ${NODE}
# Modify the configuration of these interfaces,
# configuring them as data class interfaces, with an MTU of 1500 and named data#
system host-if-modify -m 1500 -n data0 -c data ${NODE} <data0-if-uuid>
system host-if-modify -m 1500 -n data1 -c data ${NODE} <data1-if-uuid>
# Create Data Networks that vswitch 'data' interfaces will be connected to
DATANET0='datanet0'
DATANET1='datanet1'
system datanetwork-add ${DATANET0} vlan
system datanetwork-add ${DATANET1} vlan
# Assign Data Networks to Data Interfaces
system interface-datanetwork-assign ${NODE} <data0-if-uuid> ${DATANET0}
system interface-datanetwork-assign ${NODE} <data1-if-uuid> ${DATANET1}
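Before moving on, the data network setup can optionally be verified. A quick
sketch (these listing commands exist in recent StarlingX releases; output
varies):

::

# List the created data networks
system datanetwork-list
# List the data network assignments for the host's interfaces
system interface-datanetwork-list ${NODE}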
*****************************************
Optionally Configure PCI-SRIOV Interfaces
*****************************************
#. **Optionally**, configure pci-sriov interfaces for worker nodes.
This step is **optional** for Kubernetes. Do this step if using |SRIOV|
network attachments in hosted application containers.
.. only:: openstack
This step is **optional** for OpenStack. Do this step if using |SRIOV|
vNICs in hosted application VMs. Note that pci-sriov interfaces can
have the same Data Networks assigned to them as vswitch data interfaces.
* Configure the pci-sriov interfaces for worker nodes.
::
# Execute the following lines with
export NODE=worker-0
# and then repeat with
export NODE=worker-1
# List the inventoried host's ports and identify ports to be used as pci-sriov interfaces,
# based on the displayed Linux port name, PCI address and device type.
system host-port-list ${NODE}
# List the host's auto-configured ethernet interfaces,
# find the interfaces corresponding to the ports identified in the previous step, and
# take note of their UUIDs
system host-if-list -a ${NODE}
# Modify the configuration of these interfaces,
# configuring them as pci-sriov class interfaces, with an MTU of 1500 and named sriov#
system host-if-modify -m 1500 -n sriov0 -c pci-sriov ${NODE} <sriov0-if-uuid>
system host-if-modify -m 1500 -n sriov1 -c pci-sriov ${NODE} <sriov1-if-uuid>
# Create Data Networks that the 'pci-sriov' interfaces will be connected to
DATANET0='datanet0'
DATANET1='datanet1'
system datanetwork-add ${DATANET0} vlan
system datanetwork-add ${DATANET1} vlan
# Assign Data Networks to PCI-SRIOV Interfaces
system interface-datanetwork-assign ${NODE} <sriov0-if-uuid> ${DATANET0}
system interface-datanetwork-assign ${NODE} <sriov1-if-uuid> ${DATANET1}
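# Optionally, set the number of SR-IOV virtual functions (VFs) on each
# pci-sriov interface. A hedged example, assuming the -N (number of VFs)
# option of 'system host-if-modify' is available in your release; adjust
# the VF count to your NIC's limits:
system host-if-modify -N 8 ${NODE} <sriov0-if-uuid>
system host-if-modify -N 8 ${NODE} <sriov1-if-uuid>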
* To enable using |SRIOV| network attachments for the above interfaces in
Kubernetes hosted application containers:
* Configure the Kubernetes |SRIOV| device plugin.
::
for NODE in worker-0 worker-1; do
system host-label-assign $NODE sriovdp=enabled
done
* If planning on running |DPDK| in Kubernetes hosted application
containers on this host, configure the number of 1G Huge pages required
on both |NUMA| nodes.
::
for NODE in worker-0 worker-1; do
# assign 10x 1G huge page on processor/numa-node 0 on worker-node to applications
system host-memory-modify -f application $NODE 0 -1G 10
# assign 10x 1G huge page on processor/numa-node 1 on worker-node to applications
system host-memory-modify -f application $NODE 1 -1G 10
done
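For reference, after the worker nodes are unlocked and the |SRIOV| device
plugin is running, an |SRIOV| network attachment is typically consumed through
a Multus NetworkAttachmentDefinition that references the data network's
resource name. A minimal sketch, assuming the plugin exposes the data network
as intel.com/pci_sriov_net_datanet0 (names here are illustrative; check the
actual resource name reported on your system):

::

cat <<EOF | kubectl apply -f -
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: datanet0
  annotations:
    k8s.v1.cni.cncf.io/resourceName: intel.com/pci_sriov_net_datanet0
spec:
  config: '{
    "cniVersion": "0.3.0",
    "type": "sriov"
  }'
EOF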
-------------------
Unlock worker nodes

View File

@ -255,130 +255,6 @@ Configure controller-0
system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org
#. Configure data interfaces for controller-0. Use the DATA port names, for example
eth0, applicable to your deployment environment.
This step is optional for Kubernetes. Do this step if using |SRIOV| network
attachments in hosted application containers.
.. only:: starlingx
.. important::
This step is **required** for OpenStack.
* Configure the data interfaces
::
export NODE=controller-0
# List the inventoried host's ports and identify ports to be used as data interfaces,
# based on the displayed Linux port name, PCI address and device type.
system host-port-list ${NODE}
# List the host's auto-configured ethernet interfaces,
# find the interfaces corresponding to the ports identified in the previous step, and
# take note of their UUIDs
system host-if-list -a ${NODE}
# Modify the configuration of these interfaces,
# configuring them as data class interfaces, with an MTU of 1500 and named data#
system host-if-modify -m 1500 -n data0 -c data ${NODE} <data0-if-uuid>
system host-if-modify -m 1500 -n data1 -c data ${NODE} <data1-if-uuid>
# Create Data Networks
PHYSNET0='physnet0'
PHYSNET1='physnet1'
system datanetwork-add ${PHYSNET0} vlan
system datanetwork-add ${PHYSNET1} vlan
# Assign Data Networks to Data Interfaces
system interface-datanetwork-assign ${NODE} <data0-if-uuid> ${PHYSNET0}
system interface-datanetwork-assign ${NODE} <data1-if-uuid> ${PHYSNET1}
* To enable using |SRIOV| network attachments for the above interfaces in
Kubernetes hosted application containers:
* Configure the Kubernetes |SRIOV| device plugin.
::
system host-label-assign controller-0 sriovdp=enabled
* If planning on running |DPDK| in Kubernetes hosted application containers
on this host, configure the number of 1G Huge pages required on both
|NUMA| nodes.
::
# assign 10x 1G huge page on processor/numa-node 0 on controller-0 to applications
system host-memory-modify -f application controller-0 0 -1G 10
# assign 10x 1G huge page on processor/numa-node 1 on controller-0 to applications
system host-memory-modify -f application controller-0 1 -1G 10
***************************************************************
If required, initialize a Ceph-based Persistent Storage Backend
***************************************************************
A persistent storage backend is required if your application requires |PVCs|.
.. only:: starlingx
.. important::
The StarlingX OpenStack application **requires** |PVCs|.
There are two options for persistent storage backend: the host-based Ceph
solution and the Rook container-based Ceph solution.
For host-based Ceph:
#. Add the host-based Ceph backend:
::
system storage-backend-add ceph --confirmed
#. Add an |OSD| on controller-0 for host-based Ceph:
.. code-block:: bash
# List the host's disks and identify the disks you want to use for Ceph OSDs, taking note of their UUIDs
# By default, /dev/sda is used as the system disk and cannot be used for an OSD.
system host-disk-list controller-0
# Add disk as an OSD storage
system host-stor-add controller-0 osd <disk-uuid>
# List OSD storage devices
system host-stor-list controller-0
# Add disk as an OSD storage
system host-stor-add controller-0 osd <disk-uuid>
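Once the |OSDs| are configured and the host is unlocked, overall Ceph cluster
health can be checked from the controller with the standard Ceph client (a
hedged sketch; requires the host-based Ceph backend added above):

::

ceph -s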
.. only:: starlingx
For Rook container-based Ceph:
#. Add the Rook container-based Ceph backend:
::
system storage-backend-add ceph-rook --confirmed
#. Assign Rook host labels to controller-0 in support of installing the
rook-ceph-apps manifest/helm-charts later:
::
system host-label-assign controller-0 ceph-mon-placement=enabled
system host-label-assign controller-0 ceph-mgr-placement=enabled
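To confirm that the placement labels were applied, list the host's labels (a
verification sketch; command available in recent StarlingX releases):

::

system host-label-list controller-0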
.. only:: openstack
*************************************
@ -387,7 +263,7 @@ For host-based Ceph:
.. important::
**This step is required only if the StarlingX OpenStack application
**These steps are required only if the StarlingX OpenStack application
(stx-openstack) will be installed.**
#. **For OpenStack only:** Assign OpenStack host labels to controller-0 in
@ -498,6 +374,174 @@ For host-based Ceph:
system host-pv-add ${NODE} nova-local ${NOVA_PARTITION_UUID}
sleep 2
#. **For OpenStack only:** Configure data interfaces for controller-0.
Data class interfaces are vswitch interfaces used by vswitch to provide
VM virtio vNIC connectivity to OpenStack Neutron Tenant Networks on the
underlying assigned Data Network.
.. important::
A compute-labeled All-in-one controller host **MUST** have at least one Data class interface.
* Configure the data interfaces for controller-0.
::
export NODE=controller-0
# List the inventoried host's ports and identify ports to be used as data interfaces,
# based on the displayed Linux port name, PCI address and device type.
system host-port-list ${NODE}
# List the host's auto-configured ethernet interfaces,
# find the interfaces corresponding to the ports identified in the previous step, and
# take note of their UUIDs
system host-if-list -a ${NODE}
# Modify the configuration of these interfaces,
# configuring them as data class interfaces, with an MTU of 1500 and named data#
system host-if-modify -m 1500 -n data0 -c data ${NODE} <data0-if-uuid>
system host-if-modify -m 1500 -n data1 -c data ${NODE} <data1-if-uuid>
# Create Data Networks that vswitch 'data' interfaces will be connected to
DATANET0='datanet0'
DATANET1='datanet1'
system datanetwork-add ${DATANET0} vlan
system datanetwork-add ${DATANET1} vlan
# Assign Data Networks to Data Interfaces
system interface-datanetwork-assign ${NODE} <data0-if-uuid> ${DATANET0}
system interface-datanetwork-assign ${NODE} <data1-if-uuid> ${DATANET1}
*****************************************
Optionally Configure PCI-SRIOV Interfaces
*****************************************
#. **Optionally**, configure pci-sriov interfaces for controller-0.
This step is **optional** for Kubernetes. Do this step if using |SRIOV|
network attachments in hosted application containers.
.. only:: openstack
This step is **optional** for OpenStack. Do this step if using |SRIOV|
vNICs in hosted application VMs. Note that pci-sriov interfaces can
have the same Data Networks assigned to them as vswitch data interfaces.
* Configure the pci-sriov interfaces for controller-0.
::
export NODE=controller-0
# List the inventoried host's ports and identify ports to be used as pci-sriov interfaces,
# based on the displayed Linux port name, PCI address and device type.
system host-port-list ${NODE}
# List the host's auto-configured ethernet interfaces,
# find the interfaces corresponding to the ports identified in the previous step, and
# take note of their UUIDs
system host-if-list -a ${NODE}
# Modify the configuration of these interfaces,
# configuring them as pci-sriov class interfaces, with an MTU of 1500 and named sriov#
system host-if-modify -m 1500 -n sriov0 -c pci-sriov ${NODE} <sriov0-if-uuid>
system host-if-modify -m 1500 -n sriov1 -c pci-sriov ${NODE} <sriov1-if-uuid>
# Create Data Networks that the 'pci-sriov' interfaces will be connected to
DATANET0='datanet0'
DATANET1='datanet1'
system datanetwork-add ${DATANET0} vlan
system datanetwork-add ${DATANET1} vlan
# Assign Data Networks to PCI-SRIOV Interfaces
system interface-datanetwork-assign ${NODE} <sriov0-if-uuid> ${DATANET0}
system interface-datanetwork-assign ${NODE} <sriov1-if-uuid> ${DATANET1}
* To enable using |SRIOV| network attachments for the above interfaces in
Kubernetes hosted application containers:
* Configure the Kubernetes |SRIOV| device plugin.
::
system host-label-assign controller-0 sriovdp=enabled
* If planning on running |DPDK| in Kubernetes hosted application
containers on this host, configure the number of 1G Huge pages required
on both |NUMA| nodes.
::
# assign 10x 1G huge page on processor/numa-node 0 on controller-0 to applications
system host-memory-modify -f application controller-0 0 -1G 10
# assign 10x 1G huge page on processor/numa-node 1 on controller-0 to applications
system host-memory-modify -f application controller-0 1 -1G 10
***************************************************************
If required, initialize a Ceph-based Persistent Storage Backend
***************************************************************
A persistent storage backend is required if your application requires |PVCs|.
.. only:: openstack
.. important::
The StarlingX OpenStack application **requires** |PVCs|.
.. only:: starlingx
There are two options for persistent storage backend: the host-based Ceph
solution and the Rook container-based Ceph solution.
For host-based Ceph:
#. Add the host-based Ceph backend:
::
system storage-backend-add ceph --confirmed
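# A hedged check: list the storage backends and confirm the ceph backend
# reaches an applied/configured state (task output varies by release)
system storage-backend-list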
#. Add an |OSD| on controller-0 for host-based Ceph:
.. code-block:: bash
# List the host's disks and identify the disks you want to use for Ceph OSDs, taking note of their UUIDs
# By default, /dev/sda is used as the system disk and cannot be used for an OSD.
system host-disk-list controller-0
# Add disk as an OSD storage
system host-stor-add controller-0 osd <disk-uuid>
# List OSD storage devices
system host-stor-list controller-0
# Add disk as an OSD storage
system host-stor-add controller-0 osd <disk-uuid>
.. only:: starlingx
For Rook container-based Ceph:
#. Add the Rook container-based Ceph backend:
::
system storage-backend-add ceph-rook --confirmed
#. Assign Rook host labels to controller-0 in support of installing the
rook-ceph-apps manifest/helm-charts later:
::
system host-label-assign controller-0 ceph-mon-placement=enabled
system host-label-assign controller-0 ceph-mgr-placement=enabled
-------------------
Unlock controller-0
@ -579,107 +623,7 @@ Configure controller-1
system interface-network-assign controller-1 mgmt0 cluster-host
#. Configure data interfaces for controller-1. Use the DATA port names, for
example eth0, applicable to your deployment environment.
This step is optional for Kubernetes. Do this step if using |SRIOV|
network attachments in hosted application containers.
.. only:: starlingx
.. important::
This step is **required** for OpenStack.
* Configure the data interfaces
::
export NODE=controller-1
# List the inventoried host's ports and identify ports to be used as data interfaces,
# based on the displayed Linux port name, PCI address and device type.
system host-port-list ${NODE}
# List the host's auto-configured ethernet interfaces,
# find the interfaces corresponding to the ports identified in the previous step, and
# take note of their UUIDs
system host-if-list -a ${NODE}
# Modify the configuration of these interfaces,
# configuring them as data class interfaces, with an MTU of 1500 and named data#
system host-if-modify -m 1500 -n data0 -c data ${NODE} <data0-if-uuid>
system host-if-modify -m 1500 -n data1 -c data ${NODE} <data1-if-uuid>
# Previously created Data Networks
PHYSNET0='physnet0'
PHYSNET1='physnet1'
# Assign Data Networks to Data Interfaces
system interface-datanetwork-assign ${NODE} <data0-if-uuid> ${PHYSNET0}
system interface-datanetwork-assign ${NODE} <data1-if-uuid> ${PHYSNET1}
* To enable using |SRIOV| network attachments for the above interfaces in
Kubernetes hosted application containers:
* Configure the Kubernetes |SRIOV| device plugin:
::
system host-label-assign controller-1 sriovdp=enabled
* If planning on running |DPDK| in Kubernetes hosted application containers
on this host, configure the number of 1G Huge pages required on both
|NUMA| nodes:
::
# assign 10x 1G huge page on processor/numa-node 0 on controller-1 to applications
system host-memory-modify -f application controller-1 0 -1G 10
# assign 10x 1G huge page on processor/numa-node 1 on controller-1 to applications
system host-memory-modify -f application controller-1 1 -1G 10
***************************************************************************************
If configuring a Ceph-based Persistent Storage Backend, configure host-specific details
***************************************************************************************
For host-based Ceph:
#. Add an |OSD| on controller-1 for host-based Ceph:
::
# List the host's disks and identify the disks you want to use for Ceph OSDs, taking note of their UUIDs
# By default, /dev/sda is used as the system disk and cannot be used for an OSD.
system host-disk-list controller-1
# Add disk as an OSD storage
system host-stor-add controller-1 osd <disk-uuid>
# List OSD storage devices
system host-stor-list controller-1
# Add disk as an OSD storage
system host-stor-add controller-1 osd <disk-uuid>
.. only:: starlingx
For Rook container-based Ceph:
#. Assign Rook host labels to controller-1 in support of installing the
rook-ceph-apps manifest/helm-charts later:
::
system host-label-assign controller-1 ceph-mon-placement=enabled
system host-label-assign controller-1 ceph-mgr-placement=enabled
.. only:: openstack
*************************************
OpenStack-specific host configuration
@ -687,7 +631,7 @@ For host-based Ceph:
.. important::
**This step is required only if the StarlingX OpenStack application
**These steps are required only if the StarlingX OpenStack application
(stx-openstack) will be installed.**
#. **For OpenStack only:** Assign OpenStack host labels to controller-1 in
@ -762,6 +706,151 @@ For host-based Ceph:
system host-pv-add ${NODE} nova-local ${NOVA_PARTITION_UUID}
sleep 2
#. **For OpenStack only:** Configure data interfaces for controller-1.
Data class interfaces are vswitch interfaces used by vswitch to provide
VM virtio vNIC connectivity to OpenStack Neutron Tenant Networks on the
underlying assigned Data Network.
.. important::
A compute-labeled All-in-one controller host **MUST** have at least one Data class interface.
* Configure the data interfaces for controller-1.
::
export NODE=controller-1
# List the inventoried host's ports and identify ports to be used as data interfaces,
# based on the displayed Linux port name, PCI address and device type.
system host-port-list ${NODE}
# List the host's auto-configured ethernet interfaces,
# find the interfaces corresponding to the ports identified in the previous step, and
# take note of their UUIDs
system host-if-list -a ${NODE}
# Modify the configuration of these interfaces,
# configuring them as data class interfaces, with an MTU of 1500 and named data#
system host-if-modify -m 1500 -n data0 -c data ${NODE} <data0-if-uuid>
system host-if-modify -m 1500 -n data1 -c data ${NODE} <data1-if-uuid>
# Create Data Networks that vswitch 'data' interfaces will be connected to
DATANET0='datanet0'
DATANET1='datanet1'
system datanetwork-add ${DATANET0} vlan
system datanetwork-add ${DATANET1} vlan
# Assign Data Networks to Data Interfaces
system interface-datanetwork-assign ${NODE} <data0-if-uuid> ${DATANET0}
system interface-datanetwork-assign ${NODE} <data1-if-uuid> ${DATANET1}
*****************************************
Optionally Configure PCI-SRIOV Interfaces
*****************************************
#. **Optionally**, configure pci-sriov interfaces for controller-1.
This step is **optional** for Kubernetes. Do this step if using |SRIOV|
network attachments in hosted application containers.
.. only:: openstack
This step is **optional** for OpenStack. Do this step if using |SRIOV|
vNICs in hosted application VMs. Note that pci-sriov interfaces can
have the same Data Networks assigned to them as vswitch data interfaces.
* Configure the pci-sriov interfaces for controller-1.
::
export NODE=controller-1
# List the inventoried host's ports and identify ports to be used as pci-sriov interfaces,
# based on the displayed Linux port name, PCI address and device type.
system host-port-list ${NODE}
# List the host's auto-configured ethernet interfaces,
# find the interfaces corresponding to the ports identified in the previous step, and
# take note of their UUIDs
system host-if-list -a ${NODE}
# Modify the configuration of these interfaces,
# configuring them as pci-sriov class interfaces, with an MTU of 1500 and named sriov#
system host-if-modify -m 1500 -n sriov0 -c pci-sriov ${NODE} <sriov0-if-uuid>
system host-if-modify -m 1500 -n sriov1 -c pci-sriov ${NODE} <sriov1-if-uuid>
# Create Data Networks that the 'pci-sriov' interfaces will be connected to
DATANET0='datanet0'
DATANET1='datanet1'
system datanetwork-add ${DATANET0} vlan
system datanetwork-add ${DATANET1} vlan
# Assign Data Networks to PCI-SRIOV Interfaces
system interface-datanetwork-assign ${NODE} <sriov0-if-uuid> ${DATANET0}
system interface-datanetwork-assign ${NODE} <sriov1-if-uuid> ${DATANET1}
* To enable using |SRIOV| network attachments for the above interfaces in
Kubernetes hosted application containers:
* Configure the Kubernetes |SRIOV| device plugin.
::
system host-label-assign controller-1 sriovdp=enabled
* If planning on running |DPDK| in Kubernetes hosted application
containers on this host, configure the number of 1G Huge pages required
on both |NUMA| nodes.
::
# assign 10x 1G huge page on processor/numa-node 0 on controller-1 to applications
system host-memory-modify -f application controller-1 0 -1G 10
# assign 10x 1G huge page on processor/numa-node 1 on controller-1 to applications
system host-memory-modify -f application controller-1 1 -1G 10
***************************************************************************************
If configuring a Ceph-based Persistent Storage Backend, configure host-specific details
***************************************************************************************
For host-based Ceph:
#. Add an |OSD| on controller-1 for host-based Ceph:
::
# List the host's disks and identify the disks you want to use for Ceph OSDs, taking note of their UUIDs
# By default, /dev/sda is used as the system disk and cannot be used for an OSD.
system host-disk-list controller-1
# Add disk as an OSD storage
system host-stor-add controller-1 osd <disk-uuid>
# List OSD storage devices
system host-stor-list controller-1
# Add disk as an OSD storage
system host-stor-add controller-1 osd <disk-uuid>
.. only:: starlingx
For Rook container-based Ceph:
#. Assign Rook host labels to controller-1 in support of installing the
rook-ceph-apps manifest/helm-charts later:
::
system host-label-assign controller-1 ceph-mon-placement=enabled
system host-label-assign controller-1 ceph-mgr-placement=enabled
-------------------
Unlock controller-1
-------------------

View File

@ -236,130 +236,6 @@ The newly installed controller needs to be configured.
system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org
#. Configure data interfaces for controller-0. Use the DATA port names, for example
eth0, applicable to your deployment environment.
This step is **optional** for Kubernetes. Do this step if using |SRIOV|
network attachments in hosted application containers.
.. only:: starlingx
.. important::
This step is **required** for OpenStack.
* Configure the data interfaces.
::
export NODE=controller-0
# List the inventoried host's ports and identify ports to be used as data interfaces,
# based on the displayed Linux port name, PCI address and device type.
system host-port-list ${NODE}
# List the host's auto-configured ethernet interfaces,
# find the interfaces corresponding to the ports identified in the previous step, and
# take note of their UUIDs
system host-if-list -a ${NODE}
# Modify the configuration of these interfaces,
# configuring them as data class interfaces, with an MTU of 1500 and named data#
system host-if-modify -m 1500 -n data0 -c data ${NODE} <data0-if-uuid>
system host-if-modify -m 1500 -n data1 -c data ${NODE} <data1-if-uuid>
# Create Data Networks
PHYSNET0='physnet0'
PHYSNET1='physnet1'
system datanetwork-add ${PHYSNET0} vlan
system datanetwork-add ${PHYSNET1} vlan
# Assign Data Networks to Data Interfaces
system interface-datanetwork-assign ${NODE} <data0-if-uuid> ${PHYSNET0}
system interface-datanetwork-assign ${NODE} <data1-if-uuid> ${PHYSNET1}
* To enable using |SRIOV| network attachments for the above interfaces in
Kubernetes hosted application containers:
* Configure the Kubernetes |SRIOV| device plugin.
::
system host-label-assign controller-0 sriovdp=enabled
* If planning on running |DPDK| in Kubernetes hosted application
containers on this host, configure the number of 1G Huge pages required
on both |NUMA| nodes.
::
# assign 10x 1G huge page on processor/numa-node 0 on controller-0 to applications
system host-memory-modify -f application controller-0 0 -1G 10
# assign 10x 1G huge page on processor/numa-node 1 on controller-0 to applications
system host-memory-modify -f application controller-0 1 -1G 10
***************************************************************
If required, initialize a Ceph-based Persistent Storage Backend
***************************************************************
A persistent storage backend is required if your application requires
|PVCs|.
.. only:: starlingx
.. important::
The StarlingX OpenStack application **requires** |PVCs|.
There are two options for persistent storage backend: the host-based Ceph
solution and the Rook container-based Ceph solution.
For host-based Ceph:
#. Add the host-based Ceph backend:
::
system storage-backend-add ceph --confirmed
#. Add an |OSD| on controller-0 for host-based Ceph:
.. code-block:: bash
# List the host's disks and identify the disks you want to use for Ceph OSDs, taking note of their UUIDs
# By default, /dev/sda is used as the system disk and cannot be used for an OSD.
system host-disk-list controller-0
# Add disk as an OSD storage
system host-stor-add controller-0 osd <disk-uuid>
# List OSD storage devices
system host-stor-list controller-0
.. only:: starlingx
For Rook container-based Ceph:
#. Add the Rook container-based Ceph backend:
::
system storage-backend-add ceph-rook --confirmed
#. Assign Rook host labels to controller-0 in support of installing the
rook-ceph-apps manifest/helm-charts later:
::
system host-label-assign controller-0 ceph-mon-placement=enabled
system host-label-assign controller-0 ceph-mgr-placement=enabled
.. only:: openstack
*************************************
@ -370,7 +246,7 @@ For host-based Ceph:
.. important::
**This step is required only if the StarlingX OpenStack application
**These steps are required only if the StarlingX OpenStack application
(stx-openstack) will be installed.**
#. **For OpenStack only:** Assign OpenStack host labels to controller-0 in
@ -472,6 +348,175 @@ For host-based Ceph:
system host-pv-add ${NODE} nova-local ${NOVA_PARTITION_UUID}
sleep 2
#. **For OpenStack only:** Configure data interfaces for controller-0.
Data class interfaces are vswitch interfaces used by vswitch to provide
VM virtio vNIC connectivity to OpenStack Neutron Tenant Networks on the
underlying assigned Data Network.
.. important::
A compute-labeled worker host **MUST** have at least one Data class interface.
* Configure the data interfaces for controller-0.
::
export NODE=controller-0
# List the inventoried host's ports and identify ports to be used as data interfaces,
# based on the displayed Linux port name, PCI address and device type.
system host-port-list ${NODE}
# List the host's auto-configured ethernet interfaces,
# find the interfaces corresponding to the ports identified in the previous step, and
# take note of their UUIDs
system host-if-list -a ${NODE}
# Modify the configuration of these interfaces,
# configuring them as data class interfaces, with an MTU of 1500 and named data#
system host-if-modify -m 1500 -n data0 -c data ${NODE} <data0-if-uuid>
system host-if-modify -m 1500 -n data1 -c data ${NODE} <data1-if-uuid>
# Create Data Networks that vswitch 'data' interfaces will be connected to
DATANET0='datanet0'
DATANET1='datanet1'
system datanetwork-add ${DATANET0} vlan
system datanetwork-add ${DATANET1} vlan
# Assign Data Networks to Data Interfaces
system interface-datanetwork-assign ${NODE} <data0-if-uuid> ${DATANET0}
system interface-datanetwork-assign ${NODE} <data1-if-uuid> ${DATANET1}
*****************************************
Optionally Configure PCI-SRIOV Interfaces
*****************************************
#. **Optionally**, configure pci-sriov interfaces for controller-0.
This step is **optional** for Kubernetes. Do this step if using |SRIOV|
network attachments in hosted application containers.
.. only:: openstack
This step is **optional** for OpenStack. Do this step if using |SRIOV|
vNICs in hosted application VMs. Note that pci-sriov interfaces can
have the same Data Networks assigned to them as vswitch data interfaces.
* Configure the pci-sriov interfaces for controller-0.
::
export NODE=controller-0
# List the inventoried host's ports and identify ports to be used as pci-sriov interfaces,
# based on the displayed Linux port name, PCI address and device type.
system host-port-list ${NODE}
# List the host's auto-configured ethernet interfaces,
# find the interfaces corresponding to the ports identified in the previous step, and
# take note of their UUIDs
system host-if-list -a ${NODE}
# Modify the configuration of these interfaces,
# configuring them as pci-sriov class interfaces, with an MTU of 1500 and named sriov#
system host-if-modify -m 1500 -n sriov0 -c pci-sriov ${NODE} <sriov0-if-uuid>
system host-if-modify -m 1500 -n sriov1 -c pci-sriov ${NODE} <sriov1-if-uuid>
# Create Data Networks that the 'pci-sriov' interfaces will be connected to
DATANET0='datanet0'
DATANET1='datanet1'
system datanetwork-add ${DATANET0} vlan
system datanetwork-add ${DATANET1} vlan
# Assign Data Networks to PCI-SRIOV Interfaces
system interface-datanetwork-assign ${NODE} <sriov0-if-uuid> ${DATANET0}
system interface-datanetwork-assign ${NODE} <sriov1-if-uuid> ${DATANET1}
* To enable using |SRIOV| network attachments for the above interfaces in
Kubernetes hosted application containers:
* Configure the Kubernetes |SRIOV| device plugin.
::
system host-label-assign controller-0 sriovdp=enabled
* If planning on running |DPDK| in Kubernetes hosted application
containers on this host, configure the number of 1G Huge pages required
on both |NUMA| nodes.
::
# assign 10x 1G huge page on processor/numa-node 0 on controller-0 to applications
system host-memory-modify -f application controller-0 0 -1G 10
# assign 10x 1G huge page on processor/numa-node 1 on controller-0 to applications
system host-memory-modify -f application controller-0 1 -1G 10
***************************************************************
If required, initialize a Ceph-based Persistent Storage Backend
***************************************************************
A persistent storage backend is required if your application requires
|PVCs|.
.. only:: openstack
.. important::
The StarlingX OpenStack application **requires** |PVCs|.
.. only:: starlingx
There are two options for persistent storage backend: the host-based Ceph
solution and the Rook container-based Ceph solution.
For host-based Ceph:
#. Add the host-based Ceph backend:
::
system storage-backend-add ceph --confirmed
#. Add an |OSD| on controller-0 for host-based Ceph:
.. code-block:: bash
# List the host's disks and identify the disks you want to use for Ceph OSDs, taking note of their UUIDs
# By default, /dev/sda is used as the system disk and cannot be used for an OSD.
system host-disk-list controller-0
# Add disk as an OSD storage
system host-stor-add controller-0 osd <disk-uuid>
# List OSD storage devices
system host-stor-list controller-0
.. only:: starlingx
For Rook container-based Ceph:
#. Add the Rook container-based Ceph backend:
::
system storage-backend-add ceph-rook --confirmed
#. Assign Rook host labels to controller-0 in support of installing the
rook-ceph-apps manifest/helm-charts later:
::
system host-label-assign controller-0 ceph-mon-placement=enabled
system host-label-assign controller-0 ceph-mgr-placement=enabled
.. incl-config-controller-0-openstack-specific-aio-simplex-end:

View File

@ -258,16 +258,15 @@ Configure controller-0
system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org
#. Configure Ceph storage backend:
#. If required, configure Ceph storage backend:
This step is required only if your application requires persistent storage.
A persistent storage backend is required if your application requires |PVCs|.
.. only:: starlingx
.. only:: openstack
.. important::
**If you want to install the StarlingX Openstack application
(stx-openstack), this step is mandatory.**
The StarlingX OpenStack application **requires** |PVCs|.
::
@ -281,7 +280,7 @@ Configure controller-0
.. important::
**This step is required only if the StarlingX OpenStack application
**These steps are required only if the StarlingX OpenStack application
(stx-openstack) will be installed.**
#. **For OpenStack only:** Assign OpenStack host labels to controller-0 in
@ -523,19 +522,113 @@ Configure worker nodes
system interface-network-assign $NODE mgmt0 cluster-host
done
#. Configure data interfaces for worker nodes. Use the DATA port names, for
example eth0, that are applicable to your deployment environment.
.. only:: openstack
This step is optional for Kubernetes. Do this step if using |SRIOV| network
attachments in hosted application containers.
.. only:: starlingx
*************************************
OpenStack-specific host configuration
*************************************
.. important::
This step is **required** for OpenStack.
**These steps are required only if the StarlingX OpenStack application
(stx-openstack) will be installed.**
* Configure the data interfaces
#. **For OpenStack only:** Assign OpenStack host labels to the worker nodes in
support of installing the stx-openstack manifest and helm-charts later.
::
for NODE in worker-0 worker-1; do
system host-label-assign $NODE openstack-compute-node=enabled
system host-label-assign $NODE openvswitch=enabled
system host-label-assign $NODE sriov=enabled
done
#. **For OpenStack only:** Configure the host settings for the vSwitch.
**If using OVS-DPDK vswitch, run the following commands:**
The default recommendation for worker nodes is to use a single core on each
|NUMA| node for the |OVS|-|DPDK| vswitch. This should have been configured
automatically; if it was not, run the following commands.
::
for NODE in worker-0 worker-1; do
# assign 1 core on processor/numa-node 0 on worker-node to vswitch
system host-cpu-modify -f vswitch -p0 1 $NODE
# assign 1 core on processor/numa-node 1 on worker-node to vswitch
system host-cpu-modify -f vswitch -p1 1 $NODE
done
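# A hedged verification: confirm the 'vswitch' assigned function on each
# host's cores (output columns vary by release)
for NODE in worker-0 worker-1; do
system host-cpu-list $NODE
done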
When using |OVS|-|DPDK|, configure 1x 1G huge page for vSwitch memory on
each |NUMA| node where vswitch is running on this host, with the
following command:
::
for NODE in worker-0 worker-1; do
# assign 1x 1G huge page on processor/numa-node 0 on worker-node to vswitch
system host-memory-modify -f vswitch -1G 1 $NODE 0
# assign 1x 1G huge page on processor/numa-node 1 on worker-node to vswitch
system host-memory-modify -f vswitch -1G 1 $NODE 1
done
.. important::
|VMs| created in an |OVS|-|DPDK| environment must be configured to use
huge pages to enable networking, and must use a flavor with the property
hw:mem_page_size=large (see the example at the end of this step's commands).
Configure the huge pages for |VMs| in an |OVS|-|DPDK| environment for
this host with the command:
::
for NODE in worker-0 worker-1; do
# assign 10x 1G huge page on processor/numa-node 0 on worker-node to applications
system host-memory-modify -f application -1G 10 $NODE 0
# assign 10x 1G huge page on processor/numa-node 1 on worker-node to applications
system host-memory-modify -f application -1G 10 $NODE 1
done
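# Illustrative reminder for the flavor property noted above: once
# stx-openstack is running, it is set with the OpenStack client, e.g.
#   openstack flavor set m1.small --property hw:mem_page_size=large
# (the flavor name here is an example only)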
#. **For OpenStack only:** Set up a disk partition for the nova-local volume
group, which is needed for stx-openstack nova ephemeral disks.
::
for NODE in worker-0 worker-1; do
echo "Configuring Nova local for: $NODE"
ROOT_DISK=$(system host-show ${NODE} | grep rootfs | awk '{print $4}')
ROOT_DISK_UUID=$(system host-disk-list ${NODE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
PARTITION_SIZE=10
NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${NODE} ${ROOT_DISK_UUID} ${PARTITION_SIZE})
NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
system host-lvg-add ${NODE} nova-local
system host-pv-add ${NODE} nova-local ${NOVA_PARTITION_UUID}
done
#. **For OpenStack only:** Configure data interfaces for worker nodes.
Data class interfaces are vswitch interfaces used by vswitch to provide
VM virtio vNIC connectivity to OpenStack Neutron Tenant Networks on the
underlying assigned Data Network.
.. important::
A compute-labeled worker host **MUST** have at least one Data class interface.
* Configure the data interfaces for worker nodes.
::
@ -558,29 +651,80 @@ Configure worker nodes
system host-if-modify -m 1500 -n data0 -c data ${NODE} <data0-if-uuid>
system host-if-modify -m 1500 -n data1 -c data ${NODE} <data1-if-uuid>
# Create Data Networks
PHYSNET0='physnet0'
PHYSNET1='physnet1'
system datanetwork-add ${PHYSNET0} vlan
system datanetwork-add ${PHYSNET1} vlan
# Create Data Networks that vswitch 'data' interfaces will be connected to
DATANET0='datanet0'
DATANET1='datanet1'
system datanetwork-add ${DATANET0} vlan
system datanetwork-add ${DATANET1} vlan
# Assign Data Networks to Data Interfaces
system interface-datanetwork-assign ${NODE} <data0-if-uuid> ${PHYSNET0}
system interface-datanetwork-assign ${NODE} <data1-if-uuid> ${PHYSNET1}
system interface-datanetwork-assign ${NODE} <data0-if-uuid> ${DATANET0}
system interface-datanetwork-assign ${NODE} <data1-if-uuid> ${DATANET1}
*****************************************
Optionally Configure PCI-SRIOV Interfaces
*****************************************
#. **Optionally**, configure pci-sriov interfaces for worker nodes.
This step is **optional** for Kubernetes. Do this step if using |SRIOV|
network attachments in hosted application containers.
.. only:: openstack
This step is **optional** for OpenStack. Do this step if using |SRIOV|
vNICs in hosted application VMs. Note that pci-sriov interfaces can
have the same Data Networks assigned to them as vswitch data interfaces.
* Configure the pci-sriov interfaces for worker nodes.
::
# Execute the following lines with
export NODE=worker-0
# and then repeat with
export NODE=worker-1
# List the inventoried host's ports and identify ports to be used as pci-sriov interfaces,
# based on the displayed Linux port name, PCI address and device type.
system host-port-list ${NODE}
# List the host's auto-configured ethernet interfaces,
# find the interfaces corresponding to the ports identified in the previous step, and
# take note of their UUIDs
system host-if-list -a ${NODE}
# Modify the configuration of these interfaces,
# configuring them as pci-sriov class interfaces, with an MTU of 1500 and named sriov#
system host-if-modify -m 1500 -n sriov0 -c pci-sriov ${NODE} <sriov0-if-uuid>
system host-if-modify -m 1500 -n sriov1 -c pci-sriov ${NODE} <sriov1-if-uuid>
# Create Data Networks that the 'pci-sriov' interfaces will be connected to
DATANET0='datanet0'
DATANET1='datanet1'
system datanetwork-add ${DATANET0} vlan
system datanetwork-add ${DATANET1} vlan
# Assign Data Networks to PCI-SRIOV Interfaces
system interface-datanetwork-assign ${NODE} <sriov0-if-uuid> ${DATANET0}
system interface-datanetwork-assign ${NODE} <sriov1-if-uuid> ${DATANET1}
* To enable using |SRIOV| network attachments for the above interfaces in
Kubernetes hosted application containers:
* Configure |SRIOV| device plug in:
* Configure the Kubernetes |SRIOV| device plugin.
::
for NODE in worker-0 worker-1; do
system host-label-assign ${NODE} sriovdp=enabled
system host-label-assign $NODE sriovdp=enabled
done
* If planning on running DPDK in containers on this host, configure the number
of 1G Huge pages required on both |NUMA| nodes:
* If planning on running |DPDK| in Kubernetes hosted application
containers on this host, configure the number of 1G Huge pages required
on both |NUMA| nodes.
::
@ -595,102 +739,6 @@ Configure worker nodes
done
.. only:: openstack
*************************************
OpenStack-specific host configuration
*************************************
.. important::
**This step is required only if the StarlingX OpenStack application
(stx-openstack) will be installed.**
#. **For OpenStack only:** Assign OpenStack host labels to the worker nodes in
support of installing the stx-openstack manifest and helm-charts later.
::
for NODE in worker-0 worker-1; do
system host-label-assign $NODE openstack-compute-node=enabled
system host-label-assign $NODE openvswitch=enabled
system host-label-assign $NODE sriov=enabled
done
#. **For OpenStack only:** Configure the host settings for the vSwitch.
**If using OVS-DPDK vswitch, run the following commands:**
The default recommendation for worker nodes is to use a single core on each
|NUMA| node for the |OVS|-|DPDK| vswitch. This should have been configured
automatically; if it was not, run the following commands.
::
for NODE in worker-0 worker-1; do
# assign 1 core on processor/numa-node 0 on worker-node to vswitch
system host-cpu-modify -f vswitch -p0 1 $NODE
# assign 1 core on processor/numa-node 1 on worker-node to vswitch
system host-cpu-modify -f vswitch -p1 1 $NODE
done
When using |OVS|-|DPDK|, configure 1x 1G huge page for vSwitch memory on each |NUMA| node
where vswitch is running on this host, with the following command:
::
for NODE in worker-0 worker-1; do
# assign 1x 1G huge page on processor/numa-node 0 on worker-node to vswitch
system host-memory-modify -f vswitch -1G 1 $NODE 0
# assign 1x 1G huge page on processor/numa-node 1 on worker-node to vswitch
system host-memory-modify -f vswitch -1G 1 $NODE 1
done
.. important::
|VMs| created in an |OVS|-|DPDK| environment must be configured to use
huge pages to enable networking and must use a flavor with property:
hw:mem_page_size=large
Configure the huge pages for |VMs| in an |OVS|-|DPDK| environment for this host with
the command:
::
for NODE in worker-0 worker-1; do
# assign 10x 1G huge page on processor/numa-node 0 on worker-node to applications
system host-memory-modify -f application -1G 10 $NODE 0
# assign 10x 1G huge page on processor/numa-node 1 on worker-node to applications
system host-memory-modify -f application -1G 10 $NODE 1
done
#. **For OpenStack only:** Set up disk partition for nova-local volume group,
which is needed for stx-openstack nova ephemeral disks.
::
for NODE in worker-0 worker-1; do
echo "Configuring Nova local for: $NODE"
ROOT_DISK=$(system host-show ${NODE} | grep rootfs | awk '{print $4}')
ROOT_DISK_UUID=$(system host-disk-list ${NODE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
PARTITION_SIZE=10
NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${NODE} ${ROOT_DISK_UUID} ${PARTITION_SIZE})
NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
system host-lvg-add ${NODE} nova-local
system host-pv-add ${NODE} nova-local ${NOVA_PARTITION_UUID}
done
--------------------
Unlock worker nodes
--------------------
@ -706,49 +754,41 @@ Unlock worker nodes in order to bring them into service:
The worker nodes will reboot in order to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host machine.
----------------------------
Add Ceph OSDs to controllers
----------------------------
--------------------------------------------------------------------
If configuring a Ceph storage backend, add Ceph OSDs to controllers
--------------------------------------------------------------------
#. Add |OSDs| to controller-0:
.. important::
This step requires a configured Ceph storage backend.
::
HOST=controller-0
DISKS=$(system host-disk-list ${HOST})
TIERS=$(system storage-tier-list ceph_cluster)
OSDs="/dev/sdb"
for OSD in $OSDs; do
system host-stor-add ${HOST} $(echo "$DISKS" | grep "$OSD" | awk '{print $2}') --tier-uuid $(echo "$TIERS" | grep storage | awk '{print $2}')
while true; do system host-stor-list ${HOST} | grep ${OSD} | grep configuring; if [ $? -ne 0 ]; then break; fi; sleep 1; done
done
system host-stor-list $HOST
# List the host's disks and identify the disks you want to use for Ceph OSDs, taking note of their UUIDs
# By default, /dev/sda is used as the system disk and cannot be used for an OSD.
system host-disk-list ${HOST}
# Add disk as an OSD storage
system host-stor-add ${HOST} osd <disk-uuid>
# List OSD storage devices and wait for configuration of newly added OSD to complete.
system host-stor-list ${HOST}
#. Add |OSDs| to controller-1:
.. important::
This step requires a configured Ceph storage backend.
::
HOST=controller-1
DISKS=$(system host-disk-list ${HOST})
TIERS=$(system storage-tier-list ceph_cluster)
OSDs="/dev/sdb"
for OSD in $OSDs; do
system host-stor-add ${HOST} $(echo "$DISKS" | grep "$OSD" | awk '{print $2}') --tier-uuid $(echo "$TIERS" | grep storage | awk '{print $2}')
while true; do system host-stor-list ${HOST} | grep ${OSD} | grep configuring; if [ $? -ne 0 ]; then break; fi; sleep 1; done
done
::
# List the host's disks and identify the disks you want to use for Ceph OSDs, taking note of their UUIDs
# By default, /dev/sda is used as the system disk and cannot be used for an OSD.
system host-disk-list ${HOST}
system host-stor-list $HOST
# Add disk as an OSD storage
system host-stor-add ${HOST} osd <disk-uuid>
# List OSD storage devices and wait for configuration of newly added OSD to complete.
system host-stor-list ${HOST}
.. only:: starlingx

View File

@ -245,10 +245,12 @@ host machine.
Configure worker nodes
----------------------
#. Assign the cluster-host network to the MGMT interface for the worker nodes:
#. The MGMT interfaces are partially set up by the network install procedure,
which configures the port used for network install as the MGMT port and
specifies the attached network of "mgmt".
(Note that the MGMT interfaces are partially set up automatically by the
network install procedure.)
Complete the MGMT interface configuration of the worker nodes by specifying
the attached network of "cluster-host".
::
@ -256,75 +258,7 @@ Configure worker nodes
system interface-network-assign $NODE mgmt0 cluster-host
done
#. Configure data interfaces for worker nodes. Use the DATA port names, for
example eth0, that are applicable to your deployment environment.
This step is optional for Kubernetes. Do this step if using |SRIOV| network
attachments in hosted application containers.
.. only:: starlingx
.. important::
This step is **required** for OpenStack.
* Configure the data interfaces.
::
# Execute the following lines with
export NODE=worker-0
# and then repeat with
export NODE=worker-1
# List the inventoried host's ports and identify ports to be used as data interfaces,
# based on the displayed Linux port name, PCI address and device type.
system host-port-list ${NODE}
# List the host's auto-configured ethernet interfaces,
# find the interfaces corresponding to the ports identified in the previous step, and
# take note of their UUIDs
system host-if-list -a ${NODE}
# Modify the configuration of these interfaces,
# configuring them as data class interfaces, with an MTU of 1500 and named data#
system host-if-modify -m 1500 -n data0 -c data ${NODE} <data0-if-uuid>
system host-if-modify -m 1500 -n data1 -c data ${NODE} <data1-if-uuid>
# Create Data Networks
PHYSNET0='physnet0'
PHYSNET1='physnet1'
system datanetwork-add ${PHYSNET0} vlan
system datanetwork-add ${PHYSNET1} vlan
# Assign Data Networks to Data Interfaces
system interface-datanetwork-assign ${NODE} <data0-if-uuid> ${PHYSNET0}
system interface-datanetwork-assign ${NODE} <data1-if-uuid> ${PHYSNET1}
* To enable using |SRIOV| network attachments for the above interfaces in
Kubernetes hosted application containers:
* Configure the |SRIOV| device plugin:
::
for NODE in worker-0 worker-1; do
system host-label-assign ${NODE} sriovdp=enabled
done
* If planning on running |DPDK| in containers on this host, configure the
number of 1G Huge pages required on both |NUMA| nodes:
::
for NODE in worker-0 worker-1; do
system host-memory-modify ${NODE} 0 -1G 100
system host-memory-modify ${NODE} 1 -1G 100
done
.. only:: starlingx
.. only:: openstack
*************************************
OpenStack-specific host configuration
@ -332,7 +266,7 @@ Configure worker nodes
.. important::
**This step is required only if the StarlingX OpenStack application
**These steps are required only if the StarlingX OpenStack application
(stx-openstack) will be installed.**
#. **For OpenStack only:** Assign OpenStack host labels to the worker nodes in
@ -405,8 +339,8 @@ Configure worker nodes
done
#. **For OpenStack only:** Set up disk partition for nova-local volume group,
which is needed for stx-openstack nova ephemeral disks.
#. **For OpenStack only:** Setup disk partition for nova-local volume group,
needed for stx-openstack nova ephemeral disks.
::
@ -421,6 +355,126 @@ Configure worker nodes
system host-pv-add ${NODE} nova-local ${NOVA_PARTITION_UUID}
done
#. **For OpenStack only:** Configure data interfaces for worker nodes.
Data class interfaces are vswitch interfaces used by vswitch to provide
VM virtio vNIC connectivity to OpenStack Neutron Tenant Networks on the
underlying assigned Data Network.
.. important::
A compute-labeled worker host **MUST** have at least one Data class interface.
* Configure the data interfaces for worker nodes.
::
# Execute the following lines with
export NODE=worker-0
# and then repeat with
export NODE=worker-1
# List the inventoried host's ports and identify ports to be used as data interfaces,
# based on the displayed Linux port name, PCI address and device type.
system host-port-list ${NODE}
# List the host's auto-configured ethernet interfaces,
# find the interfaces corresponding to the ports identified in the previous step, and
# take note of their UUIDs
system host-if-list -a ${NODE}
# Modify the configuration of these interfaces,
# configuring them as data class interfaces, with an MTU of 1500 and named data#
system host-if-modify -m 1500 -n data0 -c data ${NODE} <data0-if-uuid>
system host-if-modify -m 1500 -n data1 -c data ${NODE} <data1-if-uuid>
# Create Data Networks that vswitch 'data' interfaces will be connected to
DATANET0='datanet0'
DATANET1='datanet1'
system datanetwork-add ${DATANET0} vlan
system datanetwork-add ${DATANET1} vlan
# Assign Data Networks to Data Interfaces
system interface-datanetwork-assign ${NODE} <data0-if-uuid> ${DATANET0}
system interface-datanetwork-assign ${NODE} <data1-if-uuid> ${DATANET1}
*****************************************
Optionally Configure PCI-SRIOV Interfaces
*****************************************
#. **Optionally**, configure pci-sriov interfaces for worker nodes.
This step is **optional** for Kubernetes. Do this step if using |SRIOV|
network attachments in hosted application containers.
.. only:: openstack
This step is **optional** for OpenStack. Do this step if using |SRIOV|
vNICs in hosted application VMs. Note that pci-sriov interfaces can
have the same Data Networks assigned to them as vswitch data interfaces.
* Configure the pci-sriov interfaces for worker nodes.
::
# Execute the following lines with
export NODE=worker-0
# and then repeat with
export NODE=worker-1
# List the inventoried host's ports and identify ports to be used as pci-sriov interfaces,
# based on the displayed Linux port name, PCI address and device type.
system host-port-list ${NODE}
# List the host's auto-configured ethernet interfaces,
# find the interfaces corresponding to the ports identified in the previous step, and
# take note of their UUIDs
system host-if-list -a ${NODE}
# Modify the configuration of these interfaces,
# configuring them as pci-sriov class interfaces, with an MTU of 1500 and named sriov#
system host-if-modify -m 1500 -n sriov0 -c pci-sriov ${NODE} <sriov0-if-uuid>
system host-if-modify -m 1500 -n sriov1 -c pci-sriov ${NODE} <sriov1-if-uuid>
# Create Data Networks that the 'pci-sriov' interfaces will be connected to
DATANET0='datanet0'
DATANET1='datanet1'
system datanetwork-add ${DATANET0} vlan
system datanetwork-add ${DATANET1} vlan
# Assign Data Networks to PCI-SRIOV Interfaces
system interface-datanetwork-assign ${NODE} <sriov0-if-uuid> ${DATANET0}
system interface-datanetwork-assign ${NODE} <sriov1-if-uuid> ${DATANET1}
* To enable using |SRIOV| network attachments for the above interfaces in
Kubernetes hosted application containers:
* Configure the Kubernetes |SRIOV| device plugin.
::
for NODE in worker-0 worker-1; do
system host-label-assign $NODE sriovdp=enabled
done
* If planning on running |DPDK| in Kubernetes hosted application
containers on this host, configure the number of 1G Huge pages required
on both |NUMA| nodes.
::
for NODE in worker-0 worker-1; do
# assign 10x 1G huge page on processor/numa-node 0 on worker-node to applications
system host-memory-modify -f application $NODE 0 -1G 10
# assign 10x 1G huge page on processor/numa-node 1 on worker-node to applications
system host-memory-modify -f application $NODE 1 -1G 10
done
-------------------
Unlock worker nodes
-------------------