More install doc changes; data and sriov interfaces.
Lots of changes, mostly around configuration of data and pci-sriov interfaces.

Change-Id: Ib1ca186bd150c4c58d7ff3320d04dd4af25826c1
parent 04dd295ee9
commit c8e69817aa
@ -77,81 +77,7 @@ Configure worker nodes
|
|||||||
system interface-network-assign $NODE mgmt0 cluster-host
|
system interface-network-assign $NODE mgmt0 cluster-host
|
||||||
done
|
done
|
||||||
|
|
||||||
#. Configure data interfaces for worker nodes. Use the DATA port names, for
|
.. only:: openstack
|
||||||
example eth0, that are applicable to your deployment environment.
|
|
||||||
|
|
||||||
This step is optional for Kubernetes: Do this step if using |SRIOV| network
|
|
||||||
attachments in hosted application containers.
|
|
||||||
|
|
||||||
.. only:: starlingx
|
|
||||||
|
|
||||||
.. important::
|
|
||||||
|
|
||||||
This step is **required** for OpenStack.
|
|
||||||
|
|
||||||
|
|
||||||
* Configure the data interfaces
|
|
||||||
|
|
||||||
.. code-block:: bash
|
|
||||||
|
|
||||||
# Execute the following lines with
|
|
||||||
export NODE=worker-0
|
|
||||||
# and then repeat with
|
|
||||||
export NODE=worker-1
|
|
||||||
|
|
||||||
# List inventoried host’s ports and identify ports to be used as ‘data’ interfaces,
|
|
||||||
# based on displayed linux port name, pci address and device type.
|
|
||||||
system host-port-list ${NODE}
|
|
||||||
|
|
||||||
# List host’s auto-configured ‘ethernet’ interfaces,
|
|
||||||
# find the interfaces corresponding to the ports identified in previous step, and
|
|
||||||
# take note of their UUID
|
|
||||||
system host-if-list -a ${NODE}
|
|
||||||
|
|
||||||
# Modify configuration for these interfaces
|
|
||||||
# Configuring them as ‘data’ class interfaces, MTU of 1500 and named data#
|
|
||||||
system host-if-modify -m 1500 -n data0 -c data ${NODE} <data0-if-uuid>
|
|
||||||
system host-if-modify -m 1500 -n data1 -c data ${NODE} <data1-if-uuid>
|
|
||||||
|
|
||||||
# Previously configured Data Networks
|
|
||||||
PHYSNET0='physnet0'
|
|
||||||
PHYSNET1='physnet1'
|
|
||||||
system datanetwork-add ${PHYSNET0} vlan
|
|
||||||
system datanetwork-add ${PHYSNET1} vlan
|
|
||||||
|
|
||||||
# Assign Data Networks to Data Interfaces
|
|
||||||
system interface-datanetwork-assign ${NODE} <data0-if-uuid> ${PHYSNET0}
|
|
||||||
system interface-datanetwork-assign ${NODE} <data1-if-uuid> ${PHYSNET1}
|
|
||||||
|
|
||||||
|
|
||||||
* To enable using |SRIOV| network attachments for the above interfaces in
|
|
||||||
Kubernetes hosted application containers:
|
|
||||||
|
|
||||||
* Configure |SRIOV| device plug in:
|
|
||||||
|
|
||||||
::
|
|
||||||
|
|
||||||
for NODE in worker-0 worker-1; do
|
|
||||||
system host-label-assign $NODE sriovdp=enabled
|
|
||||||
done
|
|
||||||
|
|
||||||
* If planning on running |DPDK| in containers on this host, configure the
|
|
||||||
number of 1G Huge pages required on both |NUMA| nodes:
|
|
||||||
|
|
||||||
::
|
|
||||||
|
|
||||||
for NODE in worker-0 worker-1; do
|
|
||||||
|
|
||||||
# assign 10x 1G huge page on processor/numa-node 0 on worker-node to applications
|
|
||||||
system host-memory-modify -f application $NODE 0 -1G 10
|
|
||||||
|
|
||||||
# assign 10x 1G huge page on processor/numa-node 1 on worker-node to applications
|
|
||||||
system host-memory-modify -f application $NODE 1 -1G 10
|
|
||||||
|
|
||||||
done
|
|
||||||
|
|
||||||
|
|
||||||
.. only:: starlingx
|
|
||||||
|
|
||||||
*************************************
|
*************************************
|
||||||
OpenStack-specific host configuration
|
OpenStack-specific host configuration
|
||||||
@ -159,7 +85,7 @@ Configure worker nodes
|
|||||||
|
|
||||||
.. important::
|
.. important::
|
||||||
|
|
||||||
**This step is required only if the StarlingX OpenStack application
|
**These steps are required only if the StarlingX OpenStack application
|
||||||
(stx-openstack) will be installed.**
|
(stx-openstack) will be installed.**
|
||||||
|
|
||||||
#. **For OpenStack only:** Assign OpenStack host labels to the worker nodes in
|
#. **For OpenStack only:** Assign OpenStack host labels to the worker nodes in
|
||||||
@ -248,6 +174,125 @@ Configure worker nodes
|
|||||||
system host-pv-add ${NODE} nova-local ${NOVA_PARTITION_UUID}
|
system host-pv-add ${NODE} nova-local ${NOVA_PARTITION_UUID}
|
||||||
done
|
done
|
||||||
|
|
||||||
|
#. **For OpenStack only:** Configure data interfaces for worker nodes.
|
||||||
|
Data class interfaces are vswitch interfaces used by vswitch to provide
|
||||||
|
VM virtio vNIC connectivity to OpenStack Neutron Tenant Networks on the
|
||||||
|
underlying assigned Data Network.
|
||||||
|
|
||||||
|
.. important::
|
||||||
|
|
||||||
|
A compute-labeled worker host **MUST** have at least one Data class interface.
|
||||||
|
|
||||||
|
* Configure the data interfaces for worker nodes.
|
||||||
|
|
||||||
|
::
|
||||||
|
|
||||||
|
# Execute the following lines with
|
||||||
|
export NODE=worker-0
|
||||||
|
# and then repeat with
|
||||||
|
export NODE=worker-1
|
||||||
|
|
||||||
|
# List inventoried host’s ports and identify ports to be used as ‘data’ interfaces,
|
||||||
|
# based on displayed linux port name, pci address and device type.
|
||||||
|
system host-port-list ${NODE}
|
||||||
|
|
||||||
|
# List host’s auto-configured ‘ethernet’ interfaces,
|
||||||
|
# find the interfaces corresponding to the ports identified in previous step, and
|
||||||
|
# take note of their UUID
|
||||||
|
system host-if-list -a ${NODE}
|
||||||
|
|
||||||
|
# Modify configuration for these interfaces
|
||||||
|
# Configuring them as ‘data’ class interfaces, MTU of 1500 and named data#
|
||||||
|
system host-if-modify -m 1500 -n data0 -c data ${NODE} <data0-if-uuid>
|
||||||
|
system host-if-modify -m 1500 -n data1 -c data ${NODE} <data1-if-uuid>
|
||||||
|
|
||||||
|
# Create Data Networks that vswitch 'data' interfaces will be connected to
|
||||||
|
DATANET0='datanet0'
|
||||||
|
DATANET1='datanet1'
|
||||||
|
system datanetwork-add ${DATANET0} vlan
|
||||||
|
system datanetwork-add ${DATANET1} vlan
|
||||||
|
|
||||||
|
# Assign Data Networks to Data Interfaces
|
||||||
|
system interface-datanetwork-assign ${NODE} <data0-if-uuid> ${DATANET0}
|
||||||
|
system interface-datanetwork-assign ${NODE} <data1-if-uuid> ${DATANET1}
|
||||||
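If you want to double-check the result before continuing (an optional verification, not part of the original procedure, and assuming the standard StarlingX inventory listing commands are available in your release), the data network assignments can be listed per node:

::

   # Optional check: confirm the data networks exist and are assigned to the data interfaces
   system datanetwork-list
   system interface-datanetwork-list ${NODE}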
|
|
||||||
|
*****************************************
|
||||||
|
Optionally Configure PCI-SRIOV Interfaces
|
||||||
|
*****************************************
|
||||||
|
|
||||||
|
#. **Optionally**, configure pci-sriov interfaces for worker nodes.
|
||||||
|
|
||||||
|
This step is **optional** for Kubernetes. Do this step if using |SRIOV|
|
||||||
|
network attachments in hosted application containers.
|
||||||
|
|
||||||
|
.. only:: openstack
|
||||||
|
|
||||||
|
This step is **optional** for OpenStack. Do this step if using |SRIOV|
|
||||||
|
vNICs in hosted application VMs. Note that pci-sriov interfaces can
|
||||||
|
have the same Data Networks assigned to them as vswitch data interfaces.
|
||||||
|
|
||||||
|
|
||||||
|
* Configure the pci-sriov interfaces for worker nodes.
|
||||||
|
|
||||||
|
::
|
||||||
|
|
||||||
|
# Execute the following lines with
|
||||||
|
export NODE=worker-0
|
||||||
|
# and then repeat with
|
||||||
|
export NODE=worker-1
|
||||||
|
|
||||||
|
# List inventoried host’s ports and identify ports to be used as ‘pci-sriov’ interfaces,
|
||||||
|
# based on displayed linux port name, pci address and device type.
|
||||||
|
system host-port-list ${NODE}
|
||||||
|
|
||||||
|
# List host’s auto-configured ‘ethernet’ interfaces,
|
||||||
|
# find the interfaces corresponding to the ports identified in previous step, and
|
||||||
|
# take note of their UUID
|
||||||
|
system host-if-list -a ${NODE}
|
||||||
|
|
||||||
|
# Modify configuration for these interfaces
|
||||||
|
# Configuring them as ‘pci-sriov’ class interfaces, MTU of 1500 and named sriov#
|
||||||
|
system host-if-modify -m 1500 -n sriov0 -c pci-sriov ${NODE} <sriov0-if-uuid>
|
||||||
|
system host-if-modify -m 1500 -n sriov1 -c pci-sriov ${NODE} <sriov1-if-uuid>
|
||||||
|
|
||||||
|
# Create Data Networks that the 'pci-sriov' interfaces will be connected to
|
||||||
|
DATANET0='datanet0'
|
||||||
|
DATANET1='datanet1'
|
||||||
|
system datanetwork-add ${DATANET0} vlan
|
||||||
|
system datanetwork-add ${DATANET1} vlan
|
||||||
|
|
||||||
|
# Assign Data Networks to PCI-SRIOV Interfaces
|
||||||
|
system interface-datanetwork-assign ${NODE} <sriov0-if-uuid> ${DATANET0}
|
||||||
|
system interface-datanetwork-assign ${NODE} <sriov1-if-uuid> ${DATANET1}
|
||||||
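As an optional check (assuming the system host-if-show command is available in your release), you can confirm that the interfaces were reclassified as pci-sriov with the expected name and MTU:

::

   # Optional check: the 'class' field should now show pci-sriov for each interface
   system host-if-show ${NODE} sriov0
   system host-if-show ${NODE} sriov1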
|
|
||||||
|
|
||||||
|
* To enable using |SRIOV| network attachments for the above interfaces in
|
||||||
|
Kubernetes hosted application containers:
|
||||||
|
|
||||||
|
* Configure the Kubernetes |SRIOV| device plugin.
|
||||||
|
|
||||||
|
::
|
||||||
|
|
||||||
|
for NODE in worker-0 worker-1; do
|
||||||
|
system host-label-assign $NODE sriovdp=enabled
|
||||||
|
done
|
||||||
|
|
||||||
|
* If planning on running |DPDK| in Kubernetes hosted application
|
||||||
|
containers on this host, configure the number of 1G Huge pages required
|
||||||
|
on both |NUMA| nodes.
|
||||||
|
|
||||||
|
::
|
||||||
|
|
||||||
|
for NODE in worker-0 worker-1; do
|
||||||
|
|
||||||
|
# assign 10x 1G huge page on processor/numa-node 0 on worker-node to applications
|
||||||
|
system host-memory-modify -f application $NODE 0 -1G 10
|
||||||
|
|
||||||
|
# assign 10x 1G huge page on processor/numa-node 1 on worker-node to applications
|
||||||
|
system host-memory-modify -f application $NODE 1 -1G 10
|
||||||
|
|
||||||
|
done
|
||||||
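Optionally, you can review the resulting huge page configuration before unlocking (a verification sketch added here for convenience; it assumes system host-memory-list is available in your release, and the 1G pages take effect when the hosts are unlocked):

::

   # Optional check: review the requested 1G huge pages per NUMA node
   for NODE in worker-0 worker-1; do
       system host-memory-list $NODE
   done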
|
|
||||||
|
|
||||||
-------------------
|
-------------------
|
||||||
Unlock worker nodes
|
Unlock worker nodes
|
||||||
|
@ -151,20 +151,20 @@ Bootstrap system on controller-0
|
|||||||
.. code-block::
|
.. code-block::
|
||||||
|
|
||||||
docker_registries:
|
docker_registries:
|
||||||
quay.io:
|
quay.io:
|
||||||
url: myprivateregistry.abc.com:9001/quay.io
|
url: myprivateregistry.abc.com:9001/quay.io
|
||||||
docker.elastic.co:
|
docker.elastic.co:
|
||||||
url: myprivateregistry.abc.com:9001/docker.elastic.co
|
url: myprivateregistry.abc.com:9001/docker.elastic.co
|
||||||
gcr.io:
|
gcr.io:
|
||||||
url: myprivateregistry.abc.com:9001/gcr.io
|
url: myprivateregistry.abc.com:9001/gcr.io
|
||||||
k8s.gcr.io:
|
k8s.gcr.io:
|
||||||
url: myprivateregistry.abc.com:9001/k8s.gcr.io
|
url: myprivateregistry.abc.com:9001/k8s.gcr.io
|
||||||
docker.io:
|
docker.io:
|
||||||
url: myprivateregistry.abc.com:9001/docker.io
|
url: myprivateregistry.abc.com:9001/docker.io
|
||||||
defaults:
|
defaults:
|
||||||
type: docker
|
type: docker
|
||||||
username: <your_myprivateregistry.abc.com_username>
|
username: <your_myprivateregistry.abc.com_username>
|
||||||
password: <your_myprivateregistry.abc.com_password>
|
password: <your_myprivateregistry.abc.com_password>
|
||||||
|
|
||||||
# Add the CA Certificate that signed myprivateregistry.abc.com’s
|
# Add the CA Certificate that signed myprivateregistry.abc.com’s
|
||||||
# certificate as a Trusted CA
|
# certificate as a Trusted CA
|
||||||
@ -255,130 +255,6 @@ Configure controller-0
|
|||||||
|
|
||||||
system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org
|
system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org
|
||||||
|
|
||||||
#. Configure data interfaces for controller-0. Use the DATA port names, for example
|
|
||||||
eth0, applicable to your deployment environment.
|
|
||||||
|
|
||||||
This step is optional for Kubernetes: Do this step if using |SRIOV| network
|
|
||||||
attachments in hosted application containers.
|
|
||||||
|
|
||||||
.. only:: starlingx
|
|
||||||
|
|
||||||
.. important::
|
|
||||||
|
|
||||||
This step is **required** for OpenStack.
|
|
||||||
|
|
||||||
|
|
||||||
* Configure the data interfaces
|
|
||||||
|
|
||||||
::
|
|
||||||
|
|
||||||
export NODE=controller-0
|
|
||||||
|
|
||||||
# List inventoried host’s ports and identify ports to be used as ‘data’ interfaces,
|
|
||||||
# based on displayed linux port name, pci address and device type.
|
|
||||||
system host-port-list ${NODE}
|
|
||||||
|
|
||||||
# List host’s auto-configured ‘ethernet’ interfaces,
|
|
||||||
# find the interfaces corresponding to the ports identified in previous step, and
|
|
||||||
# take note of their UUID
|
|
||||||
system host-if-list -a ${NODE}
|
|
||||||
|
|
||||||
# Modify configuration for these interfaces
|
|
||||||
# Configuring them as ‘data’ class interfaces, MTU of 1500 and named data#
|
|
||||||
system host-if-modify -m 1500 -n data0 -c data ${NODE} <data0-if-uuid>
|
|
||||||
system host-if-modify -m 1500 -n data1 -c data ${NODE} <data1-if-uuid>
|
|
||||||
|
|
||||||
# Create Data Networks
|
|
||||||
PHYSNET0='physnet0'
|
|
||||||
PHYSNET1='physnet1'
|
|
||||||
system datanetwork-add ${PHYSNET0} vlan
|
|
||||||
system datanetwork-add ${PHYSNET1} vlan
|
|
||||||
|
|
||||||
# Assign Data Networks to Data Interfaces
|
|
||||||
system interface-datanetwork-assign ${NODE} <data0-if-uuid> ${PHYSNET0}
|
|
||||||
system interface-datanetwork-assign ${NODE} <data1-if-uuid> ${PHYSNET1}
|
|
||||||
|
|
||||||
* To enable using |SRIOV| network attachments for the above interfaces in
|
|
||||||
Kubernetes hosted application containers:
|
|
||||||
|
|
||||||
* Configure the Kubernetes |SRIOV| device plugin.
|
|
||||||
|
|
||||||
::
|
|
||||||
|
|
||||||
system host-label-assign controller-0 sriovdp=enabled
|
|
||||||
|
|
||||||
* If planning on running |DPDK| in Kubernetes hosted application containers
|
|
||||||
on this host, configure the number of 1G Huge pages required on both
|
|
||||||
|NUMA| nodes.
|
|
||||||
|
|
||||||
::
|
|
||||||
|
|
||||||
# assign 10x 1G huge page on processor/numa-node 0 on controller-0 to applications
|
|
||||||
system host-memory-modify -f application controller-0 0 -1G 10
|
|
||||||
|
|
||||||
# assign 10x 1G huge page on processor/numa-node 1 on controller-0 to applications
|
|
||||||
system host-memory-modify -f application controller-0 1 -1G 10
|
|
||||||
|
|
||||||
|
|
||||||
***************************************************************
|
|
||||||
If required, initialize a Ceph-based Persistent Storage Backend
|
|
||||||
***************************************************************
|
|
||||||
|
|
||||||
A persistent storage backend is required if your application requires |PVCs|.
|
|
||||||
|
|
||||||
.. only:: starlingx
|
|
||||||
|
|
||||||
.. important::
|
|
||||||
|
|
||||||
The StarlingX OpenStack application **requires** |PVCs|.
|
|
||||||
|
|
||||||
There are two options for persistent storage backend: the host-based Ceph
|
|
||||||
solution and the Rook container-based Ceph solution.
|
|
||||||
|
|
||||||
For host-based Ceph:
|
|
||||||
|
|
||||||
#. Initialize with add ceph backend:
|
|
||||||
|
|
||||||
::
|
|
||||||
|
|
||||||
system storage-backend-add ceph --confirmed
|
|
||||||
|
|
||||||
#. Add an |OSD| on controller-0 for host-based Ceph:
|
|
||||||
|
|
||||||
.. code-block:: bash
|
|
||||||
|
|
||||||
# List host’s disks and identify disks you want to use for CEPH OSDs, taking note of their UUID
|
|
||||||
# By default, /dev/sda is being used as system disk and can not be used for OSD.
|
|
||||||
system host-disk-list controller-0
|
|
||||||
|
|
||||||
# Add disk as an OSD storage
|
|
||||||
system host-stor-add controller-0 osd <disk-uuid>
|
|
||||||
|
|
||||||
# List OSD storage devices
|
|
||||||
system host-stor-list controller-0
|
|
||||||
|
|
||||||
|
|
||||||
# Add disk as an OSD storage
|
|
||||||
system host-stor-add controller-0 osd <disk-uuid>
|
|
||||||
|
|
||||||
.. only:: starlingx
|
|
||||||
|
|
||||||
For Rook container-based Ceph:
|
|
||||||
|
|
||||||
#. Initialize with add ceph-rook backend:
|
|
||||||
|
|
||||||
::
|
|
||||||
|
|
||||||
system storage-backend-add ceph-rook --confirmed
|
|
||||||
|
|
||||||
#. Assign Rook host labels to controller-0 in support of installing the
|
|
||||||
rook-ceph-apps manifest/helm-charts later:
|
|
||||||
|
|
||||||
::
|
|
||||||
|
|
||||||
system host-label-assign controller-0 ceph-mon-placement=enabled
|
|
||||||
system host-label-assign controller-0 ceph-mgr-placement=enabled
|
|
||||||
|
|
||||||
.. only:: openstack
|
.. only:: openstack
|
||||||
|
|
||||||
*************************************
|
*************************************
|
||||||
@ -387,7 +263,7 @@ For host-based Ceph:
|
|||||||
|
|
||||||
.. important::
|
.. important::
|
||||||
|
|
||||||
**This step is required only if the StarlingX OpenStack application
|
**These steps are required only if the StarlingX OpenStack application
|
||||||
(stx-openstack) will be installed.**
|
(stx-openstack) will be installed.**
|
||||||
|
|
||||||
#. **For OpenStack only:** Assign OpenStack host labels to controller-0 in
|
#. **For OpenStack only:** Assign OpenStack host labels to controller-0 in
|
||||||
@ -498,6 +374,174 @@ For host-based Ceph:
|
|||||||
system host-pv-add ${NODE} nova-local ${NOVA_PARTITION_UUID}
|
system host-pv-add ${NODE} nova-local ${NOVA_PARTITION_UUID}
|
||||||
sleep 2
|
sleep 2
|
||||||
|
|
||||||
|
#. **For OpenStack only:** Configure data interfaces for controller-0.
|
||||||
|
Data class interfaces are vswitch interfaces used by vswitch to provide
|
||||||
|
VM virtio vNIC connectivity to OpenStack Neutron Tenant Networks on the
|
||||||
|
underlying assigned Data Network.
|
||||||
|
|
||||||
|
.. important::
|
||||||
|
|
||||||
|
A compute-labeled All-in-one controller host **MUST** have at least one Data class interface.
|
||||||
|
|
||||||
|
* Configure the data interfaces for controller-0.
|
||||||
|
|
||||||
|
::
|
||||||
|
|
||||||
|
export NODE=controller-0
|
||||||
|
|
||||||
|
# List inventoried host’s ports and identify ports to be used as ‘data’ interfaces,
|
||||||
|
# based on displayed linux port name, pci address and device type.
|
||||||
|
system host-port-list ${NODE}
|
||||||
|
|
||||||
|
# List host’s auto-configured ‘ethernet’ interfaces,
|
||||||
|
# find the interfaces corresponding to the ports identified in previous step, and
|
||||||
|
# take note of their UUID
|
||||||
|
system host-if-list -a ${NODE}
|
||||||
|
|
||||||
|
# Modify configuration for these interfaces
|
||||||
|
# Configuring them as ‘data’ class interfaces, MTU of 1500 and named data#
|
||||||
|
system host-if-modify -m 1500 -n data0 -c data ${NODE} <data0-if-uuid>
|
||||||
|
system host-if-modify -m 1500 -n data1 -c data ${NODE} <data1-if-uuid>
|
||||||
|
|
||||||
|
# Create Data Networks that vswitch 'data' interfaces will be connected to
|
||||||
|
DATANET0='datanet0'
|
||||||
|
DATANET1='datanet1'
|
||||||
|
system datanetwork-add ${DATANET0} vlan
|
||||||
|
system datanetwork-add ${DATANET1} vlan
|
||||||
|
|
||||||
|
# Assign Data Networks to Data Interfaces
|
||||||
|
system interface-datanetwork-assign ${NODE} <data0-if-uuid> ${DATANET0}
|
||||||
|
system interface-datanetwork-assign ${NODE} <data1-if-uuid> ${DATANET1}
|
||||||
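If desired, confirm the interface changes before moving on (an optional check, assuming the system host-if-show command exists in your release):

::

   # Optional check: class should be 'data', MTU 1500, names data0/data1
   system host-if-show ${NODE} data0
   system host-if-show ${NODE} data1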
|
|
||||||
|
*****************************************
|
||||||
|
Optionally Configure PCI-SRIOV Interfaces
|
||||||
|
*****************************************
|
||||||
|
|
||||||
|
#. **Optionally**, configure pci-sriov interfaces for controller-0.
|
||||||
|
|
||||||
|
This step is **optional** for Kubernetes. Do this step if using |SRIOV|
|
||||||
|
network attachments in hosted application containers.
|
||||||
|
|
||||||
|
.. only:: openstack
|
||||||
|
|
||||||
|
This step is **optional** for OpenStack. Do this step if using |SRIOV|
|
||||||
|
vNICs in hosted application VMs. Note that pci-sriov interfaces can
|
||||||
|
have the same Data Networks assigned to them as vswitch data interfaces.
|
||||||
|
|
||||||
|
|
||||||
|
* Configure the pci-sriov interfaces for controller-0.
|
||||||
|
|
||||||
|
::
|
||||||
|
|
||||||
|
export NODE=controller-0
|
||||||
|
|
||||||
|
# List inventoried host’s ports and identify ports to be used as ‘pci-sriov’ interfaces,
|
||||||
|
# based on displayed linux port name, pci address and device type.
|
||||||
|
system host-port-list ${NODE}
|
||||||
|
|
||||||
|
# List host’s auto-configured ‘ethernet’ interfaces,
|
||||||
|
# find the interfaces corresponding to the ports identified in previous step, and
|
||||||
|
# take note of their UUID
|
||||||
|
system host-if-list -a ${NODE}
|
||||||
|
|
||||||
|
# Modify configuration for these interfaces
|
||||||
|
# Configuring them as ‘pci-sriov’ class interfaces, MTU of 1500 and named sriov#
|
||||||
|
system host-if-modify -m 1500 -n sriov0 -c pci-sriov ${NODE} <sriov0-if-uuid>
|
||||||
|
system host-if-modify -m 1500 -n sriov1 -c pci-sriov ${NODE} <sriov1-if-uuid>
|
||||||
|
|
||||||
|
# Create Data Networks that the 'pci-sriov' interfaces will be connected to
|
||||||
|
DATANET0='datanet0'
|
||||||
|
DATANET1='datanet1'
|
||||||
|
system datanetwork-add ${DATANET0} vlan
|
||||||
|
system datanetwork-add ${DATANET1} vlan
|
||||||
|
|
||||||
|
# Assign Data Networks to PCI-SRIOV Interfaces
|
||||||
|
system interface-datanetwork-assign ${NODE} <sriov0-if-uuid> ${DATANET0}
|
||||||
|
system interface-datanetwork-assign ${NODE} <sriov1-if-uuid> ${DATANET1}
|
||||||
|
|
||||||
|
|
||||||
|
* To enable using |SRIOV| network attachments for the above interfaces in
|
||||||
|
Kubernetes hosted application containers:
|
||||||
|
|
||||||
|
* Configure the Kubernetes |SRIOV| device plugin.
|
||||||
|
|
||||||
|
::
|
||||||
|
|
||||||
|
system host-label-assign controller-0 sriovdp=enabled
|
||||||
|
|
||||||
|
* If planning on running |DPDK| in Kubernetes hosted application
|
||||||
|
containers on this host, configure the number of 1G Huge pages required
|
||||||
|
on both |NUMA| nodes.
|
||||||
|
|
||||||
|
::
|
||||||
|
|
||||||
|
# assign 10x 1G huge page on processor/numa-node 0 on controller-0 to applications
|
||||||
|
system host-memory-modify -f application controller-0 0 -1G 10
|
||||||
|
|
||||||
|
# assign 10x 1G huge page on processor/numa-node 1 on controller-0 to applications
|
||||||
|
system host-memory-modify -f application controller-0 1 -1G 10
|
||||||
|
|
||||||
|
***************************************************************
|
||||||
|
If required, initialize a Ceph-based Persistent Storage Backend
|
||||||
|
***************************************************************
|
||||||
|
|
||||||
|
A persistent storage backend is required if your application requires |PVCs|.
|
||||||
|
|
||||||
|
.. only:: openstack
|
||||||
|
|
||||||
|
.. important::
|
||||||
|
|
||||||
|
The StarlingX OpenStack application **requires** |PVCs|.
|
||||||
|
|
||||||
|
.. only:: starlingx
|
||||||
|
|
||||||
|
There are two options for persistent storage backend: the host-based Ceph
|
||||||
|
solution and the Rook container-based Ceph solution.
|
||||||
|
|
||||||
|
For host-based Ceph:
|
||||||
|
|
||||||
|
#. Initialize with add ceph backend:
|
||||||
|
|
||||||
|
::
|
||||||
|
|
||||||
|
system storage-backend-add ceph --confirmed
|
||||||
|
|
||||||
|
#. Add an |OSD| on controller-0 for host-based Ceph:
|
||||||
|
|
||||||
|
.. code-block:: bash
|
||||||
|
|
||||||
|
# List host’s disks and identify disks you want to use for CEPH OSDs, taking note of their UUID
|
||||||
|
# By default, /dev/sda is used as the system disk and cannot be used for an OSD.
|
||||||
|
system host-disk-list controller-0
|
||||||
|
|
||||||
|
# Add disk as an OSD storage
|
||||||
|
system host-stor-add controller-0 osd <disk-uuid>
|
||||||
|
|
||||||
|
# List OSD storage devices
|
||||||
|
system host-stor-list controller-0
|
||||||
|
|
||||||
|
|
||||||
|
# Add disk as an OSD storage
|
||||||
|
system host-stor-add controller-0 osd <disk-uuid>
|
||||||
|
|
||||||
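Optionally, confirm that the backend and OSD were registered (a convenience check, not part of the original procedure; it assumes system storage-backend-list is available in your release):

::

   # Optional check: the ceph backend should be listed
   # (its state typically completes configuration after the host is unlocked)
   system storage-backend-list
   system host-stor-list controller-0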
|
.. only:: starlingx
|
||||||
|
|
||||||
|
For Rook container-based Ceph:
|
||||||
|
|
||||||
|
#. Initialize with add ceph-rook backend:
|
||||||
|
|
||||||
|
::
|
||||||
|
|
||||||
|
system storage-backend-add ceph-rook --confirmed
|
||||||
|
|
||||||
|
#. Assign Rook host labels to controller-0 in support of installing the
|
||||||
|
rook-ceph-apps manifest/helm-charts later:
|
||||||
|
|
||||||
|
::
|
||||||
|
|
||||||
|
system host-label-assign controller-0 ceph-mon-placement=enabled
|
||||||
|
system host-label-assign controller-0 ceph-mgr-placement=enabled
|
||||||
|
|
||||||
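To confirm the placement labels were applied (an optional check, assuming the system host-label-list command is available), list the host's labels:

::

   # Optional check: ceph-mon-placement and ceph-mgr-placement should appear
   system host-label-list controller-0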
|
|
||||||
-------------------
|
-------------------
|
||||||
Unlock controller-0
|
Unlock controller-0
|
||||||
@ -579,107 +623,7 @@ Configure controller-1
|
|||||||
|
|
||||||
system interface-network-assign controller-1 mgmt0 cluster-host
|
system interface-network-assign controller-1 mgmt0 cluster-host
|
||||||
|
|
||||||
#. Configure data interfaces for controller-1. Use the DATA port names, for
|
.. only:: openstack
|
||||||
example eth0, applicable to your deployment environment.
|
|
||||||
|
|
||||||
This step is optional for Kubernetes. Do this step if using |SRIOV|
|
|
||||||
network attachments in hosted application containers.
|
|
||||||
|
|
||||||
.. only:: starlingx
|
|
||||||
|
|
||||||
.. important::
|
|
||||||
|
|
||||||
This step is **required** for OpenStack.
|
|
||||||
|
|
||||||
|
|
||||||
* Configure the data interfaces
|
|
||||||
|
|
||||||
::
|
|
||||||
|
|
||||||
export NODE=controller-1
|
|
||||||
|
|
||||||
# List inventoried host’s ports and identify ports to be used as ‘data’ interfaces,
|
|
||||||
# based on displayed linux port name, pci address and device type.
|
|
||||||
system host-port-list ${NODE}
|
|
||||||
|
|
||||||
# List host’s auto-configured ‘ethernet’ interfaces,
|
|
||||||
# find the interfaces corresponding to the ports identified in previous step, and
|
|
||||||
# take note of their UUID
|
|
||||||
system host-if-list -a ${NODE}
|
|
||||||
|
|
||||||
# Modify configuration for these interfaces
|
|
||||||
# Configuring them as ‘data’ class interfaces, MTU of 1500 and named data#
|
|
||||||
system host-if-modify -m 1500 -n data0 -c data ${NODE} <data0-if-uuid>
|
|
||||||
system host-if-modify -m 1500 -n data1 -c data ${NODE} <data1-if-uuid>
|
|
||||||
|
|
||||||
# Previously created Data Networks
|
|
||||||
PHYSNET0='physnet0'
|
|
||||||
PHYSNET1='physnet1'
|
|
||||||
|
|
||||||
# Assign Data Networks to Data Interfaces
|
|
||||||
system interface-datanetwork-assign ${NODE} <data0-if-uuid> ${PHYSNET0}
|
|
||||||
system interface-datanetwork-assign ${NODE} <data1-if-uuid> ${PHYSNET1}
|
|
||||||
|
|
||||||
|
|
||||||
* To enable using |SRIOV| network attachments for the above interfaces in
|
|
||||||
Kubernetes hosted application containers:
|
|
||||||
|
|
||||||
* Configure the Kubernetes |SRIOV| device plugin:
|
|
||||||
|
|
||||||
::
|
|
||||||
|
|
||||||
system host-label-assign controller-1 sriovdp=enabled
|
|
||||||
|
|
||||||
* If planning on running |DPDK| in Kubernetes hosted application containers
|
|
||||||
on this host, configure the number of 1G Huge pages required on both
|
|
||||||
|NUMA| nodes:
|
|
||||||
|
|
||||||
::
|
|
||||||
|
|
||||||
# assign 10x 1G huge page on processor/numa-node 0 on controller-1 to applications
|
|
||||||
system host-memory-modify -f application controller-1 0 -1G 10
|
|
||||||
|
|
||||||
# assign 10x 1G huge page on processor/numa-node 1 on controller-1 to applications
|
|
||||||
system host-memory-modify -f application controller-1 1 -1G 10
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
***************************************************************************************
|
|
||||||
If configuring a Ceph-based Persistent Storage Backend, configure host-specific details
|
|
||||||
***************************************************************************************
|
|
||||||
|
|
||||||
For host-based Ceph:
|
|
||||||
|
|
||||||
#. Add an |OSD| on controller-1 for host-based Ceph:
|
|
||||||
|
|
||||||
::
|
|
||||||
|
|
||||||
# List host’s disks and identify disks you want to use for CEPH OSDs, taking note of their UUID
|
|
||||||
# By default, /dev/sda is being used as system disk and can not be used for OSD.
|
|
||||||
system host-disk-list controller-0
|
|
||||||
|
|
||||||
# Add disk as an OSD storage
|
|
||||||
system host-stor-add controller-0 osd <disk-uuid>
|
|
||||||
|
|
||||||
# List OSD storage devices
|
|
||||||
system host-stor-list controller-0
|
|
||||||
|
|
||||||
# Add disk as an OSD storage
|
|
||||||
system host-stor-add controller-0 osd <disk-uuid>
|
|
||||||
|
|
||||||
|
|
||||||
.. only:: starlingx
|
|
||||||
|
|
||||||
For Rook container-based Ceph:
|
|
||||||
|
|
||||||
#. Assign Rook host labels to controller-1 in support of installing the
|
|
||||||
rook-ceph-apps manifest/helm-charts later:
|
|
||||||
|
|
||||||
::
|
|
||||||
|
|
||||||
system host-label-assign controller-1 ceph-mon-placement=enabled
|
|
||||||
system host-label-assign controller-1 ceph-mgr-placement=enabled
|
|
||||||
|
|
||||||
|
|
||||||
*************************************
|
*************************************
|
||||||
OpenStack-specific host configuration
|
OpenStack-specific host configuration
|
||||||
@ -687,7 +631,7 @@ For host-based Ceph:
|
|||||||
|
|
||||||
.. important::
|
.. important::
|
||||||
|
|
||||||
**This step is required only if the StarlingX OpenStack application
|
**These steps are required only if the StarlingX OpenStack application
|
||||||
(stx-openstack) will be installed.**
|
(stx-openstack) will be installed.**
|
||||||
|
|
||||||
#. **For OpenStack only:** Assign OpenStack host labels to controller-1 in
|
#. **For OpenStack only:** Assign OpenStack host labels to controller-1 in
|
||||||
@ -762,6 +706,151 @@ For host-based Ceph:
|
|||||||
system host-pv-add ${NODE} nova-local ${NOVA_PARTITION_UUID}
|
system host-pv-add ${NODE} nova-local ${NOVA_PARTITION_UUID}
|
||||||
sleep 2
|
sleep 2
|
||||||
|
|
||||||
|
#. **For OpenStack only:** Configure data interfaces for controller-1.
|
||||||
|
Data class interfaces are vswitch interfaces used by vswitch to provide
|
||||||
|
VM virtio vNIC connectivity to OpenStack Neutron Tenant Networks on the
|
||||||
|
underlying assigned Data Network.
|
||||||
|
|
||||||
|
.. important::
|
||||||
|
|
||||||
|
A compute-labeled All-in-one controller host **MUST** have at least one Data class interface.
|
||||||
|
|
||||||
|
* Configure the data interfaces for controller-1.
|
||||||
|
|
||||||
|
::
|
||||||
|
|
||||||
|
export NODE=controller-1
|
||||||
|
|
||||||
|
# List inventoried host’s ports and identify ports to be used as ‘data’ interfaces,
|
||||||
|
# based on displayed linux port name, pci address and device type.
|
||||||
|
system host-port-list ${NODE}
|
||||||
|
|
||||||
|
# List host’s auto-configured ‘ethernet’ interfaces,
|
||||||
|
# find the interfaces corresponding to the ports identified in previous step, and
|
||||||
|
# take note of their UUID
|
||||||
|
system host-if-list -a ${NODE}
|
||||||
|
|
||||||
|
# Modify configuration for these interfaces
|
||||||
|
# Configuring them as ‘data’ class interfaces, MTU of 1500 and named data#
|
||||||
|
system host-if-modify -m 1500 -n data0 -c data ${NODE} <data0-if-uuid>
|
||||||
|
system host-if-modify -m 1500 -n data1 -c data ${NODE} <data1-if-uuid>
|
||||||
|
|
||||||
|
# Create Data Networks that vswitch 'data' interfaces will be connected to
|
||||||
|
DATANET0='datanet0'
|
||||||
|
DATANET1='datanet1'
|
||||||
|
system datanetwork-add ${DATANET0} vlan
|
||||||
|
system datanetwork-add ${DATANET1} vlan
|
||||||
|
|
||||||
|
# Assign Data Networks to Data Interfaces
|
||||||
|
system interface-datanetwork-assign ${NODE} <data0-if-uuid> ${DATANET0}
|
||||||
|
system interface-datanetwork-assign ${NODE} <data1-if-uuid> ${DATANET1}
|
||||||
|
|
||||||
|
*****************************************
|
||||||
|
Optionally Configure PCI-SRIOV Interfaces
|
||||||
|
*****************************************
|
||||||
|
|
||||||
|
#. **Optionally**, configure pci-sriov interfaces for controller-1.
|
||||||
|
|
||||||
|
This step is **optional** for Kubernetes. Do this step if using |SRIOV|
|
||||||
|
network attachments in hosted application containers.
|
||||||
|
|
||||||
|
.. only:: openstack
|
||||||
|
|
||||||
|
This step is **optional** for OpenStack. Do this step if using |SRIOV|
|
||||||
|
vNICs in hosted application VMs. Note that pci-sriov interfaces can
|
||||||
|
have the same Data Networks assigned to them as vswitch data interfaces.
|
||||||
|
|
||||||
|
|
||||||
|
* Configure the pci-sriov interfaces for controller-1.
|
||||||
|
|
||||||
|
::
|
||||||
|
|
||||||
|
export NODE=controller-1
|
||||||
|
|
||||||
|
# List inventoried host’s ports and identify ports to be used as ‘pci-sriov’ interfaces,
|
||||||
|
# based on displayed linux port name, pci address and device type.
|
||||||
|
system host-port-list ${NODE}
|
||||||
|
|
||||||
|
# List host’s auto-configured ‘ethernet’ interfaces,
|
||||||
|
# find the interfaces corresponding to the ports identified in previous step, and
|
||||||
|
# take note of their UUID
|
||||||
|
system host-if-list -a ${NODE}
|
||||||
|
|
||||||
|
# Modify configuration for these interfaces
|
||||||
|
# Configuring them as ‘pci-sriov’ class interfaces, MTU of 1500 and named sriov#
|
||||||
|
system host-if-modify -m 1500 -n sriov0 -c pci-sriov ${NODE} <sriov0-if-uuid>
|
||||||
|
system host-if-modify -m 1500 -n sriov1 -c pci-sriov ${NODE} <sriov1-if-uuid>
|
||||||
|
|
||||||
|
# Create Data Networks that the 'pci-sriov' interfaces will be connected to
|
||||||
|
DATANET0='datanet0'
|
||||||
|
DATANET1='datanet1'
|
||||||
|
system datanetwork-add ${DATANET0} vlan
|
||||||
|
system datanetwork-add ${DATANET1} vlan
|
||||||
|
|
||||||
|
# Assign Data Networks to PCI-SRIOV Interfaces
|
||||||
|
system interface-datanetwork-assign ${NODE} <sriov0-if-uuid> ${DATANET0}
|
||||||
|
system interface-datanetwork-assign ${NODE} <sriov1-if-uuid> ${DATANET1}
|
||||||
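Optionally verify the assignments on controller-1 (an added convenience check, assuming system interface-datanetwork-list is available in your release):

::

   # Optional check: each sriov interface should show its assigned data network
   system interface-datanetwork-list ${NODE}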
|
|
||||||
|
|
||||||
|
* To enable using |SRIOV| network attachments for the above interfaces in
|
||||||
|
Kubernetes hosted application containers:
|
||||||
|
|
||||||
|
* Configure the Kubernetes |SRIOV| device plugin.
|
||||||
|
|
||||||
|
::
|
||||||
|
|
||||||
|
system host-label-assign controller-1 sriovdp=enabled
|
||||||
|
|
||||||
|
* If planning on running |DPDK| in Kubernetes hosted application
|
||||||
|
containers on this host, configure the number of 1G Huge pages required
|
||||||
|
on both |NUMA| nodes.
|
||||||
|
|
||||||
|
::
|
||||||
|
|
||||||
|
# assign 10x 1G huge page on processor/numa-node 0 on controller-1 to applications
|
||||||
|
system host-memory-modify -f application controller-1 0 -1G 10
|
||||||
|
|
||||||
|
# assign 10x 1G huge page on processor/numa-node 1 on controller-1 to applications
|
||||||
|
system host-memory-modify -f application controller-1 1 -1G 10
|
||||||
|
|
||||||
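Optionally review the pending huge page configuration for controller-1 (a convenience check, assuming system host-memory-list exists in your release; the allocation takes effect when the host is unlocked):

::

   # Optional check: 1G huge pages requested on each NUMA node
   system host-memory-list controller-1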
|
|
||||||
|
***************************************************************************************
|
||||||
|
If configuring a Ceph-based Persistent Storage Backend, configure host-specific details
|
||||||
|
***************************************************************************************
|
||||||
|
|
||||||
|
For host-based Ceph:
|
||||||
|
|
||||||
|
#. Add an |OSD| on controller-1 for host-based Ceph:
|
||||||
|
|
||||||
|
::
|
||||||
|
|
||||||
|
# List host’s disks and identify disks you want to use for CEPH OSDs, taking note of their UUID
|
||||||
|
# By default, /dev/sda is being used as system disk and can not be used for OSD.
|
||||||
|
system host-disk-list controller-1
|
||||||
|
|
||||||
|
# Add disk as an OSD storage
|
||||||
|
system host-stor-add controller-1 osd <disk-uuid>
|
||||||
|
|
||||||
|
# List OSD storage devices
|
||||||
|
system host-stor-list controller-1
|
||||||
|
|
||||||
|
# Add disk as an OSD storage
|
||||||
|
system host-stor-add controller-1 osd <disk-uuid>
|
||||||
|
|
||||||
|
|
||||||
|
.. only:: starlingx
|
||||||
|
|
||||||
|
For Rook container-based Ceph:
|
||||||
|
|
||||||
|
#. Assign Rook host labels to controller-1 in support of installing the
|
||||||
|
rook-ceph-apps manifest/helm-charts later:
|
||||||
|
|
||||||
|
::
|
||||||
|
|
||||||
|
system host-label-assign controller-1 ceph-mon-placement=enabled
|
||||||
|
system host-label-assign controller-1 ceph-mgr-placement=enabled
|
||||||
|
|
||||||
|
|
||||||
-------------------
|
-------------------
|
||||||
Unlock controller-1
|
Unlock controller-1
|
||||||
-------------------
|
-------------------
|
||||||
|
@ -149,20 +149,20 @@ Bootstrap system on controller-0
|
|||||||
.. code-block::
|
.. code-block::
|
||||||
|
|
||||||
docker_registries:
|
docker_registries:
|
||||||
quay.io:
|
quay.io:
|
||||||
url: myprivateregistry.abc.com:9001/quay.io
|
url: myprivateregistry.abc.com:9001/quay.io
|
||||||
docker.elastic.co:
|
docker.elastic.co:
|
||||||
url: myprivateregistry.abc.com:9001/docker.elastic.co
|
url: myprivateregistry.abc.com:9001/docker.elastic.co
|
||||||
gcr.io:
|
gcr.io:
|
||||||
url: myprivateregistry.abc.com:9001/gcr.io
|
url: myprivateregistry.abc.com:9001/gcr.io
|
||||||
k8s.gcr.io:
|
k8s.gcr.io:
|
||||||
url: myprivateregistry.abc.com:9001/k8s.gcr.io
|
url: myprivateregistry.abc.com:9001/k8s.gcr.io
|
||||||
docker.io:
|
docker.io:
|
||||||
url: myprivateregistry.abc.com:9001/docker.io
|
url: myprivateregistry.abc.com:9001/docker.io
|
||||||
defaults:
|
defaults:
|
||||||
type: docker
|
type: docker
|
||||||
username: <your_myprivateregistry.abc.com_username>
|
username: <your_myprivateregistry.abc.com_username>
|
||||||
password: <your_myprivateregistry.abc.com_password>
|
password: <your_myprivateregistry.abc.com_password>
|
||||||
|
|
||||||
# Add the CA Certificate that signed myprivateregistry.abc.com’s
|
# Add the CA Certificate that signed myprivateregistry.abc.com’s
|
||||||
# certificate as a Trusted CA
|
# certificate as a Trusted CA
|
||||||
@ -236,130 +236,6 @@ The newly installed controller needs to be configured.
|
|||||||
|
|
||||||
system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org
|
system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org
|
||||||
|
|
||||||
#. Configure data interfaces for controller-0. Use the DATA port names, for example
|
|
||||||
eth0, applicable to your deployment environment.
|
|
||||||
|
|
||||||
This step is **optional** for Kubernetes. Do this step if using |SRIOV|
|
|
||||||
network attachments in hosted application containers.
|
|
||||||
|
|
||||||
.. only:: starlingx
|
|
||||||
|
|
||||||
.. important::
|
|
||||||
|
|
||||||
This step is **required** for OpenStack.
|
|
||||||
|
|
||||||
* Configure the data interfaces.
|
|
||||||
|
|
||||||
::
|
|
||||||
|
|
||||||
export NODE=controller-0
|
|
||||||
|
|
||||||
# List inventoried host’s ports and identify ports to be used as ‘data’ interfaces,
|
|
||||||
# based on displayed linux port name, pci address and device type.
|
|
||||||
system host-port-list ${NODE}
|
|
||||||
|
|
||||||
# List host’s auto-configured ‘ethernet’ interfaces,
|
|
||||||
# find the interfaces corresponding to the ports identified in previous step, and
|
|
||||||
# take note of their UUID
|
|
||||||
system host-if-list -a ${NODE}
|
|
||||||
|
|
||||||
# Modify configuration for these interfaces
|
|
||||||
# Configuring them as ‘data’ class interfaces, MTU of 1500 and named data#
|
|
||||||
system host-if-modify -m 1500 -n data0 -c data ${NODE} <data0-if-uuid>
|
|
||||||
system host-if-modify -m 1500 -n data1 -c data ${NODE} <data1-if-uuid>
|
|
||||||
|
|
||||||
# Create Data Networks
|
|
||||||
PHYSNET0='physnet0'
|
|
||||||
PHYSNET1='physnet1'
|
|
||||||
system datanetwork-add ${PHYSNET0} vlan
|
|
||||||
system datanetwork-add ${PHYSNET1} vlan
|
|
||||||
|
|
||||||
# Assign Data Networks to Data Interfaces
|
|
||||||
system interface-datanetwork-assign ${NODE} <data0-if-uuid> ${PHYSNET0}
|
|
||||||
system interface-datanetwork-assign ${NODE} <data1-if-uuid> ${PHYSNET1}
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
* To enable using |SRIOV| network attachments for the above interfaces in
|
|
||||||
Kubernetes hosted application containers:
|
|
||||||
|
|
||||||
* Configure the Kubernetes |SRIOV| device plugin.
|
|
||||||
|
|
||||||
::
|
|
||||||
|
|
||||||
system host-label-assign controller-0 sriovdp=enabled
|
|
||||||
|
|
||||||
* If planning on running |DPDK| in Kubernetes hosted application
|
|
||||||
containers on this host, configure the number of 1G Huge pages required
|
|
||||||
on both |NUMA| nodes.
|
|
||||||
|
|
||||||
::
|
|
||||||
|
|
||||||
# assign 10x 1G huge page on processor/numa-node 0 on controller-0 to applications
|
|
||||||
system host-memory-modify -f application controller-0 0 -1G 10
|
|
||||||
|
|
||||||
# assign 10x 1G huge page on processor/numa-node 1 on controller-0 to applications
|
|
||||||
system host-memory-modify -f application controller-0 1 -1G 10
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
***************************************************************
|
|
||||||
If required, initialize a Ceph-based Persistent Storage Backend
|
|
||||||
***************************************************************
|
|
||||||
|
|
||||||
A persistent storage backend is required if your application requires
|
|
||||||
|PVCs|.
|
|
||||||
|
|
||||||
.. only:: starlingx
|
|
||||||
|
|
||||||
.. important::
|
|
||||||
|
|
||||||
The StarlingX OpenStack application **requires** |PVCs|.
|
|
||||||
|
|
||||||
There are two options for persistent storage backend: the host-based Ceph
|
|
||||||
solution and the Rook container-based Ceph solution.
|
|
||||||
|
|
||||||
For host-based Ceph:
|
|
||||||
|
|
||||||
#. Add host-based Ceph backend:
|
|
||||||
|
|
||||||
::
|
|
||||||
|
|
||||||
system storage-backend-add ceph --confirmed
|
|
||||||
|
|
||||||
#. Add an |OSD| on controller-0 for host-based Ceph:
|
|
||||||
|
|
||||||
.. code-block:: bash
|
|
||||||
|
|
||||||
# List host’s disks and identify disks you want to use for CEPH OSDs, taking note of their UUID
|
|
||||||
# By default, /dev/sda is being used as system disk and can not be used for OSD.
|
|
||||||
system host-disk-list controller-0
|
|
||||||
|
|
||||||
# Add disk as an OSD storage
|
|
||||||
system host-stor-add controller-0 osd <disk-uuid>
|
|
||||||
|
|
||||||
# List OSD storage devices
|
|
||||||
system host-stor-list controller-0
|
|
||||||
|
|
||||||
|
|
||||||
.. only:: starlingx
|
|
||||||
|
|
||||||
For Rook container-based Ceph:
|
|
||||||
|
|
||||||
#. Add Rook container-based backend:
|
|
||||||
|
|
||||||
::
|
|
||||||
|
|
||||||
system storage-backend-add ceph-rook --confirmed
|
|
||||||
|
|
||||||
#. Assign Rook host labels to controller-0 in support of installing the
|
|
||||||
rook-ceph-apps manifest/helm-charts later:
|
|
||||||
|
|
||||||
::
|
|
||||||
|
|
||||||
system host-label-assign controller-0 ceph-mon-placement=enabled
|
|
||||||
system host-label-assign controller-0 ceph-mgr-placement=enabled
|
|
||||||
|
|
||||||
.. only:: openstack
|
.. only:: openstack
|
||||||
|
|
||||||
*************************************
|
*************************************
|
||||||
@ -370,7 +246,7 @@ For host-based Ceph:
|
|||||||
|
|
||||||
.. important::
|
.. important::
|
||||||
|
|
||||||
**This step is required only if the StarlingX OpenStack application
|
**These steps are required only if the StarlingX OpenStack application
|
||||||
(stx-openstack) will be installed.**
|
(stx-openstack) will be installed.**
|
||||||
|
|
||||||
#. **For OpenStack only:** Assign OpenStack host labels to controller-0 in
|
#. **For OpenStack only:** Assign OpenStack host labels to controller-0 in
|
||||||
@ -472,6 +348,175 @@ For host-based Ceph:
|
|||||||
system host-pv-add ${NODE} nova-local ${NOVA_PARTITION_UUID}
|
system host-pv-add ${NODE} nova-local ${NOVA_PARTITION_UUID}
|
||||||
sleep 2
|
sleep 2
|
||||||
|
|
||||||
|
#. **For OpenStack only:** Configure data interfaces for controller-0.
|
||||||
|
Data class interfaces are vswitch interfaces used by vswitch to provide
|
||||||
|
VM virtio vNIC connectivity to OpenStack Neutron Tenant Networks on the
|
||||||
|
underlying assigned Data Network.
|
||||||
|
|
||||||
|
.. important::
|
||||||
|
|
||||||
|
A compute-labeled All-in-one controller host **MUST** have at least one Data class interface.
|
||||||
|
|
||||||
|
* Configure the data interfaces for controller-0.
|
||||||
|
|
||||||
|
::
|
||||||
|
|
||||||
|
export NODE=controller-0
|
||||||
|
|
||||||
|
# List inventoried host’s ports and identify ports to be used as ‘data’ interfaces,
|
||||||
|
# based on displayed linux port name, pci address and device type.
|
||||||
|
system host-port-list ${NODE}
|
||||||
|
|
||||||
|
# List host’s auto-configured ‘ethernet’ interfaces,
|
||||||
|
# find the interfaces corresponding to the ports identified in previous step, and
|
||||||
|
# take note of their UUID
|
||||||
|
system host-if-list -a ${NODE}
|
||||||
|
|
||||||
|
# Modify configuration for these interfaces
|
||||||
|
# Configuring them as ‘data’ class interfaces, MTU of 1500 and named data#
|
||||||
|
system host-if-modify -m 1500 -n data0 -c data ${NODE} <data0-if-uuid>
|
||||||
|
system host-if-modify -m 1500 -n data1 -c data ${NODE} <data1-if-uuid>
|
||||||
|
|
||||||
|
# Create Data Networks that vswitch 'data' interfaces will be connected to
|
||||||
|
DATANET0='datanet0'
|
||||||
|
DATANET1='datanet1'
|
||||||
|
system datanetwork-add ${DATANET0} vlan
|
||||||
|
system datanetwork-add ${DATANET1} vlan
|
||||||
|
|
||||||
|
# Assign Data Networks to Data Interfaces
|
||||||
|
system interface-datanetwork-assign ${NODE} <data0-if-uuid> ${DATANET0}
|
||||||
|
system interface-datanetwork-assign ${NODE} <data1-if-uuid> ${DATANET1}
|
||||||
|
|
||||||
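Before proceeding, you can optionally list what was just configured (an added verification, assuming the standard StarlingX listing commands are available in your release):

::

   # Optional check: data networks and their interface assignments
   system datanetwork-list
   system interface-datanetwork-list ${NODE}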
|
|
||||||
|
*****************************************
|
||||||
|
Optionally Configure PCI-SRIOV Interfaces
|
||||||
|
*****************************************
|
||||||
|
|
||||||
|
#. **Optionally**, configure pci-sriov interfaces for controller-0.
|
||||||
|
|
||||||
|
This step is **optional** for Kubernetes. Do this step if using |SRIOV|
|
||||||
|
network attachments in hosted application containers.
|
||||||
|
|
||||||
|
.. only:: openstack
|
||||||
|
|
||||||
|
This step is **optional** for OpenStack. Do this step if using |SRIOV|
|
||||||
|
vNICs in hosted application VMs. Note that pci-sriov interfaces can
|
||||||
|
have the same Data Networks assigned to them as vswitch data interfaces.
|
||||||
|
|
||||||
|
|
||||||
|
* Configure the pci-sriov interfaces for controller-0.
|
||||||
|
|
||||||
|
::
|
||||||
|
|
||||||
|
export NODE=controller-0
|
||||||
|
|
||||||
|
# List inventoried host’s ports and identify ports to be used as ‘pci-sriov’ interfaces,
|
||||||
|
# based on displayed linux port name, pci address and device type.
|
||||||
|
system host-port-list ${NODE}
|
||||||
|
|
||||||
|
# List host’s auto-configured ‘ethernet’ interfaces,
|
||||||
|
# find the interfaces corresponding to the ports identified in previous step, and
|
||||||
|
# take note of their UUID
|
||||||
|
system host-if-list -a ${NODE}
|
||||||
|
|
||||||
|
# Modify configuration for these interfaces
|
||||||
|
# Configuring them as ‘pci-sriov’ class interfaces, MTU of 1500 and named sriov#
|
||||||
|
system host-if-modify -m 1500 -n sriov0 -c pci-sriov ${NODE} <sriov0-if-uuid>
|
||||||
|
system host-if-modify -m 1500 -n sriov1 -c pci-sriov ${NODE} <sriov1-if-uuid>
|
||||||
|
|
||||||
|
# Create Data Networks that the 'pci-sriov' interfaces will be connected to
|
||||||
|
DATANET0='datanet0'
|
||||||
|
DATANET1='datanet1'
|
||||||
|
system datanetwork-add ${DATANET0} vlan
|
||||||
|
system datanetwork-add ${DATANET1} vlan
|
||||||
|
|
||||||
|
# Assign Data Networks to PCI-SRIOV Interfaces
|
||||||
|
system interface-datanetwork-assign ${NODE} <sriov0-if-uuid> ${DATANET0}
|
||||||
|
system interface-datanetwork-assign ${NODE} <sriov1-if-uuid> ${DATANET1}
|
||||||
|
|
||||||
|
|
||||||
|
* To enable using |SRIOV| network attachments for the above interfaces in
|
||||||
|
Kubernetes hosted application containers:
|
||||||
|
|
||||||
|
* Configure the Kubernetes |SRIOV| device plugin.
|
||||||
|
|
||||||
|
::
|
||||||
|
|
||||||
|
system host-label-assign controller-0 sriovdp=enabled
|
||||||
|
|
||||||
|
* If planning on running |DPDK| in Kubernetes hosted application
|
||||||
|
containers on this host, configure the number of 1G Huge pages required
|
||||||
|
on both |NUMA| nodes.
|
||||||
|
|
||||||
|
::
|
||||||
|
|
||||||
|
# assign 10x 1G huge page on processor/numa-node 0 on controller-0 to applications
|
||||||
|
system host-memory-modify -f application controller-0 0 -1G 10
|
||||||
|
|
||||||
|
# assign 10x 1G huge page on processor/numa-node 1 on controller-0 to applications
|
||||||
|
system host-memory-modify -f application controller-0 1 -1G 10
|
||||||
|
|
||||||
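Optionally, confirm the label and the requested huge pages on controller-0 (a verification sketch, assuming system host-label-list and system host-memory-list are available in your release):

::

   # Optional check: sriovdp=enabled label and pending 1G huge pages
   system host-label-list controller-0
   system host-memory-list controller-0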
|
|
||||||
|
***************************************************************
|
||||||
|
If required, initialize a Ceph-based Persistent Storage Backend
|
||||||
|
***************************************************************
|
||||||
|
|
||||||
|
A persistent storage backend is required if your application requires
|
||||||
|
|PVCs|.
|
||||||
|
|
||||||
|
.. only:: openstack
|
||||||
|
|
||||||
|
.. important::
|
||||||
|
|
||||||
|
The StarlingX OpenStack application **requires** |PVCs|.
|
||||||
|
|
||||||
|
.. only:: starlingx
|
||||||
|
|
||||||
|
There are two options for persistent storage backend: the host-based Ceph
|
||||||
|
solution and the Rook container-based Ceph solution.
|
||||||
|
|
||||||
|
For host-based Ceph:
|
||||||
|
|
||||||
|
#. Add host-based Ceph backend:
|
||||||
|
|
||||||
|
::
|
||||||
|
|
||||||
|
system storage-backend-add ceph --confirmed
|
||||||
|
|
||||||
|
#. Add an |OSD| on controller-0 for host-based Ceph:
|
||||||
|
|
||||||
|
.. code-block:: bash
|
||||||
|
|
||||||
|
# List host’s disks and identify disks you want to use for CEPH OSDs, taking note of their UUID
|
||||||
|
# By default, /dev/sda is used as the system disk and cannot be used for an OSD.
|
||||||
|
system host-disk-list controller-0
|
||||||
|
|
||||||
|
# Add disk as an OSD storage
|
||||||
|
system host-stor-add controller-0 osd <disk-uuid>
|
||||||
|
|
||||||
|
# List OSD storage devices
|
||||||
|
system host-stor-list controller-0
|
||||||
|
|
||||||
|
|
||||||
|
.. only:: starlingx
|
||||||
|
|
||||||
|
For Rook container-based Ceph:
|
||||||
|
|
||||||
|
#. Add Rook container-based backend:
|
||||||
|
|
||||||
|
::
|
||||||
|
|
||||||
|
system storage-backend-add ceph-rook --confirmed
|
||||||
|
|
||||||
|
#. Assign Rook host labels to controller-0 in support of installing the
|
||||||
|
rook-ceph-apps manifest/helm-charts later:
|
||||||
|
|
||||||
|
::
|
||||||
|
|
||||||
|
system host-label-assign controller-0 ceph-mon-placement=enabled
|
||||||
|
system host-label-assign controller-0 ceph-mgr-placement=enabled
|
||||||
|
|
||||||
|
|
||||||
.. incl-config-controller-0-openstack-specific-aio-simplex-end:
|
.. incl-config-controller-0-openstack-specific-aio-simplex-end:
|
||||||
|
|
||||||
|
|
||||||
|
@ -150,20 +150,20 @@ Bootstrap system on controller-0
|
|||||||
.. code-block::
|
.. code-block::
|
||||||
|
|
||||||
docker_registries:
|
docker_registries:
|
||||||
quay.io:
|
quay.io:
|
||||||
url: myprivateregistry.abc.com:9001/quay.io
|
url: myprivateregistry.abc.com:9001/quay.io
|
||||||
docker.elastic.co:
|
docker.elastic.co:
|
||||||
url: myprivateregistry.abc.com:9001/docker.elastic.co
|
url: myprivateregistry.abc.com:9001/docker.elastic.co
|
||||||
gcr.io:
|
gcr.io:
|
||||||
url: myprivateregistry.abc.com:9001/gcr.io
|
url: myprivateregistry.abc.com:9001/gcr.io
|
||||||
k8s.gcr.io:
|
k8s.gcr.io:
|
||||||
url: myprivateregistry.abc.com:9001/k8s.gcr.io
|
url: myprivateregistry.abc.com:9001/k8s.gcr.io
|
||||||
docker.io:
|
docker.io:
|
||||||
url: myprivateregistry.abc.com:9001/docker.io
|
url: myprivateregistry.abc.com:9001/docker.io
|
||||||
defaults:
|
defaults:
|
||||||
type: docker
|
type: docker
|
||||||
username: <your_myprivateregistry.abc.com_username>
|
username: <your_myprivateregistry.abc.com_username>
|
||||||
password: <your_myprivateregistry.abc.com_password>
|
password: <your_myprivateregistry.abc.com_password>
|
||||||
|
|
||||||
# Add the CA Certificate that signed myprivateregistry.abc.com’s
|
# Add the CA Certificate that signed myprivateregistry.abc.com’s
|
||||||
# certificate as a Trusted CA
|
# certificate as a Trusted CA
|
||||||
@ -258,16 +258,15 @@ Configure controller-0
|
|||||||
|
|
||||||
system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org
|
system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org
|
||||||
|
|
||||||
#. Configure Ceph storage backend:
|
#. If required, configure Ceph storage backend:
|
||||||
|
|
||||||
This step is required only if your application requires persistent storage.
|
A persistent storage backend is required if your application requires |PVCs|.
|
||||||
|
|
||||||
.. only:: starlingx
|
.. only:: openstack
|
||||||
|
|
||||||
.. important::
|
.. important::
|
||||||
|
|
||||||
**If you want to install the StarlingX Openstack application
|
The StarlingX OpenStack application **requires** |PVCs|.
|
||||||
(stx-openstack), this step is mandatory.**
|
|
||||||
|
|
||||||
::
|
::
|
||||||
|
|
||||||
@ -281,7 +280,7 @@ Configure controller-0
|
|||||||
|
|
||||||
.. important::
|
.. important::
|
||||||
|
|
||||||
**This step is required only if the StarlingX OpenStack application
|
**These steps are required only if the StarlingX OpenStack application
|
||||||
(stx-openstack) will be installed.**
|
(stx-openstack) will be installed.**
|
||||||
|
|
||||||
#. **For OpenStack only:** Assign OpenStack host labels to controller-0 in
|
#. **For OpenStack only:** Assign OpenStack host labels to controller-0 in
|
||||||
@ -523,78 +522,6 @@ Configure worker nodes
|
|||||||
system interface-network-assign $NODE mgmt0 cluster-host
|
system interface-network-assign $NODE mgmt0 cluster-host
|
||||||
done
|
done
|
||||||
|
|
||||||
#. Configure data interfaces for worker nodes. Use the DATA port names, for
|
|
||||||
example eth0, that are applicable to your deployment environment.
|
|
||||||
|
|
||||||
This step is optional for Kubernetes: Do this step if using |SRIOV| network
|
|
||||||
attachments in hosted application containers.
|
|
||||||
|
|
||||||
.. only:: starlingx
|
|
||||||
|
|
||||||
.. important::
|
|
||||||
|
|
||||||
This step is **required** for OpenStack.
|
|
||||||
|
|
||||||
* Configure the data interfaces
|
|
||||||
|
|
||||||
::
|
|
||||||
|
|
||||||
# Execute the following lines with
|
|
||||||
export NODE=worker-0
|
|
||||||
# and then repeat with
|
|
||||||
export NODE=worker-1
|
|
||||||
|
|
||||||
# List inventoried host’s ports and identify ports to be used as ‘data’ interfaces,
|
|
||||||
# based on displayed linux port name, pci address and device type.
|
|
||||||
system host-port-list ${NODE}
|
|
||||||
|
|
||||||
# List host’s auto-configured ‘ethernet’ interfaces,
|
|
||||||
# find the interfaces corresponding to the ports identified in previous step, and
|
|
||||||
# take note of their UUID
|
|
||||||
system host-if-list -a ${NODE}
|
|
||||||
|
|
||||||
# Modify configuration for these interfaces
|
|
||||||
# Configuring them as ‘data’ class interfaces, MTU of 1500 and named data#
|
|
||||||
system host-if-modify -m 1500 -n data0 -c data ${NODE} <data0-if-uuid>
|
|
||||||
system host-if-modify -m 1500 -n data1 -c data ${NODE} <data1-if-uuid>
|
|
||||||
|
|
||||||
# Create Data Networks
|
|
||||||
PHYSNET0='physnet0'
|
|
||||||
PHYSNET1='physnet1'
|
|
||||||
system datanetwork-add ${PHYSNET0} vlan
|
|
||||||
system datanetwork-add ${PHYSNET1} vlan
|
|
||||||
|
|
||||||
# Assign Data Networks to Data Interfaces
|
|
||||||
system interface-datanetwork-assign ${NODE} <data0-if-uuid> ${PHYSNET0}
|
|
||||||
system interface-datanetwork-assign ${NODE} <data1-if-uuid> ${PHYSNET1}
|
|
||||||
|
|
||||||
* To enable using |SRIOV| network attachments for the above interfaces in
|
|
||||||
Kubernetes hosted application containers:
|
|
||||||
|
|
||||||
* Configure |SRIOV| device plug in:
|
|
||||||
|
|
||||||
::
|
|
||||||
|
|
||||||
for NODE in worker-0 worker-1; do
|
|
||||||
system host-label-assign ${NODE} sriovdp=enabled
|
|
||||||
done
|
|
||||||
|
|
||||||
* If planning on running DPDK in containers on this host, configure the number
|
|
||||||
of 1G Huge pages required on both |NUMA| nodes:
|
|
||||||
|
|
||||||
::
|
|
||||||
|
|
||||||
for NODE in worker-0 worker-1; do
|
|
||||||
|
|
||||||
# assign 10x 1G huge page on processor/numa-node 0 on worker-node to applications
|
|
||||||
system host-memory-modify -f application $NODE 0 -1G 10
|
|
||||||
|
|
||||||
# assign 10x 1G huge page on processor/numa-node 1 on worker-node to applications
|
|
||||||
system host-memory-modify -f application $NODE 1 -1G 10
|
|
||||||
|
|
||||||
done
|
|
||||||
|
|
||||||
|
|
||||||
.. only:: openstack

   *************************************
@ -603,7 +530,7 @@ Configure worker nodes

   .. important::

      **These steps are required only if the StarlingX OpenStack application
      (stx-openstack) will be installed.**

#. **For OpenStack only:** Assign OpenStack host labels to the worker nodes in
@ -612,18 +539,18 @@ Configure worker nodes
   ::

      for NODE in worker-0 worker-1; do
        system host-label-assign $NODE openstack-compute-node=enabled
        system host-label-assign $NODE openvswitch=enabled
        system host-label-assign $NODE sriov=enabled
      done

#. **For OpenStack only:** Configure the host settings for the vSwitch.

   **If using OVS-DPDK vswitch, run the following commands:**

   The default recommendation for a worker node is to use a single core on
   each numa-node for the |OVS|-|DPDK| vswitch. This should have been
   configured automatically; if not, run the following command.

   ::

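      # The exact commands are outside this hunk; hedged sketch only, assuming
      # the standard vswitch core-function assignment syntax:
      for NODE in worker-0 worker-1; do
        # allocate 1 core on processor/numa-node 0 of each worker node to vswitch
        system host-cpu-modify -f vswitch -p0 1 $NODE
      done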
@ -638,8 +565,9 @@ Configure worker nodes
      done

   When using |OVS|-|DPDK|, configure 1x 1G huge page for vSwitch memory on
   each |NUMA| node where vswitch is running on this host, with the
   following command:

   ::

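      # Outside this hunk; hedged sketch only, mirroring the host-memory-modify
      # pattern used elsewhere in this guide, with the vswitch memory function:
      for NODE in worker-0 worker-1; do
        # allocate 1x 1G huge page for vswitch on each processor/numa-node
        system host-memory-modify -f vswitch $NODE 0 -1G 1
        system host-memory-modify -f vswitch $NODE 1 -1G 1
      done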
@ -660,8 +588,8 @@ Configure worker nodes
      huge pages to enable networking and must use a flavor with property:
      hw:mem_page_size=large

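      For example (illustration only, not part of this change), the property can
      be set on a flavor with the OpenStack client:

      ::

         openstack flavor set <flavor-name> --property hw:mem_page_size=large
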
   Configure the huge pages for |VMs| in an |OVS|-|DPDK| environment for
   this host with the command:

   ::

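      # Outside this hunk; hedged sketch only, mirroring the application huge page
      # commands shown earlier in this guide:
      for NODE in worker-0 worker-1; do
        # assign 1G huge pages on each processor/numa-node to applications (VMs)
        system host-memory-modify -f application $NODE 0 -1G 10
        system host-memory-modify -f application $NODE 1 -1G 10
      done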
@ -675,22 +603,142 @@ Configure worker nodes

      done

#. **For OpenStack only:** Set up disk partition for nova-local volume group,
   which is needed for stx-openstack nova ephemeral disks.

   ::

      for NODE in worker-0 worker-1; do
        echo "Configuring Nova local for: $NODE"
        ROOT_DISK=$(system host-show ${NODE} | grep rootfs | awk '{print $4}')
        ROOT_DISK_UUID=$(system host-disk-list ${NODE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
        PARTITION_SIZE=10
        NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${NODE} ${ROOT_DISK_UUID} ${PARTITION_SIZE})
        NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
        system host-lvg-add ${NODE} nova-local
        system host-pv-add ${NODE} nova-local ${NOVA_PARTITION_UUID}
      done

#. **For OpenStack only:** Configure data interfaces for worker nodes.
   Data class interfaces are vswitch interfaces used by vswitch to provide
   VM virtio vNIC connectivity to OpenStack Neutron Tenant Networks on the
   underlying assigned Data Network.

   .. important::

      A compute-labeled worker host **MUST** have at least one Data class interface.

   * Configure the data interfaces for worker nodes.

     ::

        # Execute the following lines with
        export NODE=worker-0
        # and then repeat with
        export NODE=worker-1

        # List inventoried host’s ports and identify ports to be used as ‘data’ interfaces,
        # based on displayed linux port name, pci address and device type.
        system host-port-list ${NODE}

        # List host’s auto-configured ‘ethernet’ interfaces,
        # find the interfaces corresponding to the ports identified in previous step, and
        # take note of their UUID
        system host-if-list -a ${NODE}

        # Modify configuration for these interfaces
        # Configuring them as ‘data’ class interfaces, MTU of 1500 and named data#
        system host-if-modify -m 1500 -n data0 -c data ${NODE} <data0-if-uuid>
        system host-if-modify -m 1500 -n data1 -c data ${NODE} <data1-if-uuid>

        # Create Data Networks that vswitch 'data' interfaces will be connected to
        DATANET0='datanet0'
        DATANET1='datanet1'
        system datanetwork-add ${DATANET0} vlan
        system datanetwork-add ${DATANET1} vlan

        # Assign Data Networks to Data Interfaces
        system interface-datanetwork-assign ${NODE} <data0-if-uuid> ${DATANET0}
        system interface-datanetwork-assign ${NODE} <data1-if-uuid> ${DATANET1}

*****************************************
Optionally Configure PCI-SRIOV Interfaces
*****************************************

#. **Optionally**, configure pci-sriov interfaces for worker nodes.

   This step is **optional** for Kubernetes. Do this step if using |SRIOV|
   network attachments in hosted application containers.

   .. only:: openstack

      This step is **optional** for OpenStack. Do this step if using |SRIOV|
      vNICs in hosted application VMs. Note that pci-sriov interfaces can
      have the same Data Networks assigned to them as vswitch data interfaces.

   * Configure the pci-sriov interfaces for worker nodes.

     ::

        # Execute the following lines with
        export NODE=worker-0
        # and then repeat with
        export NODE=worker-1

        # List inventoried host’s ports and identify ports to be used as ‘pci-sriov’ interfaces,
        # based on displayed linux port name, pci address and device type.
        system host-port-list ${NODE}

        # List host’s auto-configured ‘ethernet’ interfaces,
        # find the interfaces corresponding to the ports identified in previous step, and
        # take note of their UUID
        system host-if-list -a ${NODE}

        # Modify configuration for these interfaces
        # Configuring them as ‘pci-sriov’ class interfaces, MTU of 1500 and named sriov#
        system host-if-modify -m 1500 -n sriov0 -c pci-sriov ${NODE} <sriov0-if-uuid>
        system host-if-modify -m 1500 -n sriov1 -c pci-sriov ${NODE} <sriov1-if-uuid>

        # Create Data Networks that the 'pci-sriov' interfaces will be connected to
        DATANET0='datanet0'
        DATANET1='datanet1'
        system datanetwork-add ${DATANET0} vlan
        system datanetwork-add ${DATANET1} vlan

        # Assign Data Networks to PCI-SRIOV Interfaces
        system interface-datanetwork-assign ${NODE} <sriov0-if-uuid> ${DATANET0}
        system interface-datanetwork-assign ${NODE} <sriov1-if-uuid> ${DATANET1}

   * To enable using |SRIOV| network attachments for the above interfaces in
     Kubernetes hosted application containers:

     * Configure the Kubernetes |SRIOV| device plugin.

       ::

          for NODE in worker-0 worker-1; do
            system host-label-assign $NODE sriovdp=enabled
          done

     * If planning on running |DPDK| in Kubernetes hosted application
       containers on this host, configure the number of 1G Huge pages required
       on both |NUMA| nodes.

       ::

          for NODE in worker-0 worker-1; do

            # assign 10x 1G huge page on processor/numa-node 0 on worker-node to applications
            system host-memory-modify -f application $NODE 0 -1G 10

            # assign 10x 1G huge page on processor/numa-node 1 on worker-node to applications
            system host-memory-modify -f application $NODE 1 -1G 10

          done

--------------------
Unlock worker nodes
--------------------
@ -706,49 +754,41 @@ Unlock worker nodes in order to bring them into service:
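The unlock commands themselves sit outside this hunk. A hedged sketch,
assuming the host-unlock command used throughout these guides:

::

   for NODE in worker-0 worker-1; do
     system host-unlock $NODE
   done
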
The worker nodes will reboot in order to apply configuration changes and come
into service. This can take 5-10 minutes, depending on the performance of the
host machine.

-----------------------------------------------------------------
If configuring Ceph Storage Backend, add Ceph OSDs to controllers
-----------------------------------------------------------------

#. Add |OSDs| to controller-0. The following example adds |OSDs| to the `sdb` disk:

   .. important::

      This step requires a configured Ceph storage backend.

   ::

      HOST=controller-0

      DISKS=$(system host-disk-list ${HOST})
      TIERS=$(system storage-tier-list ceph_cluster)
      OSDs="/dev/sdb"
      for OSD in $OSDs; do
        system host-stor-add ${HOST} $(echo "$DISKS" | grep "$OSD" | awk '{print $2}') --tier-uuid $(echo "$TIERS" | grep storage | awk '{print $2}')
        while true; do system host-stor-list ${HOST} | grep ${OSD} | grep configuring; if [ $? -ne 0 ]; then break; fi; sleep 1; done
      done

      # List host's disks and identify disks you want to use for CEPH OSDs, taking note of their UUID
      # By default, /dev/sda is used as the system disk and cannot be used for an OSD.
      system host-disk-list ${HOST}

      # Add disk as an OSD storage
      system host-stor-add ${HOST} osd <disk-uuid>

      # List OSD storage devices and wait for configuration of newly added OSD to complete.
      system host-stor-list ${HOST}

#. Add |OSDs| to controller-1. The following example adds |OSDs| to the `sdb` disk:

   .. important::

      This step requires a configured Ceph storage backend.

   ::

      HOST=controller-1

      DISKS=$(system host-disk-list ${HOST})
      TIERS=$(system storage-tier-list ceph_cluster)
      OSDs="/dev/sdb"
      for OSD in $OSDs; do
        system host-stor-add ${HOST} $(echo "$DISKS" | grep "$OSD" | awk '{print $2}') --tier-uuid $(echo "$TIERS" | grep storage | awk '{print $2}')
        while true; do system host-stor-list ${HOST} | grep ${OSD} | grep configuring; if [ $? -ne 0 ]; then break; fi; sleep 1; done
      done

      # List host's disks and identify disks you want to use for CEPH OSDs, taking note of their UUID
      # By default, /dev/sda is used as the system disk and cannot be used for an OSD.
      system host-disk-list ${HOST}

      # Add disk as an OSD storage
      system host-stor-add ${HOST} osd <disk-uuid>

      # List OSD storage devices and wait for configuration of newly added OSD to complete.
      system host-stor-list ${HOST}

.. only:: starlingx

@ -245,10 +245,12 @@ host machine.

Configure worker nodes
----------------------

#. The MGMT interfaces are partially set up by the network install procedure,
   which configures the port used for network install as the MGMT port and
   specifies the attached network of "mgmt".

   Complete the MGMT interface configuration of the worker nodes by specifying
   the attached network of "cluster-host".

   ::

@ -256,75 +258,7 @@ Configure worker nodes

        system interface-network-assign $NODE mgmt0 cluster-host
      done

#. Configure data interfaces for worker nodes. Use the DATA port names, for
   example eth0, that are applicable to your deployment environment.

   This step is optional for Kubernetes: do this step only if using |SRIOV|
   network attachments in hosted application containers.

   .. only:: starlingx

      .. important::

         This step is **required** for OpenStack.

   * Configure the data interfaces.

     ::

        # Execute the following lines with
        export NODE=worker-0
        # and then repeat with
        export NODE=worker-1

        # List inventoried host’s ports and identify ports to be used as ‘data’ interfaces,
        # based on displayed linux port name, pci address and device type.
        system host-port-list ${NODE}

        # List host’s auto-configured ‘ethernet’ interfaces,
        # find the interfaces corresponding to the ports identified in previous step, and
        # take note of their UUID
        system host-if-list -a ${NODE}

        # Modify configuration for these interfaces
        # Configuring them as ‘data’ class interfaces, MTU of 1500 and named data#
        system host-if-modify -m 1500 -n data0 -c data ${NODE} <data0-if-uuid>
        system host-if-modify -m 1500 -n data1 -c data ${NODE} <data1-if-uuid>

        # Create Data Networks
        PHYSNET0='physnet0'
        PHYSNET1='physnet1'
        system datanetwork-add ${PHYSNET0} vlan
        system datanetwork-add ${PHYSNET1} vlan

        # Assign Data Networks to Data Interfaces
        system interface-datanetwork-assign ${NODE} <data0-if-uuid> ${PHYSNET0}
        system interface-datanetwork-assign ${NODE} <data1-if-uuid> ${PHYSNET1}

   * To enable using |SRIOV| network attachments for the above interfaces in
     Kubernetes hosted application containers:

     * Configure the |SRIOV| device plugin:

       ::

          for NODE in worker-0 worker-1; do
            system host-label-assign ${NODE} sriovdp=enabled
          done

     * If planning on running |DPDK| in containers on this host, configure the
       number of 1G Huge pages required on both |NUMA| nodes:

       ::

          for NODE in worker-0 worker-1; do
            system host-memory-modify ${NODE} 0 -1G 100
            system host-memory-modify ${NODE} 1 -1G 100
          done

.. only:: openstack

   *************************************
   OpenStack-specific host configuration

@ -332,7 +266,7 @@ Configure worker nodes

   .. important::

      **These steps are required only if the StarlingX OpenStack application
      (stx-openstack) will be installed.**

#. **For OpenStack only:** Assign OpenStack host labels to the worker nodes in

@ -340,18 +274,18 @@ Configure worker nodes

   ::

      for NODE in worker-0 worker-1; do
        system host-label-assign $NODE openstack-compute-node=enabled
        system host-label-assign $NODE openvswitch=enabled
        system host-label-assign $NODE sriov=enabled
      done

#. **For OpenStack only:** Configure the host settings for the vSwitch.

   **If using OVS-DPDK vswitch, run the following commands:**

   The default recommendation for a worker node is to use a single core on
   each numa-node for the |OVS|-|DPDK| vswitch. This should have been
   configured automatically; if not, run the following command.

   ::

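      # As above, the exact commands are outside this hunk; hedged sketch only,
      # assuming the standard vswitch core-function assignment syntax:
      for NODE in worker-0 worker-1; do
        system host-cpu-modify -f vswitch -p0 1 $NODE
      done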
@ -405,21 +339,141 @@ Configure worker nodes

      done

#. **For OpenStack only:** Set up disk partition for nova-local volume group,
   which is needed for stx-openstack nova ephemeral disks.

   ::

      for NODE in worker-0 worker-1; do
        echo "Configuring Nova local for: $NODE"
        ROOT_DISK=$(system host-show ${NODE} | grep rootfs | awk '{print $4}')
        ROOT_DISK_UUID=$(system host-disk-list ${NODE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
        PARTITION_SIZE=10
        NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${NODE} ${ROOT_DISK_UUID} ${PARTITION_SIZE})
        NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
        system host-lvg-add ${NODE} nova-local
        system host-pv-add ${NODE} nova-local ${NOVA_PARTITION_UUID}
      done

#. **For OpenStack only:** Configure data interfaces for worker nodes.
   Data class interfaces are vswitch interfaces used by vswitch to provide
   VM virtio vNIC connectivity to OpenStack Neutron Tenant Networks on the
   underlying assigned Data Network.

   .. important::

      A compute-labeled worker host **MUST** have at least one Data class interface.

   * Configure the data interfaces for worker nodes.

     ::

        # Execute the following lines with
        export NODE=worker-0
        # and then repeat with
        export NODE=worker-1

        # List inventoried host’s ports and identify ports to be used as ‘data’ interfaces,
        # based on displayed linux port name, pci address and device type.
        system host-port-list ${NODE}

        # List host’s auto-configured ‘ethernet’ interfaces,
        # find the interfaces corresponding to the ports identified in previous step, and
        # take note of their UUID
        system host-if-list -a ${NODE}

        # Modify configuration for these interfaces
        # Configuring them as ‘data’ class interfaces, MTU of 1500 and named data#
        system host-if-modify -m 1500 -n data0 -c data ${NODE} <data0-if-uuid>
        system host-if-modify -m 1500 -n data1 -c data ${NODE} <data1-if-uuid>

        # Create Data Networks that vswitch 'data' interfaces will be connected to
        DATANET0='datanet0'
        DATANET1='datanet1'
        system datanetwork-add ${DATANET0} vlan
        system datanetwork-add ${DATANET1} vlan

        # Assign Data Networks to Data Interfaces
        system interface-datanetwork-assign ${NODE} <data0-if-uuid> ${DATANET0}
        system interface-datanetwork-assign ${NODE} <data1-if-uuid> ${DATANET1}

*****************************************
Optionally Configure PCI-SRIOV Interfaces
*****************************************

#. **Optionally**, configure pci-sriov interfaces for worker nodes.

   This step is **optional** for Kubernetes. Do this step if using |SRIOV|
   network attachments in hosted application containers.

   .. only:: openstack

      This step is **optional** for OpenStack. Do this step if using |SRIOV|
      vNICs in hosted application VMs. Note that pci-sriov interfaces can
      have the same Data Networks assigned to them as vswitch data interfaces.

   * Configure the pci-sriov interfaces for worker nodes.

     ::

        # Execute the following lines with
        export NODE=worker-0
        # and then repeat with
        export NODE=worker-1

        # List inventoried host’s ports and identify ports to be used as ‘pci-sriov’ interfaces,
        # based on displayed linux port name, pci address and device type.
        system host-port-list ${NODE}

        # List host’s auto-configured ‘ethernet’ interfaces,
        # find the interfaces corresponding to the ports identified in previous step, and
        # take note of their UUID
        system host-if-list -a ${NODE}

        # Modify configuration for these interfaces
        # Configuring them as ‘pci-sriov’ class interfaces, MTU of 1500 and named sriov#
        system host-if-modify -m 1500 -n sriov0 -c pci-sriov ${NODE} <sriov0-if-uuid>
        system host-if-modify -m 1500 -n sriov1 -c pci-sriov ${NODE} <sriov1-if-uuid>

        # Create Data Networks that the 'pci-sriov' interfaces will be connected to
        DATANET0='datanet0'
        DATANET1='datanet1'
        system datanetwork-add ${DATANET0} vlan
        system datanetwork-add ${DATANET1} vlan

        # Assign Data Networks to PCI-SRIOV Interfaces
        system interface-datanetwork-assign ${NODE} <sriov0-if-uuid> ${DATANET0}
        system interface-datanetwork-assign ${NODE} <sriov1-if-uuid> ${DATANET1}

   * To enable using |SRIOV| network attachments for the above interfaces in
     Kubernetes hosted application containers:

     * Configure the Kubernetes |SRIOV| device plugin.

       ::

          for NODE in worker-0 worker-1; do
            system host-label-assign $NODE sriovdp=enabled
          done

     * If planning on running |DPDK| in Kubernetes hosted application
       containers on this host, configure the number of 1G Huge pages required
       on both |NUMA| nodes.

       ::

          for NODE in worker-0 worker-1; do

            # assign 10x 1G huge page on processor/numa-node 0 on worker-node to applications
            system host-memory-modify -f application $NODE 0 -1G 10

            # assign 10x 1G huge page on processor/numa-node 1 on worker-node to applications
            system host-memory-modify -f application $NODE 1 -1G 10

          done

-------------------
Unlock worker nodes
-------------------