.. begin-aio-dx-install-verify-ip-connectivity

External connectivity is required to run the Ansible bootstrap
playbook. The StarlingX boot image attempts |DHCP| on all interfaces,
so if a |DHCP| server is present in your environment the server may
already have obtained an IP address and have external IP
connectivity. Verify this using the :command:`ip addr` and
:command:`ping 8.8.8.8` commands.

Otherwise, manually configure an IP address and default IP route.
Use the ``PORT``, ``IP-ADDRESS``/``SUBNET-LENGTH`` and
``GATEWAY-IP-ADDRESS`` values applicable to your deployment
environment.

.. code-block:: bash

   sudo ip address add <IP-ADDRESS>/<SUBNET-LENGTH> dev <PORT>
   sudo ip link set up dev <PORT>
   sudo ip route add default via <GATEWAY-IP-ADDRESS> dev <PORT>
   ping 8.8.8.8
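
For example, with hypothetical values of port ``enp0s3``, address
``10.10.10.3/24`` and gateway ``10.10.10.1``:

.. code-block:: bash

   sudo ip address add 10.10.10.3/24 dev enp0s3
   sudo ip link set up dev enp0s3
   sudo ip route add default via 10.10.10.1 dev enp0s3
   ping -c 4 8.8.8.8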

.. end-aio-dx-install-verify-ip-connectivity



.. begin-config-controller-0-oam-interface-dx

The following example configures the |OAM| interface on a physical
untagged Ethernet port. Use the |OAM| port name applicable to your
deployment environment, for example eth0:

.. code-block:: none

    ~(keystone_admin)$ OAM_IF=<OAM-PORT>
    ~(keystone_admin)$ system host-if-modify controller-0 $OAM_IF -c platform
    ~(keystone_admin)$ system interface-network-assign controller-0 $OAM_IF oam
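
To confirm the assignment, list the platform networks assigned to
controller-0 interfaces; the |OAM| interface should appear against the
``oam`` network:

.. code-block:: none

    ~(keystone_admin)$ system interface-network-list controller-0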

.. end-config-controller-0-oam-interface-dx



.. begin-config-controller-0-ntp-interface-dx

.. code-block:: none

    ~(keystone_admin)$ system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org
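
You can verify the configured servers by querying the |NTP| configuration:

.. code-block:: none

    ~(keystone_admin)$ system ntp-show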

To configure |PTP| instead of |NTP|, see |ptp-server-config-index|.

.. end-config-controller-0-ntp-interface-dx



.. begin-config-controller-0-OS-k8s-sriov-dx

* Configure the Kubernetes |SRIOV| device plugin.

  .. code-block:: none

     ~(keystone_admin)$ system host-label-assign controller-0 sriovdp=enabled

* If you plan to run |DPDK| in Kubernetes-hosted application
  containers on this host, configure the number of 1G huge pages required on
  both |NUMA| nodes.

  .. code-block:: bash

     # assign 10x 1G huge pages on processor/numa-node 0 on controller-0 to applications
     ~(keystone_admin)$ system host-memory-modify -f application controller-0 0 -1G 10

     # assign 10x 1G huge pages on processor/numa-node 1 on controller-0 to applications
     ~(keystone_admin)$ system host-memory-modify -f application controller-0 1 -1G 10
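
  To confirm the configuration, list the host's memory assignments per
  |NUMA| node and check the 1G huge page counts:

  .. code-block:: bash

     ~(keystone_admin)$ system host-memory-list controller-0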

.. end-config-controller-0-OS-k8s-sriov-dx


.. begin-power-on-controller-1-server-dx

Power on the controller-1 server and force it to network boot with
the appropriate BIOS boot options for your particular server.

.. end-power-on-controller-1-server-dx



.. begin-config-controller-1-server-oam-dx

The following example configures the |OAM| interface on a physical untagged
Ethernet port. Use the |OAM| port name applicable to your
deployment environment, for example eth0:

.. code-block:: none

   ~(keystone_admin)$ OAM_IF=<OAM-PORT>
   ~(keystone_admin)$ system host-if-modify controller-1 $OAM_IF -c platform
   ~(keystone_admin)$ system interface-network-assign controller-1 $OAM_IF oam

.. end-config-controller-1-server-oam-dx


.. begin-config-k8s-sriov-controller-1-dx

* Configure the Kubernetes |SRIOV| device plugin.

  .. code-block:: bash

     ~(keystone_admin)$ system host-label-assign controller-1 sriovdp=enabled

* If you plan to run |DPDK| in Kubernetes-hosted application
  containers on this host, configure the number of 1G huge pages required
  on both |NUMA| nodes.

  .. code-block:: bash

     # assign 10x 1G huge pages on processor/numa-node 0 on controller-1 to applications
     ~(keystone_admin)$ system host-memory-modify -f application controller-1 0 -1G 10

     # assign 10x 1G huge pages on processor/numa-node 1 on controller-1 to applications
     ~(keystone_admin)$ system host-memory-modify -f application controller-1 1 -1G 10

.. end-config-k8s-sriov-controller-1-dx


.. begin-install-sw-on-workers-power-on-dx

Power on the worker node servers and force them to network boot with the
appropriate BIOS boot options for your particular server.

.. end-install-sw-on-workers-power-on-dx



.. begin-os-specific-host-config-sriov-dx

* Configure the Kubernetes |SRIOV| device plugin.

  .. code-block:: bash

     for NODE in worker-0 worker-1; do
        system host-label-assign $NODE sriovdp=enabled
     done

* If you plan to run |DPDK| in Kubernetes-hosted application containers on
  these hosts, configure the number of 1G huge pages required on both |NUMA|
  nodes.

  .. code-block:: bash

     for NODE in worker-0 worker-1; do
        # assign 10x 1G huge pages on processor/numa-node 0 on each worker node to applications
        system host-memory-modify -f application $NODE 0 -1G 10

        # assign 10x 1G huge pages on processor/numa-node 1 on each worker node to applications
        system host-memory-modify -f application $NODE 1 -1G 10
     done

.. end-os-specific-host-config-sriov-dx



.. begin-config-controller-0-OS-add-cores-dx

A minimum of 4 platform cores is required; 6 platform cores are
recommended.

Increase the number of platform cores with the following command.
This example assigns 6 cores on processor/numa-node 0 on
controller-0 to platform.

.. code-block:: bash

   ~(keystone_admin)$ system host-cpu-modify -f platform -p0 6 controller-0
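
To confirm the assignment, list the per-core functions on controller-0 and
check that the expected cores on processor 0 now report the platform
function:

.. code-block:: bash

   ~(keystone_admin)$ system host-cpu-list controller-0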

.. end-config-controller-0-OS-add-cores-dx




.. begin-config-controller-0-OS-vswitch-dx

To deploy |OVS-DPDK|, run the following command:

.. parsed-literal::

   ~(keystone_admin)$ system modify --vswitch_type |ovs-dpdk|
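
As an optional check, the currently configured vSwitch type should be
reflected in the system attributes:

.. code-block:: bash

   ~(keystone_admin)$ system show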

The default recommendation for an |AIO|-controller is to use a single
core for the |OVS-DPDK| vSwitch.

.. code-block:: bash

   # assign 1 core on processor/numa-node 0 on controller-0 to vswitch
   ~(keystone_admin)$ system host-cpu-modify -f vswitch -p0 1 controller-0

Once ``vswitch_type`` is set to |OVS-DPDK|, any subsequently created
nodes will automatically be assigned 1 vSwitch core for |AIO|
controllers and 2 vSwitch cores (both on |NUMA| node 0; physical
|NICs| are typically on the first |NUMA| node) for compute-labeled
worker nodes.

When using |OVS-DPDK|, configure 1G of huge pages for vSwitch memory on
each |NUMA| node on the host. It is recommended
to configure 1x 1G huge page (-1G 1) for vSwitch memory on each |NUMA|
node on the host.

However, due to a limitation with Kubernetes, only a single huge page
size is supported on any one host. If your application |VMs| require 2M
huge pages, then configure 500x 2M huge pages (-2M 500) for vSwitch
memory on each |NUMA| node on the host.

.. code-block:: bash

   # Assign 1x 1G huge page on processor/numa-node 0 on controller-0 to vswitch
   ~(keystone_admin)$ system host-memory-modify -f vswitch -1G 1 controller-0 0

   # Assign 1x 1G huge page on processor/numa-node 1 on controller-0 to vswitch
   ~(keystone_admin)$ system host-memory-modify -f vswitch -1G 1 controller-0 1

.. important::

   |VMs| created in an |OVS-DPDK| environment must be configured to use
   huge pages to enable networking and must use a flavor with property:
   ``hw:mem_page_size=large``

   To configure huge pages for |VMs| in an |OVS-DPDK| environment on
   this host, use the following example commands, which assume that a
   1G huge page size is being used on this host:

   .. code-block:: bash

      # assign 10x 1G huge pages on processor/numa-node 0 on controller-0 to applications
      ~(keystone_admin)$ system host-memory-modify -f application -1G 10 controller-0 0

      # assign 10x 1G huge pages on processor/numa-node 1 on controller-0 to applications
      ~(keystone_admin)$ system host-memory-modify -f application -1G 10 controller-0 1

.. note::

   After controller-0 is unlocked, changing ``vswitch_type`` requires
   locking and unlocking controller-0 to apply the change.
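
   For example, the following sequence (with an illustrative
   ``<vswitch-type>`` placeholder) applies such a change:

   .. code-block:: bash

      ~(keystone_admin)$ system host-lock controller-0
      ~(keystone_admin)$ system modify --vswitch_type <vswitch-type>
      ~(keystone_admin)$ system host-unlock controller-0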

.. end-config-controller-0-OS-vswitch-dx




.. begin-config-controller-0-OS-add-fs-dx

.. note::

   An 'instances' filesystem and a 'nova-local' volume group cannot
   exist at the same time; configure only one of the two.

Add an 'instances' filesystem:

.. code-block:: bash

   ~(keystone_admin)$ export NODE=controller-0

   # Create ‘instances’ filesystem
   ~(keystone_admin)$ system host-fs-add ${NODE} instances=<size>

Or add a 'nova-local' volume group:

.. code-block:: bash

   ~(keystone_admin)$ export NODE=controller-0

   # Create ‘nova-local’ local volume group
   ~(keystone_admin)$ system host-lvg-add ${NODE} nova-local

   # Get the UUID of an unused disk to be added to the ‘nova-local’ volume
   # group. CEPH OSD disks can NOT be used.
   # List host’s disks and take note of UUID of disk to be used
   ~(keystone_admin)$ system host-disk-list ${NODE}

   # Add the unused disk to the ‘nova-local’ volume group
   ~(keystone_admin)$ system host-pv-add ${NODE} nova-local <DISK_UUID>
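
You can confirm the result before proceeding; ``host-fs-list`` shows the
host filesystems and ``host-lvg-list`` shows the local volume groups:

.. code-block:: bash

   ~(keystone_admin)$ system host-fs-list ${NODE}
   ~(keystone_admin)$ system host-lvg-list ${NODE}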

.. end-config-controller-0-OS-add-fs-dx



.. begin-config-controller-0-OS-data-interface-dx

.. code-block:: bash

   ~(keystone_admin)$ export NODE=controller-0

   # List inventoried host’s ports and identify ports to be used as ‘data’ interfaces,
   # based on displayed linux port name, pci address and device type.
   ~(keystone_admin)$ system host-port-list ${NODE}

   # List host’s auto-configured ‘ethernet’ interfaces,
   # find the interfaces corresponding to the ports identified in previous step, and
   # take note of their UUID
   ~(keystone_admin)$ system host-if-list -a ${NODE}

   # Modify configuration for these interfaces
   # Configuring them as ‘data’ class interfaces, MTU of 1500 and named data#
   ~(keystone_admin)$ system host-if-modify -m 1500 -n data0 -c data ${NODE} <data0-if-uuid>
   ~(keystone_admin)$ system host-if-modify -m 1500 -n data1 -c data ${NODE} <data1-if-uuid>

   # Create Data Networks that vswitch 'data' interfaces will be connected to
   ~(keystone_admin)$ DATANET0='datanet0'
   ~(keystone_admin)$ DATANET1='datanet1'
   ~(keystone_admin)$ system datanetwork-add ${DATANET0} vlan
   ~(keystone_admin)$ system datanetwork-add ${DATANET1} vlan

   # Assign Data Networks to Data Interfaces
   ~(keystone_admin)$ system interface-datanetwork-assign ${NODE} <data0-if-uuid> ${DATANET0}
   ~(keystone_admin)$ system interface-datanetwork-assign ${NODE} <data1-if-uuid> ${DATANET1}
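
To confirm the configuration, list the data networks and their interface
assignments:

.. code-block:: bash

   ~(keystone_admin)$ system datanetwork-list
   ~(keystone_admin)$ system interface-datanetwork-list ${NODE}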

.. end-config-controller-0-OS-data-interface-dx



.. begin-increase-cores-controller-1-dx

Increase the number of platform cores with the following commands:

.. code-block:: bash

   # assign 6 cores on processor/numa-node 0 on controller-1 to platform
   ~(keystone_admin)$ system host-cpu-modify -f platform -p0 6 controller-1

.. end-increase-cores-controller-1-dx



.. begin-config-vswitch-controller-1-dx

If using the |OVS-DPDK| vSwitch, the default recommendation for an
|AIO|-controller is to use a single core for the |OVS-DPDK| vSwitch.
This should have been configured automatically; if not, run the
following command.

.. code-block:: bash

   # assign 1 core on processor/numa-node 0 on controller-1 to vswitch
   ~(keystone_admin)$ system host-cpu-modify -f vswitch -p0 1 controller-1

When using |OVS-DPDK|, configure 1G of huge pages for vSwitch memory on
each |NUMA| node on the host. It is recommended
to configure 1x 1G huge page (-1G 1) for vSwitch memory on each |NUMA|
node on the host.

However, due to a limitation with Kubernetes, only a single huge page
size is supported on any one host. If your application |VMs| require 2M
huge pages, then configure 500x 2M huge pages (-2M 500) for vSwitch
memory on each |NUMA| node on the host.

.. code-block:: bash

   # assign 1x 1G huge page on processor/numa-node 0 on controller-1 to vswitch
   ~(keystone_admin)$ system host-memory-modify -f vswitch -1G 1 controller-1 0

   # assign 1x 1G huge page on processor/numa-node 1 on controller-1 to vswitch
   ~(keystone_admin)$ system host-memory-modify -f vswitch -1G 1 controller-1 1

.. important::

   |VMs| created in an |OVS-DPDK| environment must be configured to use
   huge pages to enable networking and must use a flavor with property:
   ``hw:mem_page_size=large``.

   To configure huge pages for |VMs| in an |OVS-DPDK| environment on
   this host, assuming a 1G huge page size is being used on this host,
   run the following commands:

   .. code-block:: bash

      # assign 10x 1G huge pages on processor/numa-node 0 on controller-1 to applications
      ~(keystone_admin)$ system host-memory-modify -f application -1G 10 controller-1 0

      # assign 10x 1G huge pages on processor/numa-node 1 on controller-1 to applications
      ~(keystone_admin)$ system host-memory-modify -f application -1G 10 controller-1 1

.. end-config-vswitch-controller-1-dx



.. begin-config-fs-controller-1-dx

.. note::

   An 'instances' filesystem and a 'nova-local' volume group cannot
   exist at the same time; configure only one of the two.

* Add an 'instances' filesystem:

.. code-block:: bash

   ~(keystone_admin)$ export NODE=controller-1

   # Create ‘instances’ filesystem
   ~(keystone_admin)$ system host-fs-add ${NODE} instances=<size>

**Or**

* Add a 'nova-local' volume group:

.. code-block:: bash

   ~(keystone_admin)$ export NODE=controller-1

   # Create ‘nova-local’ local volume group
   ~(keystone_admin)$ system host-lvg-add ${NODE} nova-local

   # Get the UUID of an unused disk to be added to the ‘nova-local’ volume
   # group. CEPH OSD disks can NOT be used.
   # List host’s disks and take note of UUID of disk to be used
   ~(keystone_admin)$ system host-disk-list ${NODE}

   # Add the unused disk to the ‘nova-local’ volume group
   ~(keystone_admin)$ system host-pv-add ${NODE} nova-local <DISK_UUID>

.. end-config-fs-controller-1-dx



.. begin-config-data-interfaces-controller-1-dx

.. code-block:: bash

   ~(keystone_admin)$ export NODE=controller-1

   # List inventoried host's ports and identify ports to be used as 'data' interfaces,
   # based on displayed linux port name, pci address and device type.
   ~(keystone_admin)$ system host-port-list ${NODE}

   # List host’s auto-configured ‘ethernet’ interfaces,
   # find the interfaces corresponding to the ports identified in previous step, and
   # take note of their UUID
   ~(keystone_admin)$ system host-if-list -a ${NODE}

   # Modify configuration for these interfaces
   # Configuring them as 'data' class interfaces, MTU of 1500 and named data#
   ~(keystone_admin)$ system host-if-modify -m 1500 -n data0 -c data ${NODE} <data0-if-uuid>
   ~(keystone_admin)$ system host-if-modify -m 1500 -n data1 -c data ${NODE} <data1-if-uuid>

   # Use the Data Networks ('datanet0', 'datanet1') created earlier when
   # configuring controller-0
   ~(keystone_admin)$ DATANET0='datanet0'
   ~(keystone_admin)$ DATANET1='datanet1'

   # Assign Data Networks to Data Interfaces
   ~(keystone_admin)$ system interface-datanetwork-assign ${NODE} <data0-if-uuid> ${DATANET0}
   ~(keystone_admin)$ system interface-datanetwork-assign ${NODE} <data1-if-uuid> ${DATANET1}

.. end-config-data-interfaces-controller-1-dx



.. begin-os-specific-host-config-data-dx

.. code-block:: bash

   # Execute the following lines first with
   ~(keystone_admin)$ export NODE=worker-0

   # and then repeat them with
   ~(keystone_admin)$ export NODE=worker-1

   # List inventoried host’s ports and identify ports to be used as ‘data’ interfaces,
   # based on displayed linux port name, pci address and device type.
   ~(keystone_admin)$ system host-port-list ${NODE}

   # List host’s auto-configured ‘ethernet’ interfaces,
   # find the interfaces corresponding to the ports identified in previous step, and
   # take note of their UUID
   ~(keystone_admin)$ system host-if-list -a ${NODE}

   # Modify configuration for these interfaces
   # Configuring them as ‘data’ class interfaces, MTU of 1500 and named data#
   ~(keystone_admin)$ system host-if-modify -m 1500 -n data0 -c data ${NODE} <data0-if-uuid>
   ~(keystone_admin)$ system host-if-modify -m 1500 -n data1 -c data ${NODE} <data1-if-uuid>

   # Create Data Networks that vswitch 'data' interfaces will be connected to.
   # Data Networks only need to be created once; skip the two
   # 'datanetwork-add' commands when repeating for worker-1.
   ~(keystone_admin)$ DATANET0='datanet0'
   ~(keystone_admin)$ DATANET1='datanet1'
   ~(keystone_admin)$ system datanetwork-add ${DATANET0} vlan
   ~(keystone_admin)$ system datanetwork-add ${DATANET1} vlan

   # Assign Data Networks to Data Interfaces
   ~(keystone_admin)$ system interface-datanetwork-assign ${NODE} <data0-if-uuid> ${DATANET0}
   ~(keystone_admin)$ system interface-datanetwork-assign ${NODE} <data1-if-uuid> ${DATANET1}

.. end-os-specific-host-config-data-dx


.. begin-os-specific-host-config-labels-dx

.. parsed-literal::

   for NODE in worker-0 worker-1; do
      system host-label-assign $NODE openstack-compute-node=enabled
      kubectl taint nodes $NODE openstack-compute-node:NoSchedule
      system host-label-assign $NODE |vswitch-label|
      system host-label-assign $NODE sriov=enabled
   done
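
To confirm the labels were applied, list them per node with sysinv and
cross-check what Kubernetes reports:

.. code-block:: bash

   for NODE in worker-0 worker-1; do
      system host-label-list $NODE
   done
   kubectl get nodes --show-labels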

.. end-os-specific-host-config-labels-dx


.. begin-os-specific-host-config-vswitch-dx

If using the |OVS-DPDK| vSwitch, the default recommendation for a
worker node is to use two cores on |NUMA| node 0 for the |OVS-DPDK|
vSwitch; physical |NICs| are typically on the first |NUMA| node. This
should have been configured automatically; if not, run the following
commands.

.. code-block:: bash

   for NODE in worker-0 worker-1; do
      # assign 2 cores on processor/numa-node 0 on each worker node to vswitch
      system host-cpu-modify -f vswitch -p0 2 $NODE
   done

When using |OVS-DPDK|, configure 1G of huge pages for vSwitch
memory on each |NUMA| node on the host. It is recommended to
configure 1x 1G huge page (-1G 1) for vSwitch memory on each
|NUMA| node on the host.

However, due to a limitation with Kubernetes, only a single huge
page size is supported on any one host. If your application |VMs|
require 2M huge pages, then configure 500x 2M huge pages (-2M
500) for vSwitch memory on each |NUMA| node on the host.

.. code-block:: bash

   for NODE in worker-0 worker-1; do
      # assign 1x 1G huge page on processor/numa-node 0 on each worker node to vswitch
      system host-memory-modify -f vswitch -1G 1 $NODE 0
      # assign 1x 1G huge page on processor/numa-node 1 on each worker node to vswitch
      system host-memory-modify -f vswitch -1G 1 $NODE 1
   done

.. important::

   |VMs| created in an |OVS-DPDK| environment must be configured
   to use huge pages to enable networking and must use a flavor
   with property: ``hw:mem_page_size=large``.

   To configure huge pages for |VMs| in an |OVS-DPDK|
   environment on these hosts, assuming a 1G huge page size is
   being used, run the following commands:

   .. code-block:: bash

      for NODE in worker-0 worker-1; do
         # assign 10x 1G huge pages on processor/numa-node 0 on each worker node to applications
         system host-memory-modify -f application -1G 10 $NODE 0
         # assign 10x 1G huge pages on processor/numa-node 1 on each worker node to applications
         system host-memory-modify -f application -1G 10 $NODE 1
      done

.. end-os-specific-host-config-vswitch-dx