Container version updates

Container version updates for r8.
Add partial vran updates.
Resolve merge conflict.

Signed-off-by: Ron Stone <ronald.stone@windriver.com>
Change-Id: Iee462f3d3a9c62a5e526f12ab65cb7827d19e00b
Ron Stone 2022-12-15 09:53:52 -05:00
parent 06c21396f0
commit 28e283b1c3
6 changed files with 31 additions and 31 deletions

@@ -15,4 +15,4 @@
 additional_local_registry_images:
   - quay.io/external_storage/rbd-provisioner:v2.1.1-k8s1.11
-  - docker.io/starlingx/ceph-config-helper:v1.15.0
+  - docker.io/starlingx/ceph-config-helper:ubuntu_bionic-20220802
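For context, these registry images appear to come from a bootstrap overrides example; assuming that file is the usual ``localhost.yml`` used at Ansible bootstrap time (the file name is not visible in this diff), the updated block would read roughly as follows:

.. code-block:: none

   # Sketch only: assumed location is the $HOME/localhost.yml bootstrap overrides;
   # the image tags are the ones shown in the hunk above.
   additional_local_registry_images:
     - quay.io/external_storage/rbd-provisioner:v2.1.1-k8s1.11
     - docker.io/starlingx/ceph-config-helper:ubuntu_bionic-20220802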

@@ -234,7 +234,7 @@ You can install |O-RAN| O2 application on |prod| from the command line.
       EOF

    To deploy other versions of an image required for a quick solution, to
-   have early access to the features (eg. oranscinf/pti-o2imsdms:2.0.1), and
+   have early access to the features (eg. oranscinf/pti-o2imsdms:2.0.0), and
    to authenticate images that are hosted by a private registry, follow the
    steps below:
@@ -260,7 +260,7 @@ You can install |O-RAN| O2 application on |prod| from the command line.
       serviceaccountname: admin-oran-o2
       images:
         tags:
-          o2service: ${O2SERVICE_IMAGE_REG}/docker.io/oranscinf/pti-o2imsdms:2.0.1
+          o2service: ${O2SERVICE_IMAGE_REG}/docker.io/oranscinf/pti-o2imsdms:2.0.0
           postgres: ${O2SERVICE_IMAGE_REG}/docker.io/library/postgres:9.6
           redis: ${O2SERVICE_IMAGE_REG}/docker.io/library/redis:alpine
         pullPolicy: IfNotPresent
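For readers wondering how such override values are consumed, one possible flow is sketched below; the application, chart, and namespace names (``oran-o2``) and the override file name are assumptions, not taken from this commit.

.. code-block:: none

   # Sketch only: "oran-o2" app/chart/namespace names and the file name are assumed.
   ~(keystone_admin)]$ system helm-override-update --values o2service-override.yaml oran-o2 oran-o2 oran-o2
   ~(keystone_admin)]$ system application-apply oran-o2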

@@ -41,7 +41,7 @@ subcloud, the subcloud installation has these phases:

 .. _installing-a-subcloud-using-redfish-platform-management-service-ul-g5j-3f3-qjb:

 - The docker **rvmc** image needs to be added to the System Controller
-  bootstrap override file, ``docker.io/starlingx/rvmc:stx.5.0-v1.0.0``.
+  bootstrap override file, ``docker.io/starlingx/rvmc:stx.8.0-v1.0.1``.

 - A new system CLI option ``--active`` is added to the
   :command:`load-import` command to allow the import into the
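If it helps to see where the new tag lands, a hedged sketch of the System Controller bootstrap override entry follows; the ``additional_local_registry_images`` parameter name is an assumption based on the usual bootstrap override layout, not something shown in this diff.

.. code-block:: none

   # Assumed parameter name; only the rvmc tag below comes from the hunk above.
   additional_local_registry_images:
     - docker.io/starlingx/rvmc:stx.8.0-v1.0.1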

@@ -246,7 +246,7 @@ CLIs and Clients for an admin user with cluster-admin clusterrole.

 .. parsed-literal::

-   $ ./configure_client.sh -t platform -r admin_openrc.sh -k admin-kubeconfig -w $HOME/remote_cli_wd -p <wind-river-registry-url>/docker.io/starlingx/stx-platformclients:stx.8.0-v1.5.9
+   $ ./configure_client.sh -t platform -r admin_openrc.sh -k admin-kubeconfig -w $HOME/remote_cli_wd -p |registry-url|/starlingx/stx-platformclients:stx.8.0-v1.5.9

 If you specify repositories that require authentication, as shown
 above, you must first perform a :command:`docker login` to that
@@ -301,7 +301,7 @@ CLIs and Clients for an admin user with cluster-admin clusterrole.

 .. parsed-literal::

-   $ ./configure_client.sh -t platform -r admin_openrc.sh -k admin-kubeconfig -w $HOME/remote_cli_wd -p <wind-river-registry-url>/docker.io/starlingx/stx-platformclients:stx.8.0-v1.5.9
+   $ ./configure_client.sh -t platform -r admin-openrc.sh -k admin-kubeconfig -w $HOME/remote_cli_wd -p |registry-url|/starlingx/stx-platformclients:stx.8.0-v1.5.9

 If you specify repositories that require authentication, you must first
 perform a :command:`docker login` to that repository before using
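Both variants above note that authenticated repositories need a prior :command:`docker login`; in practice that step looks like the following, with the registry URL as a placeholder:

.. code-block:: none

   # Log in once before running configure_client.sh against an authenticated registry.
   $ sudo docker login <registry-url>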

@@ -180,7 +180,7 @@ and clients for a non-admin user.

    passed as arguments to the remote |CLI| commands need to be in this
    directory in order for the container to access the files. The
    default value is the directory from which the
-   :command:`configure\_client.sh` command was run.
+   :command:`configure_client.sh` command was run.

 **-p**
    Override the container image for the platform |CLI| and clients.
@@ -192,7 +192,7 @@ and clients for a non-admin user.

 .. parsed-literal::

-   $ ./configure_client.sh -t platform -r my-openrc.sh -k user-kubeconfig -w $HOME/remote_cli_wd -p |registry-url|/starlingx/stx-platformclients:stx.5.0-v1.4.3
+   $ ./configure_client.sh -t platform -r my-openrc.sh -k user-kubeconfig -w $HOME/remote_cli_wd -p |registry-url|/starlingx/stx-platformclients:stx.8.0-v1.5.9

 If you specify repositories that require authentication, you must
 perform a :command:`docker login` to that repository before using
@@ -232,4 +232,4 @@ See :ref:`Using Container-backed Remote CLIs and Clients

 * :ref:`Installing Kubectl and Helm Clients Directly on a Host
   <kubernetes-user-tutorials-installing-kubectl-and-helm-clients-directly-on-a-host>`
 * :ref:`Configuring Remote Helm Client <configuring-remote-helm-client>`
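After :command:`configure_client.sh` completes, a typical session from the working directory looks roughly like the sketch below; the name of the generated environment script (``remote_client_platform.sh``) is recalled from the remote-CLI workflow rather than taken from this diff.

.. code-block:: none

   # Hedged sketch of using the container-backed remote CLIs once configured.
   $ cd $HOME/remote_cli_wd
   $ source remote_client_platform.sh   # assumed name of the generated env script
   $ system host-list
   $ kubectl get pods -A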

@@ -5,7 +5,7 @@ vRAN Tools
 ==========

 The following open-source |vRAN| tools are delivered in the following container
-image, ``docker.io/starlingx/stx-centos-tools-dev:stx.7.0-v1.0.1``:
+image, ``docker.io/starlingx/stx-debian-tools-dev:stx.8.0-v1.0.3``:

 - ``dmidecode``
@@ -32,7 +32,7 @@ a Kubernetes pod and ``exec`` into a shell in the container in order to execute
 the commands. The Kubernetes pod must run in a privileged and host context,
 such that the above tools provide information on resources in the host context.

-The suggested yaml manifest to launch the ``stx-centos-tools-dev`` container is
+The suggested yaml manifest to launch the ``stx-debian-tools-dev`` container is
 as follows:

 .. code-block:: none
@@ -40,20 +40,20 @@ as follows:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
-     name: stx-centos-tools
+     name: stx-debian-tools
    spec:
      replicas: 1
      selector:
        matchLabels:
-         app: stx-centos-tools
+         app: stx-debian-tools
      template:
        metadata:
          labels:
-           app: stx-centos-tools
+           app: stx-debian-tools
        spec:
          containers:
-         - name: stx-centos-tools
-           image: docker.io/starlingx/stx-centos-tools-dev:stx.7.0-v1.0.1
+         - name: stx-debian-tools
+           image: docker.io/starlingx/stx-debian-tools-dev:stx.8.0-v1.0.3
            imagePullPolicy: Always
            stdin: true
            tty: true
@@ -79,19 +79,19 @@ For example:

 .. code-block:: none

    # Create pod
-   ~(keystone_admin)] $ kubectl apply -f stx-centos-tools.yaml
+   ~(keystone_admin)] $ kubectl apply -f stx-debian-tools.yaml

    # Get the running pods
    ~(keystone_admin)] $ kubectl get pods
    NAME               READY   STATUS    RESTARTS   AGE
-   stx-centos-tools   1/1     Running   0          6s
+   stx-debian-tools   1/1     Running   0          6s

 Then ``exec`` into shell in container:

 .. code-block:: none

    # Attach to pod
-   ~(keystone_admin)] $ kubectl exec -it stx-centos-tools -- bash
+   ~(keystone_admin)] $ kubectl exec -it stx-debian-tools -- bash
    [root@controller-0 /]#
    [root@controller-0 /]#
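From that shell, the open-source tools listed at the top of this file run directly against the host context; a brief hedged example, with output omitted:

.. code-block:: none

   # dmidecode is one of the tools delivered in the stx-debian-tools-dev image.
   [root@controller-0 /]# dmidecode -t system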
@@ -99,7 +99,7 @@ Then ``exec`` into shell in container:

 Build, deploy and run non-open-source tools
 -------------------------------------------

-The ``docker.io/starlingx/stx-centos-tools-dev:stx.7.0-v1.0.1`` container image
+The ``docker.io/starlingx/stx-debian-tools-dev:stx.8.0-v1.0.3`` container image
 also contains the |prod| development tools.

 Using this container as your base image, this enables the |prod| user to build
@@ -123,7 +123,7 @@ Running on Kubernetes:

       # Creating the Dockerfile
       cat << EOF > Dockerfile
-      FROM docker.io/starlingx/stx-centos-tools-dev:stx.7.0-v1.0.1
+      FROM docker.io/starlingx/stx-debian-tools-dev:stx.7.0-v1.0.1

       USER root
       WORKDIR /root
@@ -139,20 +139,20 @@ Running on Kubernetes:
       EOF

       # Building the image with Quartzville
-      sudo docker build -t stx-centos-tools-quartzville .
+      sudo docker build -t stx-debian-tools-quartzville .

       # Create the yml for Kubernetes; note the additional mounting of the host kernel headers from the host
-      cat << EOF > stx-centos-tools-quartzville.yml
+      cat << EOF > stx-debian-tools-quartzville.yml
       apiVersion: v1
       kind: Pod
       metadata:
-        name: stx-centos-tools-quartzville
+        name: stx-debian-tools-quartzville
       spec:
         hostNetwork: true
         hostPID: true
         containers:
-        - name: stx-centos-tools-quartzville
-          image: registry.local:9001/public/stx-centos-tools-quartzville
+        - name: stx-debian-tools-quartzville
+          image: registry.local:9001/public/stx-debian-tools-quartzville
           imagePullPolicy: Always
           stdin: true
           tty: true
@@ -184,16 +184,16 @@ Running on Kubernetes:

       sudo docker login -u admin -p <admin-keystone-user-password> registry.local:9001

       # Tagging for local registry
-      sudo docker tag stx-centos-tools-quartzville:latest registry.local:9001/public/stx-centos-tools-quartzville:latest
+      sudo docker tag stx-debian-tools-quartzville:latest registry.local:9001/public/stx-debian-tools-quartzville:latest

       # Push image to local registry
-      sudo docker push registry.local:9001/public/stx-centos-tools-quartzville:latest
+      sudo docker push registry.local:9001/public/stx-debian-tools-quartzville:latest

       # Create pod
-      kubectl apply -f stx-centos-tools-quartzville.yml
+      kubectl apply -f stx-debian-tools-quartzville.yml

       # Attach to pod
-      kubectl exec -it stx-centos-tools-quartzville -- scl enable devtoolset-9 /bin/bash
+      kubectl exec -it stx-debian-tools-quartzville -- scl enable devtoolset-9 /bin/bash

       # < execute testing with quartzville tool >

 -------
@@ -211,4 +211,4 @@ commands to uninstall Quartzville driver:
       exit

       # Delete the quartzville pod
-      kubectl delete pods stx-centos-tools-quartzville
+      kubectl delete pods stx-debian-tools-quartzville
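If the locally built image is no longer needed after the pod is deleted, the local tags can be removed as well; a small hedged addition using the same names as above:

.. code-block:: none

   # Optional cleanup of the local image tags created earlier.
   sudo docker image rm stx-debian-tools-quartzville:latest
   sudo docker image rm registry.local:9001/public/stx-debian-tools-quartzville:latest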