
Install Kubectl and Helm Clients Directly on a Host

You can use kubectl and helm to interact with a controller from a remote system.

Commands that reference local files or that require a shell are more easily used from clients running directly on a remote workstation.

Complete the following steps to install kubectl and helm on a remote system.

The following procedure shows how to configure the kubectl and helm clients directly on a remote host, for an admin user with the cluster-admin cluster role. If you are using a non-admin user, such as one with role privileges only within a private namespace, the procedure is the same; however, additional configuration is required in order to use helm.

  1. On the controller, if a kubernetes-admin service account is not already available, create one.
    1. Create the kubernetes-admin service account in the kube-system namespace and bind the cluster-admin ClusterRole to it with a ClusterRoleBinding.

      % cat <<EOF > admin-login.yaml
      apiVersion: v1
      kind: ServiceAccount
      metadata:
        name: kubernetes-admin
        namespace: kube-system
      ---
      apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRoleBinding
      metadata:
        name: kubernetes-admin
      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: ClusterRole
        name: cluster-admin
      subjects:
      - kind: ServiceAccount
        name: kubernetes-admin
        namespace: kube-system
      EOF
      % kubectl apply -f admin-login.yaml
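
      Optionally, confirm that both resources were created; these standard kubectl queries are a quick check and not required by the procedure:

      % kubectl -n kube-system get serviceaccount kubernetes-admin
      % kubectl get clusterrolebinding kubernetes-admin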
    2. Retrieve the secret token.

      ~(keystone_admin)]$ TOKEN_DATA=$(kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep kubernetes-admin | awk '{print $1}') | grep "token:" | awk '{print $2}')
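
      Note

      On Kubernetes v1.24 and later, token Secrets are no longer created automatically for service accounts, so the command above may return nothing. In that case, a short-lived token can be requested directly; this is a minimal sketch assuming kubectl v1.24 or later on the controller:

      ~(keystone_admin)]$ TOKEN_DATA=$(kubectl -n kube-system create token kubernetes-admin)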
  2. On a remote workstation, install the kubectl client. Go to the following link: https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/.
    1. Install the kubectl client CLI (for example, on an Ubuntu host).

      % sudo apt-get update
      % sudo apt-get install -y apt-transport-https
      % curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | \
      sudo apt-key add -
      % echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | \
      sudo tee -a /etc/apt/sources.list.d/kubernetes.list
      % sudo apt-get update
      % sudo apt-get install -y kubectl
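
      As a quick sanity check, verify the installed client; kubectl version --client reports the client version without contacting any cluster:

      % kubectl version --client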
    2. Set up the local configuration and context.

      Note

      In order for your remote host to trust the certificate used by the K8S API, you must ensure that the k8s_root_ca_cert specified at install time is a certificate trusted by your host. Follow the instructions for adding a trusted certificate for the operating system distribution of your particular host.

      If you did not specify a k8s_root_ca_cert at install time, then specify --insecure-skip-tls-verify, as shown below.

      The following example configures the default ~/.kube/config. See the following reference: https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/. Use the cluster's OAM floating IP address for ${OAM_IP}.

      % kubectl config set-cluster mycluster --server=https://${OAM_IP}:6443 \
      --insecure-skip-tls-verify
      % kubectl config set-credentials kubernetes-admin@mycluster --token=$TOKEN_DATA
      % kubectl config set-context kubernetes-admin@mycluster --cluster=mycluster \
      --user kubernetes-admin@mycluster --namespace=default
      % kubectl config use-context kubernetes-admin@mycluster

      $TOKEN_DATA is the token retrieved in step 1.
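
      To confirm the resulting configuration, the standard kubectl config subcommands can be used; --minify limits the output to the current context:

      % kubectl config get-contexts
      % kubectl config view --minify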

    3. Test remote kubectl access.

      % kubectl get nodes -o wide
      NAME           STATUS   ROLES    AGE    VERSION   INTERNAL-IP       EXTERNAL-IP   OS-IMAGE ...
      controller-0   Ready    master   15h    v1.12.3   192.168.204.3     <none>        CentOS L ...
      controller-1   Ready    master   129m   v1.12.3   192.168.204.4     <none>        CentOS L ...
      worker-0       Ready    <none>   99m    v1.12.3   192.168.204.201   <none>        CentOS L ...
      worker-1       Ready    <none>   99m    v1.12.3   192.168.204.202   <none>        CentOS L ...
      %
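
      With the client working, commands that reference local files or open an interactive shell, the motivation noted earlier, can now be run directly from the workstation. The pod and file names below are hypothetical placeholders:

      % kubectl exec -it <pod-name> -- /bin/bash
      % kubectl apply -f ./local-manifest.yaml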
  3. On the remote workstation, install the helm client (for example, on an Ubuntu host).
    1. Install helm. See the following reference: https://helm.sh/docs/intro/install/. Helm accesses the Kubernetes cluster as configured in the previous step, using the default ~/.kube/config.

      % wget https://get.helm.sh/helm-v3.2.1-linux-amd64.tar.gz
      % tar xvf helm-v3.2.1-linux-amd64.tar.gz
      % sudo cp linux-amd64/helm /usr/local/bin
    2. Verify that helm installed correctly.

      % helm version
      version.BuildInfo{Version:"v3.2.1", GitCommit:"fe51cd1e31e6a202cba7dead9552a6d418ded79a", GitTreeState:"clean", GoVersion:"go1.13.10"}
    3. Run the following commands:

      % helm repo add bitnami https://charts.bitnami.com/bitnami
      % helm repo update
      % helm repo list
      % helm search repo
      % helm install wordpress bitnami/wordpress
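
    4. Optionally, verify and clean up the test release. These are standard helm v3 subcommands, and wordpress is the release name used in the install command above:

      % helm status wordpress
      % helm uninstall wordpress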

Configure Container-backed Remote CLIs and Clients <security-configure-container-backed-remote-clis-and-clients>

Using Container-backed Remote CLIs and Clients <using-container-backed-remote-clis-and-clients>

Configure Remote Helm v2 Client <configure-remote-helm-client-for-non-admin-users>