
Use Kubernetes CPU Manager Static Policy

You can launch a container pinned to a particular set of CPU cores using a Kubernetes CPU manager static policy.

You will need to enable this CPU management mechanism on the worker node before applying a policy; a sketch of one way to do this follows.
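If the static CPU manager policy is not already enabled on the target host, enable it before proceeding. The following is a minimal sketch assuming a StarlingX deployment, where the policy is selected with the kube-cpu-mgr-policy host label; host labels can only be changed while the host is locked, and the host name worker-1 is an assumption:

  % system host-lock worker-1
  % system host-label-assign worker-1 kube-cpu-mgr-policy=static
  % system host-unlock worker-1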

  1. Define a container running a CPU stress command.

    Note

    • The pod will be pinned to the allocated set of CPUs on the host and will have exclusive use of those CPUs if its CPU request (<resources:requests:cpu>) is equal to its CPU limit (<resources:limits:cpu>).
    • Memory resources must also be specified, with the request equal to the limit, for guaranteed resource allocation.
    • Processes within the pod can float across the set of CPUs allocated to the pod, unless the application in the pod explicitly pins them to a subset of the CPUs.

    For example:

    % cat <<EOF > stress-cpu-pinned.yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: stress-ng-cpu
    spec:
      containers:
      - name: stress-ng-app
        image: alexeiled/stress-ng
        imagePullPolicy: IfNotPresent
        command: ["/stress-ng"]
        args: ["--cpu", "10", "--metrics-brief", "-v"]
        resources:
          requests:
            cpu: 2
            memory: "2Gi"
          limits:
            cpu: 2
            memory: "2Gi"
      nodeSelector:
        kubernetes.io/hostname: worker-1
    EOF

    You will likely need to adjust some of the values shown above to reflect your deployment configuration. For example, on an AIO-SX or AIO-DX system, worker-1 would become controller-0 or controller-1.

    The significant addition to this definition in support of CPU pinning is the resources section, which sets both the CPU request and the CPU limit to 2. Because the requests equal the limits (and the CPU values are integers), the pod runs in the Guaranteed QoS class and is granted exclusive use of two CPU cores; a counter-example is sketched below.
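    For contrast, a minimal sketch of a resources section where the CPU request does not equal the limit; such a pod falls into the Burstable QoS class and the static policy will not grant it exclusive cores:

    resources:
      requests:
        cpu: 1           # request != limit: Burstable QoS, no exclusive cores
        memory: "2Gi"
      limits:
        cpu: 2
        memory: "2Gi"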

  2. Apply the definition.

    % kubectl apply -f stress-cpu-pinned.yaml
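
    To confirm that the pod was scheduled to the expected node and is Running (standard kubectl, shown for convenience):

    % kubectl get pod stress-ng-cpu -o wide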

    You can SSH to the worker node and run top, then type '1' to see CPU utilization per core.
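    Alternatively, you can check the CPU affinity from inside the pod. This is a sketch assuming the container image ships a BusyBox-style grep; Cpus_allowed_list reports the cores the pod's main process may run on:

    % kubectl exec stress-ng-cpu -- grep Cpus_allowed_list /proc/1/status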

  3. Describe the pod or node to see the CPU Requests, CPU Limits, and to confirm that the pod is in the Guaranteed QoS class.

    For example:

    % kubectl describe node worker-1
    Namespace                  Name           CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
    ---------                  ----           ------------  ----------  ---------------  -------------  ---
    default                    stress-ng-cpu  2 (15%)       2 (15%)     2Gi (7%)         2Gi (7%)       9m31s
    
    % kubectl describe pod stress-ng-cpu
    ...
    QoS Class: Guaranteed
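
    The QoS class can also be extracted directly with a JSONPath query; the expected output for this pod is shown:

    % kubectl get pod stress-ng-cpu -o jsonpath='{.status.qosClass}'
    Guaranteed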
  4. Delete the container.

    % kubectl delete -f stress-cpu-pinned.yaml