How to change log level of Kubernetes components

Masato Naka
5 min read · Mar 18, 2023


Introduction

In this post, I’ll share how to change the log level of Kubernetes components. This is a very useful skill, especially for debugging control plane components in production or for studying how each component works in a dev or local cluster.

I’ll use kind to run a local Kubernetes cluster.

Create a kind Cluster

You can create a local Kubernetes cluster with the following command:

kind create cluster

Now you can access the cluster with kubectl:

kubectl cluster-info
Kubernetes control plane is running at https://127.0.0.1:59680
CoreDNS is running at https://127.0.0.1:59680/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
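
kind runs the entire node as a single Docker container named kind-control-plane, which we’ll exec into later to reach the kubelet. You can confirm it’s there with docker ps:

docker ps --filter name=kind-control-plane --format '{{.Names}}'
kind-control-plane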

Create ClusterRole and RoleBinding

To change the log level of Kubernetes components, we need the following permissions:

  • update permission on the nodes/proxy resource, for the kubelet
  • put permission on the /debug/flags/v non-resource URL, for the other components

First, we create a ClusterRole that defines the permissions above (we’ll verify them after creating the binding):

cat << EOT | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: edit-debug-flags-v
rules:
- apiGroups:
  - ""
  resources:
  - nodes/proxy
  verbs:
  - update
- nonResourceURLs:
  - /debug/flags/v
  verbs:
  - put
EOT

Then create a ClusterRoleBinding to bind the ClusterRole to the default service account (in the default namespace) for this walkthrough:

cat << EOT | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: edit-debug-flags-v
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: edit-debug-flags-v
subjects:
- kind: ServiceAccount
  name: default
  namespace: default
EOT
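
You can check that the permissions are in place by impersonating the service account:

kubectl auth can-i update nodes/proxy --as=system:serviceaccount:default:default
yes
kubectl auth can-i put /debug/flags/v --as=system:serviceaccount:default:default
yes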

Create a token for the service account

Let’s create a token for the service account with kubectl create token:

TOKEN=$(kubectl create token default)
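
The token is short-lived (one hour by default). If it expires while you’re experimenting, issue a longer-lived one, e.g.:

TOKEN=$(kubectl create token default --duration=2h)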

Change the log level of each component

API server

Hit the endpoint via curl:

APISERVER=$(kubectl config view -o jsonpath="{.clusters[?(@.name==\"kind-kind\")].cluster.server}")
curl -s -X PUT -d '5' $APISERVER/debug/flags/v --header "Authorization: Bearer $TOKEN" -k
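
The -k flag skips TLS verification, since kind’s API server certificate is signed by the cluster’s own CA. On success, the endpoint returns a short plain-text confirmation (something like “successfully set klog.logging.verbosity to 5”); if the RBAC isn’t set up correctly, you’ll get a Forbidden error instead.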

kube-scheduler

Port-forward to the scheduler pod, then hit the endpoint:

kubectl -n kube-system port-forward kube-scheduler-kind-control-plane 10259:10259
curl -s -X PUT -d '5' https://localhost:10259/debug/flags/v --header "Authorization: Bearer $TOKEN" -k
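
kubectl port-forward blocks the terminal it runs in, so run the curl from a second terminal, or background the forward if you want a single scriptable snippet (a rough sketch):

kubectl -n kube-system port-forward kube-scheduler-kind-control-plane 10259:10259 &
PF_PID=$!
sleep 1  # crude wait for the forward to become ready
curl -s -X PUT -d '5' https://localhost:10259/debug/flags/v --header "Authorization: Bearer $TOKEN" -k
kill $PF_PID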

kubelet

Change the log level via docker exec, hitting the kubelet’s port (10250) from inside the kind node container:

docker exec kind-control-plane curl -s -X PUT -d '5' https://localhost:10250/debug/flags/v --header "Authorization: Bearer $TOKEN" -k
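
Alternatively, you can reach the kubelet through the API server’s node proxy instead of docker exec; this goes through the same nodes/proxy update permission we granted in the ClusterRole. A sketch reusing the APISERVER variable from above (kind’s default node name is kind-control-plane):

curl -s -X PUT -d '5' "$APISERVER/api/v1/nodes/kind-control-plane/proxy/debug/flags/v" --header "Authorization: Bearer $TOKEN" -k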


kube-proxy

The /debug/flags/v handler is only registered on kube-proxy’s metrics server when profiling is enabled, so we first need to set enableProfiling to true in the kube-proxy ConfigMap:

kubectl -n kube-system get configmap kube-proxy -o yaml | sed -e 's/enableProfiling: false/enableProfiling: true/' | kubectl apply -f -

You might see the following warning from kubectl apply, which you can safely ignore:

Warning: resource configmaps/kube-proxy is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.

Then restart the DaemonSet so the pods pick up the new configuration:

kubectl -n kube-system rollout restart daemonset/kube-proxy
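
You can wait for the restart to complete before moving on:

kubectl -n kube-system rollout status daemonset/kube-proxy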

Now we can change the log level after port-forwarding to one of the kube-proxy pods. The metrics port (10249) serves plain HTTP, so no token is needed here:

kubectl port-forward -n kube-system $(kubectl get pod -n kube-system -l k8s-app=kube-proxy -o jsonpath='{.items[0].metadata.name}') 10249:10249
curl -s -X PUT -d '5' http://localhost:10249/debug/flags/v
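
Note that kube-proxy runs as a DaemonSet with one pod per node, and the PUT above only changes the single pod you forwarded to. On a multi-node cluster you could loop over all the pods with the same backgrounding trick as in the scheduler section (a rough sketch; assumes local port 10249 is free):

for pod in $(kubectl get pod -n kube-system -l k8s-app=kube-proxy -o jsonpath='{.items[*].metadata.name}'); do
  kubectl -n kube-system port-forward "$pod" 10249:10249 &
  PF_PID=$!
  sleep 1  # crude wait for the forward to become ready
  curl -s -X PUT -d '5' http://localhost:10249/debug/flags/v
  kill "$PF_PID"
done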

kube-controller-manager

Same as the other components, port-forward first:

kubectl -n kube-system port-forward kube-controller-manager-kind-control-plane 10257:10257

And then change the log level:

curl -s -X PUT -d '5' https://localhost:10257/debug/flags/v --header "Authorization: Bearer $TOKEN" -k

Check logs

Now you can see verbose log output (level 5) from each component:

kube-controller-manager:

kubectl logs kube-controller-manager-kind-control-plane -n kube-system --tail=20

I0318 07:21:45.499107 1 graph_builder.go:635] GraphBuilder process object: coordination.k8s.io/v1/Lease, namespace kube-system, name kube-scheduler, uid fbf542b0-1346-498f-b722-a739f3b7e586, event type update, virtual=false
I0318 07:21:45.499669 1 graph_builder.go:635] GraphBuilder process object: coordination.k8s.io/v1/Lease, namespace kube-system, name kube-controller-manager, uid 9d742167-3b62-42ee-90f2-8cfd03043c06, event type update, virtual=false
I0318 07:21:47.156102 1 graph_builder.go:635] GraphBuilder process object: coordination.k8s.io/v1/Lease, namespace mysql-operator-system, name dfc6d3c2.nakamasato.com, uid e317df01-892b-4d4e-937d-0253b0251860, event type update, virtual=false
I0318 07:21:47.506791 1 graph_builder.go:635] GraphBuilder process object: coordination.k8s.io/v1/Lease, namespace kube-system, name kube-scheduler, uid fbf542b0-1346-498f-b722-a739f3b7e586, event type update, virtual=false
I0318 07:21:47.507858 1 graph_builder.go:635] GraphBuilder process object: coordination.k8s.io/v1/Lease, namespace kube-system, name kube-controller-manager, uid 9d742167-3b62-42ee-90f2-8cfd03043c06, event type update, virtual=false
I0318 07:21:47.508788 1 leaderelection.go:278] successfully renewed lease kube-system/kube-controller-manager
I0318 07:21:49.168698 1 graph_builder.go:635] GraphBuilder process object: coordination.k8s.io/v1/Lease, namespace mysql-operator-system, name dfc6d3c2.nakamasato.com, uid e317df01-892b-4d4e-937d-0253b0251860, event type update, virtual=false
I0318 07:21:49.516145 1 graph_builder.go:635] GraphBuilder process object: coordination.k8s.io/v1/Lease, namespace kube-system, name kube-scheduler, uid fbf542b0-1346-498f-b722-a739f3b7e586, event type update, virtual=false
I0318 07:21:49.520567 1 graph_builder.go:635] GraphBuilder process object: coordination.k8s.io/v1/Lease, namespace kube-system, name kube-controller-manager, uid 9d742167-3b62-42ee-90f2-8cfd03043c06, event type update, virtual=false
I0318 07:21:49.520865 1 leaderelection.go:278] successfully renewed lease kube-system/kube-controller-manager
I0318 07:21:50.465223 1 graph_builder.go:635] GraphBuilder process object: coordination.k8s.io/v1/Lease, namespace kube-node-lease, name kind-control-plane, uid 0a2b39e6-4596-474f-a6a1-1394cdc70990, event type update, virtual=false
I0318 07:21:51.127438 1 pv_controller_base.go:612] resyncing PV controller
I0318 07:21:51.180724 1 graph_builder.go:635] GraphBuilder process object: coordination.k8s.io/v1/Lease, namespace mysql-operator-system, name dfc6d3c2.nakamasato.com, uid e317df01-892b-4d4e-937d-0253b0251860, event type update, virtual=false
I0318 07:21:51.196972 1 reflector.go:281] vendor/k8s.io/client-go/informers/factory.go:134: forcing resync
I0318 07:21:51.525816 1 graph_builder.go:635] GraphBuilder process object: coordination.k8s.io/v1/Lease, namespace kube-system, name kube-scheduler, uid fbf542b0-1346-498f-b722-a739f3b7e586, event type update, virtual=false
I0318 07:21:51.528022 1 graph_builder.go:635] GraphBuilder process object: coordination.k8s.io/v1/Lease, namespace kube-system, name kube-controller-manager, uid 9d742167-3b62-42ee-90f2-8cfd03043c06, event type update, virtual=false
I0318 07:21:51.528167 1 leaderelection.go:278] successfully renewed lease kube-system/kube-controller-manager
I0318 07:21:52.643227 1 pathrecorder.go:241] controller-manager: "/healthz" satisfied by exact match
I0318 07:21:52.643346 1 pathrecorder.go:241] healthz: "/healthz" satisfied by exact match
I0318 07:21:52.643894 1 httplog.go:131] "HTTP" verb="GET" URI="/healthz" latency="1.007834ms" userAgent="kube-probe/1.25" audit-ID="" srcIP="127.0.0.1:59426" resp=200

kube-apiserver:

kubectl logs kube-apiserver-kind-control-plane -n kube-system --tail=20

I0318 07:21:01.667666 1 shared_informer.go:285] caches populated
I0318 07:21:01.667671 1 shared_informer.go:285] caches populated
I0318 07:21:01.667674 1 shared_informer.go:285] caches populated
I0318 07:21:01.667676 1 shared_informer.go:285] caches populated
I0318 07:21:01.667679 1 shared_informer.go:285] caches populated
I0318 07:21:01.667682 1 shared_informer.go:285] caches populated
I0318 07:21:01.667691 1 shared_informer.go:285] caches populated
I0318 07:21:01.667696 1 shared_informer.go:285] caches populated
I0318 07:21:01.667967 1 httplog.go:131] "HTTP" verb="GET" URI="/readyz" latency="2.641042ms" userAgent="kube-probe/1.25" audit-ID="b6ad2aaa-ee35-4fa5-a5b5-a929eec966d6" srcIP="172.18.0.2:36606" apf_pl="exempt" apf_fs="probes" apf_execution_time="2.422291ms" resp=200
I0318 07:21:01.699198 1 handler.go:153] apiextensions-apiserver: GET "/openapi/v2" satisfied by nonGoRestful
I0318 07:21:01.699287 1 pathrecorder.go:241] apiextensions-apiserver: "/openapi/v2" satisfied by exact match
I0318 07:21:01.731700 1 handler.go:153] kube-apiserver: GET "/openapi/v2" satisfied by nonGoRestful
I0318 07:21:01.731751 1 pathrecorder.go:241] kube-apiserver: "/openapi/v2" satisfied by exact match
I0318 07:21:01.953677 1 handler.go:153] kube-aggregator: GET "/api/v1/namespaces/kube-system/pods/kube-apiserver-kind-control-plane" satisfied by nonGoRestful
I0318 07:21:01.953704 1 pathrecorder.go:248] kube-aggregator: "/api/v1/namespaces/kube-system/pods/kube-apiserver-kind-control-plane" satisfied by prefix /api/
I0318 07:21:01.953711 1 handler.go:143] kube-apiserver: GET "/api/v1/namespaces/kube-system/pods/kube-apiserver-kind-control-plane" satisfied by gorestful with webservice /api/v1
I0318 07:21:01.960404 1 httplog.go:131] "HTTP" verb="GET" URI="/api/v1/namespaces/kube-system/pods/kube-apiserver-kind-control-plane" latency="7.387666ms" userAgent="kubectl/v1.25.2 (darwin/arm64) kubernetes/5835544" audit-ID="325238a0-718f-4d0a-81e2-c7c49ddc1f72" srcIP="172.18.0.1:61556" apf_pl="exempt" apf_fs="exempt" apf_execution_time="7.188708ms" resp=200
I0318 07:21:01.966574 1 handler.go:153] kube-aggregator: GET "/api/v1/namespaces/kube-system/pods/kube-apiserver-kind-control-plane/log" satisfied by nonGoRestful
I0318 07:21:01.966604 1 pathrecorder.go:248] kube-aggregator: "/api/v1/namespaces/kube-system/pods/kube-apiserver-kind-control-plane/log" satisfied by prefix /api/
I0318 07:21:01.966620 1 handler.go:143] kube-apiserver: GET "/api/v1/namespaces/kube-system/pods/kube-apiserver-kind-control-plane/log" satisfied by gorestful with webservice /api/v1
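
Verbose logging is noisy, so remember to turn it back down when you’re done. The same endpoints accept the old value; for example, for the API server (0 is klog’s default unless the component was started with an explicit --v flag):

curl -s -X PUT -d '0' $APISERVER/debug/flags/v --header "Authorization: Bearer $TOKEN" -k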
