How Prometheus Operator facilitates Prometheus configuration updates

Masato Naka
8 min read · Feb 4, 2022


1. Overview

Let’s begin with what we want to achieve in this post.

The goal: Update Prometheus configuration nicely!

For those who are not familiar with it, Prometheus is a widely used cloud-native, open-source monitoring tool that collects metrics from applications. Whenever we add a new target to scrape metrics from, we need to update the Prometheus configuration, and Prometheus then needs to reload the updated configuration.

Update Prometheus Configuration

In this post, we’ll see how to update and reload the Prometheus configuration with and without Prometheus Operator so we understand how Prometheus Operator simplifies and automates the process of managing the configuration.


  1. ServiceMonitor and PodMonitor facilitate updating scrape config.
  2. prometheus-config-reloader automates reloading the configuration.

2. Update configuration

You can specify the Prometheus configuration file by passing it as an argument when starting a Prometheus server:

./prometheus -h
usage: prometheus [<flags>]
The Prometheus monitoring server

Flags:
  -h, --help     Show context-sensitive help (also try --help-long and --help-man).
      --version  Show application version.
      --config.file="prometheus.yml"
                 Prometheus configuration file path.

2.1. Update Prometheus configuration in local

Updating the configuration file is simple when you run Prometheus locally.
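For reference, a minimal prometheus.yml that scrapes Prometheus itself might look like this (the job name and interval are illustrative, not a required convention):

```yaml
global:
  scrape_interval: 15s   # how often to scrape targets by default

scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']   # Prometheus scrapes its own metrics endpoint
```

Editing this file and reloading is all it takes locally.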

How the new configuration file is reflected will be covered in the next section.

2.2. Update Prometheus configuration directly (in Kubernetes)

We need to write the configuration file manually following the documentation. To monitor applications running in a Kubernetes cluster, we can use the Kubernetes Service Discovery, a native Prometheus service discovery feature.

Sample prometheus.yml with kubernetes_sd_configs:

scrape_configs:
  - job_name: 'prometheus-endpoints-role'
    kubernetes_sd_configs:
      - role: endpoints
        namespaces:
          own_namespace: true
          names:
            - monitoring

A configuration can be passed to a Pod using ConfigMap in Kubernetes. So we can write prometheus.yml in a ConfigMap either directly or with kustomize’s configMapGenerator.

configMapGenerator:
  - name: prometheus-config
    files:
      - prometheus.yml=prometheus-with-endpoints-role.yml
generatorOptions:
  disableNameSuffixHash: true
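The other approach mentioned above, writing the ConfigMap directly, might be sketched like this (a minimal example; names and namespace are illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-config
  namespace: monitoring
data:
  prometheus.yml: |
    scrape_configs:
      - job_name: 'prometheus'
        static_configs:
          - targets: ['localhost:9090']
```

The Prometheus Pod then mounts this ConfigMap as a volume and points --config.file at the mounted path.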

Sample code is here:

kubectl apply -k

This command will create the following resources:

Either way, we directly change the configuration file prometheus.yml. As you can see from the documentation, writing the configuration from scratch is an exhausting task.

2.3. Update Prometheus configuration with Prometheus Operator

When you run Prometheus in a Kubernetes cluster, Prometheus Operator is a great option.

Prometheus Operator has two CustomResourceDefinitions to set Prometheus scrape configuration:

  1. PodMonitor
  2. ServiceMonitor

With those custom resources, you can easily define the Prometheus scrape config in a declarative way for Kubernetes Service Discovery.

Prometheus Operator monitors ServiceMonitor and PodMonitor and converts them into kubernetes_sd_config to update the scrape config.

Sample of ServiceMonitor:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: prometheus-self
  labels:
    prometheus: prometheus
spec:
  endpoints:
    - interval: 30s
      port: web
  selector:
    matchLabels:
      prometheus: prometheus


  1. selector.matchLabels specifies the labels to retrieve the scrape target Pods.
  2. endpoints specifies which port to get metrics from.

This specific ServiceMonitor is to monitor Prometheus itself, as the name indicates.


In order to use Prometheus Operator, we need to install the operator and create a Prometheus server with the custom resource in advance.

Install Prometheus operator:

kubectl create -f

This command will install the following resources:

  1. 8 Custom Resource Definitions (CRDs) ( Prometheus, PodMonitor, ServiceMonitor, etc.)
  2. Deployment : prometheus-operator
  3. Service: prometheus-operator
  4. ServiceAccount: prometheus-operator
  5. ClusterRole & ClusterRoleBinding: prometheus-operator

Create a Prometheus Server and ServiceMonitor to monitor itself:

kubectl apply -k
  • Prometheus
  • ServiceMonitor
  • Service
  • ServiceAccount
  • ClusterRole
  • ClusterRoleBinding
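The Prometheus custom resource deployed here is what ties the server to its ServiceMonitors; a minimal sketch might look like the following (field values are illustrative, not the exact manifest used in this post):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: prometheus
  namespace: monitoring
spec:
  replicas: 2
  serviceAccountName: prometheus
  # pick up ServiceMonitors whose labels match this selector
  serviceMonitorSelector:
    matchLabels:
      prometheus: prometheus
```

The operator watches this resource and creates the StatefulSet, Pods, and generated configuration for us.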

rbac.yaml defines the permissions that enable the Prometheus server to get, list, and watch Kubernetes resources to retrieve the target Pods.
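A ClusterRole granting such permissions might be sketched as follows (the exact resource list in rbac.yaml may differ; this follows the typical Prometheus RBAC example):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus
rules:
  # allow service discovery to enumerate scrape targets
  - apiGroups: [""]
    resources: ["nodes", "services", "endpoints", "pods"]
    verbs: ["get", "list", "watch"]
  # allow scraping non-resource metrics endpoints
  - nonResourceURLs: ["/metrics"]
    verbs: ["get"]
```

A ClusterRoleBinding then binds this role to the Prometheus ServiceAccount.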

We’ll see two Pods running in the monitoring namespace.

kubectl get po -n monitoring
NAME                      READY   STATUS    RESTARTS   AGE
prometheus-prometheus-0   2/2     Running   0          43s
prometheus-prometheus-1   2/2     Running   0          43s

You can check the configuration with Prometheus UI:

kubectl port-forward -n monitoring svc/prometheus-operated 9090:9090

Open http://localhost:9090/config, and you’ll see the generated configuration.

prometheus-self in scrape_config

You can see serviceMonitor/monitoring/prometheus-self/0 in scrape_config. This is part of what we have deployed. We deployed ServiceMonitor in the previous step, which is a way to configure a scrape config with Prometheus Operator.

You can create your own application and its ServiceMonitor or PodMonitor.


(I don’t explain the difference between ServiceMonitor and PodMonitor here. For this example, you can use either of them.)
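For illustration, a PodMonitor for a hypothetical application exposing metrics on a port named metrics might look like this (names and labels are assumptions for the example):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: example-app
  namespace: monitoring
spec:
  # select Pods (not Services) carrying this label
  selector:
    matchLabels:
      app: example-app
  podMetricsEndpoints:
    - port: metrics      # container port name exposing /metrics
      interval: 30s
```

The operator converts this into kubernetes_sd_configs with role: pod in the generated scrape config.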

3. Reload configuration

After updating the Prometheus configuration file, e.g. prometheus.yml, Prometheus needs to reload it to reflect the changes.

Prometheus supports reloading its configuration in two ways:

  1. Sending a SIGHUP signal to the Prometheus process.
  2. Sending an HTTP POST request to the /-/reload endpoint (when the --web.enable-lifecycle flag is enabled).

Let’s try them out in the local environment!

3.1. Reload Prometheus configuration with SIGHUP/HTTP request in local

1. Download Prometheus

2. Run Prometheus


3. Reload Prometheus Configuration by SIGHUP signal

by sending a SIGHUP to the Prometheus process

Check the process ID:

ps aux | grep prometheus
masato-naka 29490 0.0 0.0 4428152 900 s000 S+ 7:26AM 0:00.00 grep --color=auto --exclude-dir=.bzr --exclude-dir=CVS --exclude-dir=.git --exclude-dir=.hg --exclude-dir=.svn --exclude-dir=.idea --exclude-dir=.tox prometheus
masato-naka 28959 0.0 0.2 5126356 58948 s001 S+ 7:19AM 0:00.55 ./prometheus

Prometheus reloads the configuration when receiving the SIGHUP signal:

kill -HUP 28959

You’ll see logs similar to the following lines:

ts=2022-01-31T22:30:02.565Z caller=main.go:996 level=info msg="Loading configuration file" filename=prometheus.yml
ts=2022-01-31T22:30:02.594Z caller=main.go:1033 level=info msg="Completed loading of configuration file" filename=prometheus.yml totalDuration=29.356898ms db_storage=1.319µs remote_storage=1.567µs web_handler=420ns query_engine=926ns scrape=28.945623ms scrape_sd=38.075µs notify=37.607µs notify_sd=19.235µs rules=1.601µs

If you run Prometheus with systemd, you can define ExecReload as follows:

ExecStart=/opt/prometheus/prometheus --config.file=/opt/prometheus/prometheus.yml
ExecReload=/bin/kill -HUP $MAINPID

4. Reload Prometheus Configuration by HTTP request

sending an HTTP POST request to the /-/reload endpoint (when the --web.enable-lifecycle flag is enabled)

Let’s also try the other way to trigger reloading.

If you send an HTTP request to the Prometheus server that you started in the previous step, you’ll get the following message: Lifecycle API is not enabled.

curl -X POST http://localhost:9090/-/reload
Lifecycle API is not enabled.

To trigger reloading via the HTTP endpoint, we need to enable the lifecycle API by starting Prometheus with the --web.enable-lifecycle option.

./prometheus --web.enable-lifecycle

Send a POST request again:

curl -X POST http://localhost:9090/-/reload

You can see Prometheus successfully reloaded its configuration in its logs:

ts=2022-02-01T00:39:45.997Z caller=main.go:996 level=info msg="Loading configuration file" filename=prometheus.yml
ts=2022-02-01T00:39:46.026Z caller=main.go:1033 level=info msg="Completed loading of configuration file" filename=prometheus.yml totalDuration=28.923843ms db_storage=797ns remote_storage=1.28µs web_handler=277ns query_engine=837ns scrape=28.675306ms scrape_sd=24.431µs notify=14.701µs notify_sd=9.097µs rules=1.221µs

What if we run Prometheus in Kubernetes? Let’s see in the following sections!

3.2. Reload Prometheus Configuration manually in Kubernetes

1. Deploy Prometheus in Kubernetes

kubectl create namespace monitoring
kubectl apply -k

(If you deployed Prometheus Operator in the previous section, please delete those resources and the operator with the following commands before running the command above.)

kubectl delete -k
kubectl delete -f


2. Reload Prometheus Configuration

It’s not convenient to send the SIGHUP signal inside a container, so we use the other option:

sending an HTTP POST request to the /-/reload endpoint (when the --web.enable-lifecycle flag is enabled)

To access the endpoint from outside the cluster (especially when testing in your local), we can use port-forward:

kubectl -n monitoring port-forward svc/prometheus 9090:9090
Forwarding from 127.0.0.1:9090 -> 9090
Forwarding from [::1]:9090 -> 9090

Send a POST request:

curl -X POST http://localhost:9090/-/reload

You can check the Prometheus container’s logs:

kubectl logs prometheus-0 -n monitoring -f
...
ts=2022-02-01T00:44:46.538Z caller=main.go:1128 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
ts=2022-02-01T00:44:46.542Z caller=main.go:1165 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=3.510052ms db_storage=10.127µs remote_storage=16.673µs web_handler=9.594µs query_engine=16.019µs scrape=720.735µs scrape_sd=177.539µs notify=1.129µs notify_sd=1.698µs rules=16.595µs

Prometheus successfully reloaded the configuration file.

However, it’s annoying to send a POST request every time we update the configuration. We also might forget to trigger reloading after updating the configuration file.

3.3. Restart Prometheus when updating configuration in Kubernetes

Another way to automate the process of reloading the configuration is to utilize Kubernetes’ rolling update feature, which is triggered by changing the StatefulSet spec.

Strictly speaking, this is no longer the reloading mechanism supported by Prometheus. Instead, we automatically restart the Prometheus Pods, which then load the latest configuration, whenever the configuration is updated.

1. Deploy Prometheus with Kustomize.

To trigger a rolling update, we need to change something under the spec of the StatefulSet. If we just change the contents of the ConfigMap referenced by the StatefulSet, a rolling update will not be triggered. So one way is to change the ConfigMap’s name together with the reference to it in the StatefulSet.

We can use Kustomize’s configMapGenerator to achieve this.

kustomization.yaml looks like this:

namespace: monitoring
configMapGenerator:
  - name: prometheus-config
    files:
      - prometheus.yml
resources:
  - namespace.yaml
  - serviceaccount.yaml
  - clusterrole.yaml
  - clusterrolebinding.yaml
  - statefulset.yaml
  - service.yaml

You can check all the files here:

You need to delete the following two lines from kustomization.yaml so that the generated ConfigMap’s name gets a content-hash suffix:

generatorOptions:
  disableNameSuffixHash: true

(To update kustomization.yaml, you need to download the files and modify them locally.)

Now we apply it with -k option:

kubectl apply -k .

2. Change Prometheus configuration

Change Prometheus configuration file prometheus.yml.

Apply the change:

kubectl apply -k .

The StatefulSet is rolled out and a new Pod is created with the new configuration.

In this way, we can automate the reloading process forcefully. If you don’t mind restarting the Prometheus server and really want to reduce the manual step to reload the configuration, this solution would be beneficial.

3. Clean up

kubectl delete -k .

Most readers might not be satisfied with the solution above, as we forcefully restart the process instead of using the supported reloading feature. Triggering a rolling update when the ConfigMap changes in the Kubernetes layer is essentially the same as killing the process and restarting Prometheus manually in local, which is not elegant. So how does Prometheus Operator reload the config when a PodMonitor or ServiceMonitor is updated?

3.4. Reload Prometheus Configuration automatically with Prometheus Operator in Kubernetes

1. Deploy Prometheus Operator

Deploy Prometheus Operator in the same way as before:

kubectl create -f

2. Create Prometheus

kubectl apply -k

3. Check the configuration on Prometheus UI

With port-forwarding, we can access Prometheus UI on http://localhost:9090.

kubectl -n monitoring port-forward svc/prometheus-operated 9090:9090

When you change ServiceMonitor, the Prometheus configuration is also changed, so Prometheus needs to reload the configuration.

Prometheus Pods created by Prometheus Operator have two containers: prometheus and prometheus-config-reloader.

kubectl get po -n monitoring
prometheus-prometheus-0 2/2 Running 0 105m
prometheus-prometheus-1 2/2 Running 0 105m

prometheus-config-reloader watches the configuration and triggers reloading by sending an HTTP POST request to the prometheus container. Interestingly, prometheus-config-reloader uses the reloader package from Thanos.

This is how the configuration is reloaded automatically in the Prometheus Pod created by Prometheus Operator.
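To make this concrete, the Pod spec generated by the operator contains a sidecar along these lines (a simplified sketch; image tags, flags, and paths vary by operator version):

```yaml
containers:
  - name: prometheus
    image: quay.io/prometheus/prometheus
    args:
      # read the config rendered by the reloader sidecar
      - --config.file=/etc/prometheus/config_out/prometheus.env.yaml
      # allow POST /-/reload
      - --web.enable-lifecycle
  - name: config-reloader
    image: quay.io/prometheus-operator/prometheus-config-reloader
    args:
      # where to POST when the config changes
      - --reload-url=http://localhost:9090/-/reload
      # operator-generated config to watch
      - --config-file=/etc/prometheus/config/prometheus.yaml.gz
      # rendered output consumed by the prometheus container
      - --config-envsubst-file=/etc/prometheus/config_out/prometheus.env.yaml
```

Both containers share volumes for the config, so the reloader can watch the operator-generated configuration and nudge Prometheus whenever it changes.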

The overview of how Prometheus Operator reflects the configuration changes:

4. Summary

We have checked how to update the Prometheus configuration in different ways, and we also studied how to make the Prometheus server reload the new configuration with or without Prometheus Operator.

Prometheus Operator facilitates the Prometheus configuration and reloading process with its Custom Resources and a dedicated container:

  1. ServiceMonitor and PodMonitor facilitate updating scrape config.
  2. prometheus-config-reloader automates reloading the Prometheus configuration.



Masato Naka

An SRE engineer, mainly working on Kubernetes. CKA (Feb 2021). His interests include Cloud-Native application development and machine learning.