
kubernetes - Using Prometheus adapter as custom metrics server for HPA autoscaling - Stack Overflow


I am trying to set up the Prometheus server and Prometheus adapter integration to replace the metrics-server in my local Kubernetes cluster (built using kind) and use it to scale my HPA based on custom metrics.

I have 2 Prometheus pod instances and 1 Prometheus adapter deployed and running in the 'monitoring' namespace.

The Spring Boot application (to be scaled by the HPA) is deployed and running in the 'demo-config-app' namespace.

Problem: The HPA (Horizontal Pod Autoscaler) is simply not able to fetch metrics from the Prometheus adapter, which I intend to use as a replacement for the K8s metrics-server.

The custom metrics rule configured in the Prometheus adapter ConfigMap is:

rules:
    - seriesQuery: 'http_server_requests_seconds_count{namespace!="", service != "", uri = "/"}'
      resources: 
        overrides:
          namespace: {resource: "namespace"}
          service: {resource: "service"}
      name:
        matches: "http_server_requests_seconds_count"
        as: "http_server_requests_seconds_count"
      metricsQuery: sum(rate(<<.Series>>{<<.LabelMatchers>>,uri!~"/actuator/.*"}[15m]))
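
For the Service in question, the adapter expands <<.Series>> and <<.LabelMatchers>> in this rule into a PromQL query of the following form (this is the same rendered query that appears later in the adapter log):

    sum(rate(http_server_requests_seconds_count{namespace="dynamic-secrets-ns",service="demo-config-watcher-svc-internal",uri!~"/actuator/.*"}[15m]))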

The HPA YAML manifest is as follows:

kind: HorizontalPodAutoscaler
apiVersion: autoscaling/v2
metadata:
  name: demo-config-app
  namespace: dynamic-secrets-ns
spec:
  scaleTargetRef:
    # point the HPA at the sample application
    # you created above
    apiVersion: apps/v1
    kind: Deployment
    name: demo-config-watcher
  # autoscale between 1 and 10 replicas
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Object
    object:
      metric:
        name: http_server_requests_seconds_count
      describedObject:
        apiVersion: v1
        kind: Service
        name: demo-config-watcher-svc-internal
      target:
        type: AverageValue
        averageValue: 10

The custom metric seems to have been correctly configured. Executing the kubectl command:

    $ kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta2" | jq
    
    OUTPUT:
        {
          "kind": "APIResourceList",
          "apiVersion": "v1",
          "groupVersion": "custom.metrics.k8s.io/v1beta2",
          "resources": [
            {
              "name": "namespaces/http_server_requests_seconds_count",
              "singularName": "",
              "namespaced": false,
              "kind": "MetricValueList",
              "verbs": [
                "get"
              ]
            },
            {
              "name": "services/http_server_requests_seconds_count",
              "singularName": "",
              "namespaced": true,
              "kind": "MetricValueList",
              "verbs": [
                "get"
              ]
            }
          ]
        }
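
Beyond metric discovery, the value the HPA would receive can also be requested directly from the custom metrics API. The path below mirrors the request the HPA controller makes (it also shows up in the adapter log further down):

    $ kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta2/namespaces/dynamic-secrets-ns/services/demo-config-watcher-svc-internal/http_server_requests_seconds_count" | jq .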
        

Also, when I execute the metrics query in the Prometheus console:

    sum(rate(http_server_requests_seconds_count{namespace="dynamic-secrets-ns",service="demo-config-watcher-svc-internal",uri!~"/actuator/.*"}[15m]))

I get an aggregated value of 3.1471300541724765.
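
A related diagnostic (a sketch, not from the adapter docs) is to run the same aggregation grouped by the label the adapter matches resources on, and compare which label sets each form of the query returns:

    sum(rate(http_server_requests_seconds_count{namespace="dynamic-secrets-ns",uri!~"/actuator/.*"}[15m])) by (service)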

Following are a few points from my analysis of the adapter logs:

  1. As soon as the Prometheus adapter pod starts up, it fires the following query:
http://prometheus-k8s.monitoring.svc:9090/api/v1/series?match%5B%5D=http_server_requests_seconds_count%7Bnamespace%21%3D%22%22%2C+service+%21%3D+%22%22%2C+uri+%3D+%22%2F%22%7D&start=1742277149.166

I tried executing the same query from an nginx pod in the same namespace as the prometheus-adapter (with the same ServiceAccount), and it gives me the following result:

{
   "status":"success",
   "data":[
      {
         "__name__":"http_server_requests_seconds_count",
         "container":"demo-config-watcher",
         "endpoint":"http-internal",
         "error":"none",
         "exception":"none",
         "instance":"10.244.2.104:8080",
         "job":"demo-config-watcher-job",
         "method":"GET",
         "namespace":"dynamic-secrets-ns",
         "outcome":"SUCCESS",
         "pod":"demo-config-watcher-7dbb9b598b-k7cgj",
         "service":"demo-config-watcher-svc-internal",
         "status":"200",
         "uri":"/"
      }
   ]
}    

  2. After increasing the verbosity of the Prometheus adapter logs, I can see the following requests repeatedly appearing. I am not sure where the first GET request comes from. The second request clearly comes from the HPA controller and results in HTTP status 404, and I am not sure why (the HPA-side view of this failure is sketched after these log lines):
I0318 06:31:39.832124       1 round_trippers.go:553] POST https://10.96.0.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s 201 Created in 1 milliseconds
I0318 06:31:39.832343       1 handler.go:143] prometheus-metrics-adapter: GET "/apis/custom.metrics.k8s.io/v1beta2/namespaces/dynamic-secrets-ns/services/demo-config-watcher-svc-internal/http_server_requests_seconds_count" satisfied by gorestful with webservice /apis/custom.metrics.k8s.io
I0318 06:31:39.833331       1 api.go:88] GET http://prometheus-k8s.monitoring.svc:9090/api/v1/query?query=sum%28rate%28http_server_requests_seconds_count%7Bnamespace%3D%22dynamic-secrets-ns%22%2Cservice%3D%22demo-config-watcher-svc-internal%22%2Curi%21~%22%2Factuator%2F.%2A%22%7D%5B15m%5D%29%29&time=1742279499.832&timeout= 200 OK
E0318 06:31:39.833494       1 provider.go:186] None of the results returned by when fetching metric services/http_server_requests_seconds_count(namespaced) for "dynamic-secrets-ns/demo-config-watcher-svc-internal" matched the resource name
I0318 06:31:39.833600       1 httplog.go:132] "HTTP" verb="GET" URI="/apis/custom.metrics.k8s.io/v1beta2/namespaces/dynamic-secrets-ns/services/demo-config-watcher-svc-internal/http_server_requests_seconds_count" latency="2.926569ms" userAgent="kube-controller-manager/v1.32.0 (linux/amd64) kubernetes/70d3cc9/system:serviceaccount:kube-system:horizontal-pod-autoscaler" audit-ID="8f71b62a-92bc-4f13-a409-01ec5b778429" srcIP="172.18.0.3:34574" resp=404
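
As mentioned in point 2, the HPA-side view of the same failure can be inspected with standard kubectl commands (names taken from the manifests in this question):

    $ kubectl describe hpa demo-config-app -n dynamic-secrets-ns
    $ kubectl get hpa demo-config-app -n dynamic-secrets-ns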

The HPA controller's ServiceAccount has the following RBAC permissions configured (a quick end-to-end check is sketched after the manifests below):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  creationTimestamp: "2025-03-16T05:47:45Z"
  name: custom-metrics-getter
  resourceVersion: "6381614"
  uid: 04106c39-be1f-4ee3-b2ab-cf863ef43aca
rules:
- apiGroups:
  - custom.metrics.k8s.io
  resources:
  - '*'
  verbs:
  - '*'

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRoleBinding","metadata":{"annotations":{},"name":"hpa-custom-metrics-getter"},"roleRef":{"apiGroup":"rbac.authorization.k8s.io","kind":"ClusterRole","name":"custom-metrics-getter"},"subjects":[{"kind":"ServiceAccount","name":"horizontal-pod-autoscaler","namespace":"kube-system"}]}
  creationTimestamp: "2025-03-16T05:47:45Z"
  name: hpa-custom-metrics-getter
  resourceVersion: "6381615"
  uid: c819798d-fdd0-47df-a8d1-55cff8101d84
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: custom-metrics-getter
subjects:
- kind: ServiceAccount
  name: horizontal-pod-autoscaler
  namespace: kube-system
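
A quick end-to-end check of this binding (a sketch; it assumes the kubeconfig user is allowed to impersonate ServiceAccounts, which a kind cluster admin typically is) is to replay the HPA controller's request with impersonation:

    $ kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta2/namespaces/dynamic-secrets-ns/services/demo-config-watcher-svc-internal/http_server_requests_seconds_count" \
        --as=system:serviceaccount:kube-system:horizontal-pod-autoscaler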

I would appreciate any help on how to take this forward. Thanks in advance.

Asked Mar 18 at 7:23 by Mandar K, edited Mar 18 at 7:32.
  • Comment: Refer to the prometheus-adapter documentation and its Walkthrough, which may help in resolving the issue. If not, I am happy to assist further. – Hemanth Kanchumurthy, Mar 18 at 9:41

1 Answer


Finally, the problem was with the metricsQuery configured in the adapter config. The original sum(rate(...)) aggregated away the resource labels, so the adapter could not match the query result back to the Service object, which is what the "None of the results ... matched the resource name" error points at. Switching to a per-pod metric and grouping the sum by pod fixes this:

rules:
    - seriesQuery: 'http_server_requests_seconds_count{namespace!="", pod != ""}'
      resources: 
        overrides:
          namespace: {resource: "namespace"}
          pod: {resource: "pod"}
      name:
        matches: "^(.*)_seconds_count"
        as: "${1}_per_second"
      metricsQuery: 'sum(rate(<<.Series>>{<<.LabelMatchers>>,uri!~"/actuator/.*"}[2m])) by (pod)'
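
With this rule in place, the renamed per-pod metric can be checked against the custom metrics API before updating the HPA (the pods/*/ path follows the same pattern as the services path used earlier):

    $ kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta2/namespaces/dynamic-secrets-ns/pods/*/http_server_requests_per_second" | jq .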

HPA:

---
kind: HorizontalPodAutoscaler
apiVersion: autoscaling/v2
metadata:
  name: demo-http
  namespace: dynamic-secrets-ns
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: demo-config-watcher
  minReplicas: 1
  maxReplicas: 10
  metrics:
  # use a "Pods" metric, which takes the average of the
  # given metric across all pods controlled by the autoscaling target
  - type: Pods
    pods:
      metric:
        # use the renamed per-pod metric exposed by the adapter rule above
        name: http_server_requests_per_second
      target:
       # scale out when the average request rate per pod is greater than 10 per second (10000m = 10)
        type: AverageValue
        averageValue: 10000m
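
To roll this out and watch the autoscaler react (the file name below is just a placeholder for wherever the manifest above is saved):

    # apply the HPA manifest (placeholder file name)
    $ kubectl apply -f demo-http-hpa.yaml
    # watch the HPA pick up the per-pod metric and adjust replicas
    $ kubectl get hpa demo-http -n dynamic-secrets-ns -w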

Huge shoutout to a YouTube video: Anton's guide on K8S-Prometheus integration.
