I have a Kubernetes configuration file set up to access multiple clusters. It looks like the following, with clusters, users, and contexts sections for each cluster.
myk8sconfig.yaml
apiVersion: ""
kind: ""
clusters:
- name: mycluster
cluster:
server: https://<mycluster IP>:443
certificate-authority-data: <my cluster certificate>
--------------------------------------</snip>------------------------------------------
users:
- name: mycluster-user
user:
exec:
apiVersion: client.authentication.k8s.io/v1beta1
command: ./mycustom_token_generator_cmd
args:
- token_gen_args
env: []
--------------------------------------</snip>------------------------------------------
contexts:
- name: mycluster-context
context:
cluster: mycluster
user: mycluster-user
I can use the above config to run any kubectl command, such as the following, without errors:
kubectl get pod --all-namespaces -o json \
--kubeconfig ~/.kube/myk8sconfig.yaml \
--context mycluster-context
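For completeness, my understanding (an assumption on my part, based on the credential plugin convention) is that the exec command has to print an ExecCredential object on stdout, with an apiVersion matching the one declared in the kubeconfig's exec section. A minimal sketch of that shape:

```python
import json

# Minimal sketch of the ExecCredential JSON an exec plugin is expected to
# print on stdout; the placeholder token stands in for whatever the plugin
# actually generates.
cred = json.loads("""
{
  "apiVersion": "client.authentication.k8s.io/v1beta1",
  "kind": "ExecCredential",
  "status": {"token": "<opaque bearer token>"}
}
""")
print(cred["kind"], cred["status"]["token"])
```

My token generator produces output of this shape, and kubectl accepts it.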
But the same configuration file fails to get authorization when used with the Python client, as in the following:
from kubernetes import client, config

config.load_kube_config(
    config_file=HOME_DIR + "/.kube/myk8sconfig.yaml",
    context="mycluster-context",
)
config.debug = True
v1 = client.CoreV1Api()
ret = v1.list_pod_for_all_namespaces(watch=False)
It gives me the following error
Reason: Unauthorized
HTTP response headers: HTTPHeaderDict({'Audit-Id': '172c4e92-7e7a-45a1-blah-blah', 'Cache-Control': 'no-cache, private', 'Content-Type': 'application/json', 'Date': 'Fri, 07 Feb 2025 18:19:28 GMT', 'Content-Length': '129'})
HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Unauthorized","reason":"Unauthorized","code":401}
As you can see above, I tried config.debug = True, but that didn't yield any additional information.
I am not sure how to debug this further. Thanks in advance for any help.