Enable some kind of EC2 metadata proxy (like kube2iam) by default #919
Once the use-case of #891 is supported, I'm even looking forward to enabling kube2iam by default starting with v0.9.10-rc.1 :) @camilb @danielfm @c-knowles @redbaron WDYT? Would the change seem too drastic?
Related reading: Hacking and Hardening Kubernetes By Example.
Today, there is kiam, which is an alternative to kube2iam with a great improvement regarding security: a server/agent architecture that limits the nodes with assume-role permissions to only the masters. Also see jtblin/kube2iam#8 for more context on why a server/agent architecture would be better.
kiam seems decent, although I haven't tried it. I'm in favour of enabling one of these by default. Currently kube2iam has a few problems in the clusters I'm running; I believe those roughly equate to the problems kiam sets out to solve.
Does anyone have experience of kiam versus kube2iam?
@c-knowles just found this referencing kiam (which I've worked on). Not sure if this is still useful, but just in case: we ran kube2iam early on but ran into data races issuing credentials to the wrong pods. I'm pretty certain these have been fixed through greater use of the k8s client-go lib though. In the end we also changed a few other things to improve performance and security on our clusters (separated agents/masters, prefetching of credentials). I'm on the Kubernetes Slack, so feel free to message me there. Alternatively, you can also email me directly (https://github.com/pingles has my email listed).
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. |
@fejta-bot: Closing this issue.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. |
I believe security, as one of the kube-aws goals, can be improved further by introducing kube2iam or a comparable solution.
AFAICS, many cloud providers have a metadata service like EC2's, which provides cloud-provider credentials via 169.254.169.254.
One of the risks implied by this is that the service endpoint can be exploited by a malicious user to obtain credentials from a vulnerable container.
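To make the attack surface concrete, here is a minimal sketch of the IMDSv1 requests involved; `creds_url` is a hypothetical helper introduced only for illustration, not part of any project mentioned here:

```shell
# Illustration only: the IMDSv1 endpoints a compromised container could query.
# creds_url is a hypothetical helper showing how the credentials URL is formed.
METADATA_BASE="http://169.254.169.254/latest/meta-data/iam/security-credentials"

creds_url() {
  echo "${METADATA_BASE}/$1"
}

# A real attack would first list the role name, then fetch its credentials:
#   role=$(curl -s "${METADATA_BASE}/")
#   curl -s "$(creds_url "${role}")"   # JSON with AccessKeyId, SecretAccessKey, Token
```

Nothing about this requires privileges inside the container; any process that can open a TCP connection to 169.254.169.254 can do it.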
To minimize the risk while retaining the usability of the resulting k8s cluster, deploying a metadata proxy like kube2iam seems to be the way to go.
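As a sketch of how such a proxy hooks in: kube2iam's documentation describes a DNAT rule of roughly this shape on each node, redirecting container metadata traffic to the proxy. `NODE_IP` is a placeholder value, and `DRY_RUN=echo` keeps the example from touching a real firewall:

```shell
# Sketch of how a metadata proxy intercepts traffic: a DNAT rule on each node
# redirects container traffic for 169.254.169.254 to the proxy's local port.
# DRY_RUN=echo only prints the rule; drop it (and run as root) to apply it.
DRY_RUN="${DRY_RUN:-echo}"

NODE_IP="10.0.0.42"   # placeholder: the node's private IP (kube2iam can discover this itself)
PROXY_PORT="8181"     # kube2iam's default listening port

$DRY_RUN iptables -t nat -A PREROUTING -p tcp -d 169.254.169.254 --dport 80 \
  -i docker0 -j DNAT --to-destination "${NODE_IP}:${PROXY_PORT}"
```

With the rule in place, pods talk to the proxy instead of the real metadata service, and the proxy decides per-pod which role's credentials to return.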
Relevant reading: https://news.ycombinator.com/item?id=12670316
Alternatively, I considered dropping all the packets from docker0 to the metadata service. However, this obviously blocks useful apps (for example, kube-resources-autosave from kube-aws) that depend on AWS credentials fetched from the metadata service. You can pass credentials via the well-known envvars instead, but that introduces other questions, e.g. whether it is OK for you to persist AWS credentials in k8s secrets, how to rotate them, and so on.

Although I'd like to enable it by default, I'd also like to allow disabling it via cluster.yaml, so that one can incorporate their own EC2 metadata proxy solution instead, or even decide to go without one, understanding the trade-offs.
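For reference, the drop-everything alternative described above amounts to a single iptables rule; this is only a sketch (assuming the docker0 bridge), printed rather than applied via `DRY_RUN`:

```shell
# Sketch of the alternative considered above: drop all packets from the
# docker0 bridge to the metadata service. DRY_RUN=echo only prints the rule;
# remove it (and run as root on a node) to actually apply it.
DRY_RUN="${DRY_RUN:-echo}"

$DRY_RUN iptables -A FORWARD -i docker0 -d 169.254.169.254/32 -j DROP
```

This is the blunt instrument: it closes the credential leak, but as noted it also breaks every legitimate in-cluster consumer of the metadata service.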
There seem to be several possible improvements that could be made on the kube2iam side before making it generally recommended for everyone. I'll add links to the corresponding issues later.