
Enable some kind of EC2 metadata proxy(like kube2iam) by default #919

Closed
mumoshu opened this issue Sep 4, 2017 · 11 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.
Comments

@mumoshu
Contributor

mumoshu commented Sep 4, 2017

I believe security, one of the kube-aws goals, can be improved further by introducing kube2iam or a comparable solution.

AFAICS, many cloud providers have a metadata service like EC2's, which serves cloud-provider credentials via 169.254.169.254.
One of the risks this implies is that a malicious user who compromises a vulnerable container can query that endpoint to obtain the node's credentials.
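
To illustrate the exposure (a sketch; the role name below is a hypothetical example), any process in a container that can reach the link-local address can simply do:

```sh
# List the instance-profile roles exposed by the EC2 metadata service
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/

# Fetch temporary AWS credentials for one of the roles listed above
# ("my-node-role" is a hypothetical example name)
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/my-node-role
```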

To minimize the risk while retaining the usability of the resulting k8s cluster, deploying a metadata proxy like kube2iam seems to be the way to go.
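
For reference, kube2iam scopes credentials per pod via the `iam.amazonaws.com/role` annotation; a minimal sketch (the role name and image are example values):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: aws-client
  annotations:
    # kube2iam intercepts this pod's metadata calls and only vends
    # credentials for the annotated role
    iam.amazonaws.com/role: my-app-role   # example role name
spec:
  containers:
  - name: main
    image: amazon/aws-cli   # example image
    command: ["aws", "sts", "get-caller-identity"]
```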

A relevant reading: https://news.ycombinator.com/item?id=12670316

Alternatively, I considered dropping all packets from docker0 to the metadata service. However, that would obviously also block useful apps (for example, kube-resources-autosave from kube-aws) that depend on AWS credentials fetched from the metadata service. You could pass credentials via the well-known env vars instead, but that introduces other questions, e.g. whether it is OK for you to persist AWS credentials in k8s secrets, how to rotate them, etc.
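
For comparison, the drop-everything alternative would be roughly a rule like this on each node (a sketch assuming the default docker0 bridge):

```sh
# Drop all traffic from containers on the docker0 bridge to the
# EC2 metadata service (note: this also breaks legitimate consumers)
iptables -I FORWARD -i docker0 -d 169.254.169.254/32 -j DROP
```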

Although I'd like to enable it by default, I'd also like to allow disabling it via cluster.yaml, so that one can incorporate their own EC2 metadata proxy solution instead, or even decide to go without one, understanding the trade-offs.
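
As a strawman for the cluster.yaml interface (the key names below are hypothetical, just to sketch the shape):

```yaml
# Hypothetical cluster.yaml excerpt; key names are illustrative only
experimental:
  ec2MetadataProxy:
    enabled: true        # proposed default; set false to bring your own proxy
    provider: kube2iam   # or another implementation
```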

It seems like several improvements could be made on the kube2iam side before recommending it generally for everyone. I'll add links to the corresponding issues later.

@mumoshu
Contributor Author

mumoshu commented Nov 6, 2017

Once the use-case of #891 is supported, I'm even looking forward to enabling kube2iam by default starting with v0.9.10-rc.1 :)

@camilb @danielfm @c-knowles @redbaron WDYT? Would the change seem too drastic?

@mumoshu
Contributor Author

mumoshu commented Nov 6, 2017

Related reading: Hacking and Hardening Kubernetes By Example, especially:

Filter access to the cloud provider metadata APIs/URL, and Limit IAM permissions

@mumoshu mumoshu changed the title from "Enable kube2iam by default" to "Enable some kind of EC2 metadata proxy(like kube2iam) by default" on Nov 6, 2017
@mumoshu
Contributor Author

mumoshu commented Nov 6, 2017

Today there is kiam, an alternative to kube2iam with a significant security improvement: a server/agent architecture that limits the assume-role permissions to master nodes only.
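
For anyone comparing the two: kiam keeps the pod-level `iam.amazonaws.com/role` annotation but additionally requires each namespace to whitelist the roles its pods may assume, e.g. (a sketch; the namespace and regex are example values):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: my-app
  annotations:
    # kiam only allows pods in this namespace to assume roles
    # whose names match this regex
    iam.amazonaws.com/permitted: "^my-app-.*"
```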

Also see jtblin/kube2iam#8 for more context on why server/agent architecture would be better.

@cknowles
Contributor

cknowles commented Nov 7, 2017

kiam seems decent, although I haven't tried it. I'm in favour of enabling one of these by default. kube2iam currently has a few problems in the clusters I'm running; I believe those roughly equate to the problems kiam sets out to solve.

@cknowles
Contributor

Does anyone have experience of kiam versus kube2iam?

@pingles

pingles commented Jan 29, 2018

@c-knowles just found this referencing kiam (which I've worked on). Not sure if this is still useful but just in case.

We ran kube2iam early on but ran into data races that issued credentials to the wrong pods. I'm pretty certain these have been fixed through greater use of the k8s client-go lib, though. In the end we also changed a few other things to improve performance and security on our clusters (separated agents/masters, prefetching of credentials).

I'm on the Kubernetes Slack so feel free to message me there. Alternatively you can also email me direct (https://github.com/pingles has my email listed).

@mumoshu
Contributor Author

mumoshu commented Feb 21, 2018

Note: I'm inclined to start by enabling kiam by default. Context: #1105, #1055 and the PR #1134

@mumoshu mumoshu added this to the v0.9.11 milestone Feb 21, 2018
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 22, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels May 22, 2019
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
