Home
11/16 update: the guys from Platform 9 have done something similar already, it is called Fission: https://github.com/platform9/fission . Interestingly, their Python support uses the imp module to load functions: https://github.com/platform9/fission/blob/master/environments/python3/server.py
I don’t know if they use third-party resources.
Why: There is no clone of AWS Lambda, and serverless is poised to replace full-fledged server-based infra and IT thinking (I mean, think big, ok…). This is particularly true in IoT (and bullshit as well…).
Why k8s: Because it is the container-based platform of choice (be opinionated, you know... it's trendy).
Start with an implementation of API endpoints for functions. We can work on the event-based system later.
**1-**Create single functions via k8s third-party resources and manage them via kubectl. We get a REST interface to the functions, all managed via the k8s API server.
Something like:
```yaml
metadata:
  name: func-tion.lambda.k8s.com
apiVersion: extensions/v1beta1
kind: ThirdPartyResource
description: "A lambda function"
versions:
- name: v1
spec:
- runtime:
- memory:
- cpu:
- function:
```
And the function itself:
```yaml
apiVersion: lambda.k8s.com/v1
kind: FuncTion
metadata:
  name: crazy
  labels:
    lambda: endpoint
spec:
  runtime: python:2.7
  function: |
    def foobar():
        return "hello world"
```
```console
kubectl get functions
NAME
crazy
```
**2-**Run a controller in the k8s cluster (kinda like the etcd cluster operator).
- It listens for new functions (API objects in the new resource group).
- When there is a new function, it selects the runtime and creates a deployment and a service (see the sketch after this list).
- The image used for the runtime contains an HTTP wrapper.
- We inject the function into the HTTP server wrapper of said runtime (via an arg, which is the function read from the third-party resource).
- Boom….
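A rough sketch of what the controller could generate for the `crazy` function above (the labels, port and image name are illustrative assumptions; the function text still has to reach the container, see the args/volume note at the end of this item):

```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: crazy
spec:
  replicas: 1
  template:
    metadata:
      labels:
        function: crazy
    spec:
      containers:
      - name: runtime
        image: kubeless   # runtime image built from the Dockerfile in this item
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: crazy
spec:
  selector:
    function: crazy
  ports:
  - port: 8080
    targetPort: 8080
```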
Note: to inject in Python we can use the imp module's `imp.load_source()`. For now the wrapper below just reads the file and exec()s it:
```python
#!/usr/bin/env python
import sys

from bottle import route, run

# Read the function source from the file passed as the first argument and
# exec() it, which defines foobar() in this module's namespace.
with open(sys.argv[1], 'r') as txt:
    fb = txt.read()
exec(fb)

@route('/hello')
def hello_handler():
    return foobar()

# Bind to all interfaces so the server is reachable from outside the container.
run(host='0.0.0.0', port=8080)
```
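And a sketch of the same wrapper using `imp.load_source()` instead of exec, to illustrate the note above (the handler name `foobar` is taken from the example function):

```python
#!/usr/bin/env python
import imp
import sys

from bottle import route, run

# sys.argv[1] is the path of the file containing foobar(), e.g. /tmp/foobar.txt.
# load_source compiles and imports it as a regular module.
mod = imp.load_source('function', sys.argv[1])

@route('/hello')
def hello_handler():
    return mod.foobar()

run(host='0.0.0.0', port=8080)
```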
Dockerfile of the Python runtime:
```dockerfile
FROM python:2.7.11-alpine

RUN apk add --no-cache python py-pip git
RUN pip install --upgrade pip
RUN pip install bottle

ADD kubeless.py /kubeless.py

CMD ["python", "/kubeless.py", "/tmp/foobar.txt"]
```
Run the container (mounting the directory that contains foobar.txt):

```console
docker run -p 8080:8080 -v $PWD:/tmp kubeless
```
Stick that in a k8s deployment manifest and read the function from the third-party resource. Figure out how to pass it as an arg or a volume. I really don't want to use ConfigMaps; we should just use the third-party resource manifest. Or: read the function manifest, create a ConfigMap, and read the function from the ConfigMap volume mount.
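If we end up going the ConfigMap route anyway, a sketch could look like this (the ConfigMap name and the foobar.txt key are assumptions; the mount path matches the CMD in the Dockerfile above):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: crazy-function
data:
  foobar.txt: |
    def foobar():
        return "hello world"
```

And in the pod spec of the generated deployment:

```yaml
      containers:
      - name: runtime
        image: kubeless
        volumeMounts:
        - name: function
          mountPath: /tmp
      volumes:
      - name: function
        configMap:
          name: crazy-function
```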
**3-**Plan
Start from the kubewatch core. Remove the Slack stuff. Let it listen to third-party resources. When a new resource appears, create a deployment and a service. (The logic should be well abstracted; a rough sketch of the watch loop is below.)
Plan for multiple runtimes.
- Start with Go and Python (maybe JavaScript as well).
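kubewatch itself is Go, but just to illustrate the watch logic, here is a rough Python sketch (it assumes `kubectl proxy` is running on localhost:8001 and that the lambda.k8s.com/v1 group from item 1 is registered; creating the Deployment/Service is left out):

```python
# Rough sketch of the controller loop, not the actual kubewatch code.
import json
import requests

# Third-party resource instances are served under their API group by the API server.
FUNCTIONS = "http://localhost:8001/apis/lambda.k8s.com/v1/namespaces/default/functions"

def on_event(event):
    obj = event["object"]
    name = obj["metadata"]["name"]
    if event["type"] == "ADDED":
        spec = obj["spec"]
        # Here we would pick the runtime image from spec["runtime"] and create
        # a Deployment and a Service via the API server (omitted in this sketch).
        print("new function %s (runtime %s)" % (name, spec.get("runtime")))
    elif event["type"] == "DELETED":
        print("function %s deleted, clean up its deployment/service" % name)

# Stream watch events (one JSON object per line) and dispatch them.
resp = requests.get(FUNCTIONS, params={"watch": "true"}, stream=True)
for line in resp.iter_lines():
    if line:
        on_event(json.loads(line))
```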
**4-**Check basic security issues with loading an arbitrary function at runtime.
**5-**Run a horizontal pod autoscaler.
We plug in Prometheus to monitor requests/second on the endpoints. If the threshold is passed, it triggers a scaling of the deployment.
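The stock HPA only scales on CPU out of the box, so until the Prometheus requests/second signal is wired in, a CPU-based autoscaler on the generated deployment could be a placeholder (deployment name taken from the `crazy` example):

```console
kubectl autoscale deployment crazy --min=1 --max=10 --cpu-percent=80
```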