Trusted Pods grant application #1953
Conversation
Thank you for the application, @bojidar-bg.
Could you elaborate on what you mean by this project being "Apocryph-inspired"? As you state yourself, my main concern right now would be getting a critical mass of users so that the platform is worthwhile for both sides. Many of the projects involving TEEs which we have supported are struggling with this. Also, could you expand on the libp2p protocols? The interfaces for AWS EC2 are already rather complex, and doing something similar with an undefined number of providers (each with their own policies and payment details) in a decentralised manner seems... ambitious.
Lastly, you say that Trusted Pods would be "layered on top [...] either as a parachain, or as part of one or more parachains." Other than the two smart contracts you listed in M4, there are no other (Polkadot) node dependencies, correct? And if you mean the contracts to be deployed on multiple parachains, what would that look like to the user and provider? Sounds like the contracts would have to be "in sync".
All good questions, @semuelle! I will get around to discussing them with the team, but here are some preliminary answers going off what I know:

The project is "Apocryph-inspired" in the sense that we took inspiration for the idea from Apocryph, a past project we worked on but never quite managed to push to production—basically, that project needed a runtime component executing all the user-submitted code, and Trusted Pods is, in a way, an evolution of the idea behind that runtime. (:

For the libp2p protocols, I'm currently envisioning something relatively simple: the provider lists their libp2p address in the registry, the publisher opens a connection to that address and forwards the part of the pod configuration that pertains to resource usage, the provider OKs it, and finally the publisher gives the provider the address of the payment contract and the encrypted pod manifest (see the sketch below). It will likely become a bit more complicated than this as we explore the caveats of what's needed, but I do think that is the general gist of it.

The interfaces for EC2 are not the model we were intent on following for pod manifests (hoping I got that part of your question correctly). Instead, we were thinking of the Kubernetes PodSpec (with inspiration from the RunPod recipe configuration or, I guess, the Docker Compose service configuration), so, something more akin to specifying a container image, the required ports/connections, and resource quotas, and not akin to setting up a virtual machine.

You are correct that there is nothing blockchain-node-dependent apart from those two smart contracts. I would imagine running them on a single parachain, but since the two smart contracts are self-contained and do not communicate with each other, I wouldn't see a problem if, one day, we had a registry contract on one parachain and a payment contract on another. However, for the purposes of the grant as currently described, both contracts would likely be on a single parachain, possibly even just a testnet parachain, to get things off the ground. (:

I will have to discuss the critical mass issue further with the team. Feel free to elaborate on what you generally want to see in regards to solving that issue—I'm guessing just mentioning that the issue exists is not cutting it 😂, but then the FAQ says that marketing and outreach are generally not the kind of activities you fund, so... I'm guessing some kind of more developed plan for reaching critical mass, or...?
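To make that handshake concrete, here is a minimal sketch (in Go, using go-libp2p) of what the provider side of such an exchange could look like. The protocol ID, message shapes, JSON encoding, and capacity limits are all illustrative assumptions; as noted above, the actual protocol is not yet defined.

```go
package main

import (
	"encoding/json"
	"log"

	"github.com/libp2p/go-libp2p"
	"github.com/libp2p/go-libp2p/core/network"
)

// Hypothetical message shapes; the real wire format is not yet specified.
type ResourceRequest struct {
	CPUMillis int64 `json:"cpuMillis"`
	MemoryMiB int64 `json:"memoryMiB"`
}

type ProvisionRequest struct {
	PaymentContract   string `json:"paymentContract"`   // address of the payment contract
	EncryptedManifest []byte `json:"encryptedManifest"` // pod manifest, encrypted for the provider's TEE
}

func main() {
	// Provider node; its multiaddress would be published in the registry contract.
	host, err := libp2p.New()
	if err != nil {
		log.Fatal(err)
	}
	defer host.Close()
	log.Printf("provider listening as %s on %v", host.ID(), host.Addrs())

	// Handle the (hypothetical) provisioning protocol.
	host.SetStreamHandler("/trustedpods/provision/0.1.0", func(s network.Stream) {
		defer s.Close()
		dec := json.NewDecoder(s)
		enc := json.NewEncoder(s)

		// 1. The publisher sends the resource-usage part of the pod configuration.
		var req ResourceRequest
		if err := dec.Decode(&req); err != nil {
			return
		}

		// 2. The provider checks capacity/policy and replies with a simple yes/no.
		ok := req.CPUMillis <= 4000 && req.MemoryMiB <= 8192
		if err := enc.Encode(ok); err != nil || !ok {
			return
		}

		// 3. The publisher sends the payment contract address and the encrypted manifest.
		var prov ProvisionRequest
		if err := dec.Decode(&prov); err != nil {
			return
		}
		log.Printf("would verify escrow at %s and schedule the pod", prov.PaymentContract)
	})

	select {} // keep the node running
}
```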
Spot on question, @semuelle! We have been analyzing the TEE adoption aspect quite a lot, and we believe that the landscape is now much more favorable for such projects. Today, there is a lot more technology marketing done by the big cloud providers on the confidential computing frontier, which has increased the overall awareness in the tech world. We have also made a couple of product/architecture decisions to place Trusted Pods in the right position in the ecosystem to accelerate adoption. Our thinking so far can be summarized in the following way:
- TEE is becoming the norm in the current (latest) generation of server CPUs, so effectively every new piece of server hardware is TEE-capable.
Thank you for the follow-ups, @bojidar-bg and @branimirangelov, and sorry for the late reply. We have a bit of a backlog currently.
The interfaces for EC2 are not the model we were intent on following for pod manifests
Sorry, I could have worded that better. I just wanted to point out that using EC2 is already quite complex, and you are adding at least one layer of complexity.
Feel free to elaborate on what you generally want to see in regards to solving that issue
This is something that would fit well into the Future Plans section of the application. Having said that, I don't have a solution to it. I know that this is a lot to ask of an early-stage project, but teams are often too tech-minded to see past the next milestone, and it helps to see that a team is aware of what pitfalls might await it and has a plan of attack.
- TEE is becoming the norm in the current (latest) generation of server CPUs, so effectively every new piece of server hardware is TEE-capable.
This is the first time I am hearing this. Do you have a source for that?
In any case, I am going to mark your application as ready for review so that other committee members can already chime in.
Thank you @semuelle, we appreciate your time, and we see the long backlog indeed, which is great for you!

When addressing the initial topic concerning EC2, it's important to clarify that our intention is not to enhance the EC2 layer, but rather to operate at an entirely different service level. EC2 operates within the Infrastructure as a Service (IaaS) paradigm, whereas our model is firmly rooted in the Platform as a Service (PaaS) layer. To draw a comparison to EC2, I would liken our model more to services like AWS Lambda [1] or Heroku [2] (which, internally, leverages AWS EC2). In contrast to EC2 [3], our approach is notably simpler, as supported by references to the AWS cost calculator.

Another crucial distinction in our proposed approach is our current lack of plans to implement load balancing or high availability across different providers, at least not as part of the grant. While this approach is undeniably easier to implement, the primary driver behind this direction is our specific target audience: small-to-medium hosting providers who rely on hyperconverged infrastructure or lease infrastructure from large, hyperscale providers. Initially, we are not optimizing for home users. Instead, our goal is to empower hosting providers with a level of flexibility that enables them to optimize their infrastructure costs (similar to managing EC2 costs) while delivering the essential platform services to developers. This level of flexibility empowers providers to innovate at the infrastructure layer, facilitating the discovery of more efficient ways to deliver their services.

In the discussion on the Trusted Execution Environment (TEE), it's crucial to reiterate that our intended provider audience does not encompass home users. When examining the TEE landscape, our focus centers on server-grade CPUs. While numerous significant challenges have arisen in the context of home and mobile users, TEE adoption takes a distinctly different and consistent trajectory in the realm of server-grade CPUs. Specifically, we are referring to processors such as Intel's Xeon within Intel's product portfolio. Our information is primarily derived from our own analysis of product specifications from the most prominent server CPU vendors in the hyperconverged infrastructure market. It's important to note that major cloud players rely heavily on hyperscale infrastructure, which adds complexity to the market landscape. Nevertheless, Intel and AMD continue to hold significant market shares there as well.

For both Intel [4] and AMD [5], their latest generations of server-grade CPUs (Xeon and EPYC) come equipped with built-in TEE capabilities. Hyperconverged infrastructure equipment vendors, such as Dell, HP, Supermicro, and other OEMs, primarily source their CPUs from Intel and AMD. To illustrate, examining the SGX support in Xeon [6] reveals that all variants offer SGX support, albeit with varying Enclave Page Cache (EPC) sizes. The increased adoption of TEE in server CPUs is primarily driven by the stringent security requirements set forth by national and industry-based security standards. Additionally, there is a growing preference in the enterprise world for confidential and sovereign cloud architectures. This confluence of factors has propelled the integration of TEE features into server-grade CPUs, ensuring heightened security controls and compliance with evolving security standards.

References:
Thanks for the application. I have a few comments/questions:
- Only your last milestone currently seems to be related to our ecosystem (ink! integration). Is this correct?
- Are you aware of https://acurast.com/ or Integritee? It might make sense to leverage their tech stack or even work together with them.
- Could you add the programming language to the specification of each deliverable?
@Noc2 Thanks for taking the time to give our application a look (:
Thanks for the application, @bojidar-bg! It seems like a really cool idea, but I agree with my colleagues that milestone 1 isn't very specific to the Substrate ecosystem, and afaik we usually don't pay for the spec parts. Personally, I might be more apt to approve a reduced Level 2 PoC of the registry and payment contracts, but I understand that you might not be able to fund the other parts on your own.
Hey @keeganquigley! That's a good point. Personally, I think that projects like Trusted Pods that provide infrastructure for running developer-submitted code can't quite succeed without sufficient documentation of the functions and formats exposed... but I can understand that paying for such specification out of the grant is not quite aligned with the goals of the fund. That being said, we are going to be approaching investors as well and are committed enough to the project to self-fund parts of it, so we wouldn't necessarily be opposed to trimming the scope of the grant somewhat. So, I think we could cut the "boilerplate" first milestone (and the later specification deliverables) and deliver just the TEE-related and smart-contract-related functionality as part of the grant—which is, practically speaking, the main aim of Trusted Pods: to have a blockchain-backed marketplace where one can rent out TEE-secured compute resources. How does that sound?
Thanks for the update, @bojidar-bg. Sure, I can't speak for others of course, but that sounds good to me. I'd be willing to support a reduced scope of just the TEE-related and smart-contract-related functionalities.
Alright, @keeganquigley! I have updated the application to remove the first milestone and reordered the other two. Hopefully it is more in line with what Web3 might be willing to support now (: |
Thanks for the changes, @bojidar-bg. This looks interesting, and also based on your previous work, I'm happy to go ahead with it.
Hi @bojidar-bg, thanks again for the application. After discussion, it has become clear that the application will not find the necessary support in the committee. The main point of concern was the long-term viability of the project. I wish I had better news, but I want to point out that we are very grateful for the effort you put into the application and the friendly and thorough responses in our discussion. I hope that you will find a way to continue with the project. In any case, best of luck and feel free to apply again in the future.
Welp... 😅
Project Abstract
Trusted Pods is a decentralized compute marketplace where developers can run container pods securely and confidentially through small and medium cloud providers. It is tailored towards providing a one-stop solution for finding infrastructure to run pods in a cost efficient, serverless manner—though, of course, developing it all the way to production is outside the scope of this grant application.
Within the Polkadot ecosystem, Trusted Pods would be an application/dApp which makes use of ink! smart contracts and IPFS to allow application publishers to deploy pods to compute service providers and pay for the compute services used. What makes Trusted Pods "trusted" is the use of Trusted Execution Environment (TEE) enclaves, a feature of modern CPUs that enables the execution of software in a way that does not allow for tampering by the host operating system or administrator.
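As a rough illustration of the intended flow from the publisher's perspective, here is a hedged Go sketch. All type and method names below are hypothetical placeholders standing in for the registry and payment contracts and the (PodSpec-inspired) manifest format described in the application; real code would talk to the ink! contracts, IPFS, and a libp2p stream rather than to in-memory stubs.

```go
package main

import "fmt"

// PodManifest is a hypothetical, PodSpec-inspired description of what to run;
// the real manifest format is not finalised in the application.
type PodManifest struct {
	Image     string // container image to run
	Ports     []int  // ports/connections the pod needs exposed
	CPUMillis int64  // resource quotas the provider must honour
	MemoryMiB int64
}

// Registry and PaymentContract stand in for the two ink! smart contracts;
// the method names are illustrative only.
type Registry interface {
	ListProviders() []string // libp2p addresses advertised by providers
}

type PaymentContract interface {
	Escrow(provider string, amount uint64) error // lock funds claimable against metered usage
}

// deployPod sketches the publisher-side flow: pick a provider from the registry,
// escrow payment, then hand the (encrypted) manifest to the provider over libp2p.
func deployPod(reg Registry, pay PaymentContract, m PodManifest) error {
	providers := reg.ListProviders()
	if len(providers) == 0 {
		return fmt.Errorf("no providers registered")
	}
	provider := providers[0] // a real client would negotiate resources/price first

	if err := pay.Escrow(provider, 1000); err != nil {
		return err
	}

	// In the real system the manifest would be encrypted for the provider's TEE
	// and sent over a libp2p stream; here we only report what would happen.
	fmt.Printf("would send manifest for image %q to provider %s\n", m.Image, provider)
	return nil
}

// In-memory stubs so the sketch runs without a chain or a network.
type fakeRegistry struct{}

func (fakeRegistry) ListProviders() []string {
	return []string{"/dns4/provider.example/tcp/4001/p2p/12D3KooW..."}
}

type fakePayment struct{}

func (fakePayment) Escrow(string, uint64) error { return nil }

func main() {
	m := PodManifest{Image: "nginx:latest", Ports: []int{80}, CPUMillis: 500, MemoryMiB: 256}
	if err := deployPod(fakeRegistry{}, fakePayment{}, m); err != nil {
		fmt.Println("deploy failed:", err)
	}
}
```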