Replies: 6 comments 16 replies
-
I like where this is going. I need a second read and some thinking before further commenting. Thanks for starting this!
-
Thanks for this proposal. I did a couple of reading passes and I hope I understood most of it correctly. I still have quite a bit of additional reading to do on overlayfs and the proposed runtime engine; some of it whooshed right over my head. I think we had a discussion about a similar topic on the previous Slack server: using Perforce as a package repository, basically in the same way as the proposed artifact repository. Unfortunately I can't find the thread anymore, but I think studios that do not or cannot rely on shared network drives would really benefit here too. While this proposal and its examples mostly focus on cloud-based animation/VFX workflows, it would also really help game studios, whose data storage usually isn't on a filesystem but rather in Perforce. This proposal would let them store packages in Perforce too, without having to rely on additional systems and hardware that have to be maintained. If I understood it correctly, another interesting question for me is the difference/similarity between cached packages and packages downloaded from the artifact repository.
-
Artifact Repository - could you clarify what contents are allowed within it?

I have some general questions about the idea of an artifact repository and its potential for hosting common Rez packages which others have already made. The most common criticism I see about Rez so far is that there's no central place to see what packages others have made (whether they be a package for …

From your description, I understood the artifact repository is a server which hosts installed Rez packages, not source/developer packages. So users would be able to see potentially versions of created …

Can we contextualize this discussion thread to REP-002: Recipe Repositories?

In #673, there's a description of installing third-party packages and their dependencies locally, using a central "rez-recipes" git repository. While I realize REP-002 describes its work in an entirely "local-filesystem only" approach, I see some parallels between the "install a package and its dependencies from public rez package repositories" part of REP-002 and the "then a 'localization' repo unzips them onto local disk on demand" portion of the artifact repository.

Re: Anchors

Just leaving this here: this would be a big win :) . Is it possible that the anchor / runtime engine could be built as a parallel effort with or without the artifact repository? Or are the two features inseparable?
-
I took some time to read your proposal. I think it summarizes well the big picture of the multiple discussions we (the community in general) have been having in our different communication channels, and it's also more or less what I had in mind. Here are some general notes/questions (in no specific order):
As for the runtime engine, I have a feeling that it's going to be the most contentious part... For example:
Someone might argue that if the runtime were a Docker-like solution, then shells might not necessarily make sense. Someone might want their Docker image to be completely baked (in the sense that when executing it, no scripts/entrypoint should be executed).
-
In the "Artifact Repository", you say:
I think it would simplify things if the concept of repository/store and the concept of plugin were broken into two parts. For me, an artifact repository means a place where artifacts are stored. An artifact repository plugin is what would interface between rez and the runtime engine. In other words, saying "The artifact repository is able to take package definitions from a package repository" implies that the store itself would know about rez. I'm pretty sure that's not what you intended, but I might be wrong... Suggestion:
Also, which component would take care of "packaging" (making a zip, tar, etc.) package payloads? Would it be the artifact repository plugins?
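To make the suggested split concrete, here is a minimal sketch of what I mean. Every name here is hypothetical (none of this is real rez or storage API): the store half knows nothing about rez, and the plugin half adapts package definitions to store keys and would own the packaging step.

```python
class ArtifactStore:
    """The 'store' half: plain key/value storage that knows nothing
    about rez. In-memory here purely for illustration."""

    def __init__(self):
        self._blobs = {}

    def put(self, key, data):
        self._blobs[key] = data

    def get(self, key):
        return self._blobs[key]


class ArtifactRepositoryPlugin:
    """The rez-facing half: derives store keys from package definitions,
    and would own the 'packaging' (zip/tar) of payloads."""

    def __init__(self, store):
        self.store = store

    def _key(self, package_def):
        return f"{package_def['name']}/{package_def['version']}"

    def publish(self, package_def, payload):
        # in reality: tar/zip the payload directory before storing it
        self.store.put(self._key(package_def), payload)

    def fetch(self, package_def):
        return self.store.get(self._key(package_def))
```

With this split, swapping the backing store (S3, Perforce, shared disk) never touches the rez-facing plugin logic.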
-
I am new to the concept of union mount systems like …
-
Overview
This is a general discussion of the core modules of rez, how they interrelate with one another, and what might be improved.
Current System
Consider this diagram showing the current state of rez (we refer to core functionality only; e.g. we don't touch on anything build-related here):
Note that:
Here we see an overview of the components involved in generating a configured runtime in response to a user's request:
- the user runs rez-env, passing a list of package requests
- … (the ResolvedContext class).

Rez aims to be an extensible packaging system. However, from that POV, there are some problems with the current system:
- … (in the case of the filesystem repo type, they are stored within the package repo)

Proposed System
It should be possible to use rez to mix and match various different sources of package data, and to express those as a configured runtime in different ways. For example, it should be possible to:
A version of rez able to do these kinds of things might look like so:
As is done currently, the solver takes a user request, reads package definitions from a searchpath of package repositories, performs dependency resolution, and creates a context. However, we now introduce two new extensible objects to the system - an "artifact repository", and a "runtime engine".
Artifact Repository
The artifact repository is able to take package definitions from a package repository, and resolve them into some locally consumable resource - often simply a directory on disk containing the package payload, for example. "Local" here is a little ambiguous; what I really mean is a resource accessible to the user, so that the runtime can be constructed. For example, you may have an artifact repo that downloads package payloads from S3 and copies them into shared posix storage, where they're consumed by multiple users at your studio. A different artifact repo might unzip files from shared storage into directories directly on users' local disk. It could even make sense to chain artifact repos together - perhaps your S3 repo downloads zipped artifacts to shared storage, then a "localization" repo unzips them onto local disk on demand.
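The chaining idea above can be sketched as nested resolvers. This is a hedged illustration only - the interface, class names, and path layout are all invented, and the actual download/unzip steps are elided:

```python
from abc import ABC, abstractmethod


class ArtifactRepository(ABC):
    """Illustrative interface only; not the actual rez API."""

    @abstractmethod
    def resolve(self, name, version):
        """Return a locally consumable resource (e.g. a directory path)
        for the given package."""


class S3ArtifactRepository(ArtifactRepository):
    """Downloads zipped payloads from S3 into shared posix storage
    (the download itself is elided here)."""

    def __init__(self, bucket, shared_root):
        self.bucket = bucket
        self.shared_root = shared_root

    def resolve(self, name, version):
        # would fetch the zipped artifact from the bucket, then return
        # the shared-storage copy
        return f"{self.shared_root}/{name}-{version}.zip"


class LocalizingRepository(ArtifactRepository):
    """Chains over another artifact repo: unzips its artifacts onto
    local disk on demand."""

    def __init__(self, upstream, local_root):
        self.upstream = upstream
        self.local_root = local_root

    def resolve(self, name, version):
        archive = self.upstream.resolve(name, version)  # e.g. the shared zip
        # would extract `archive` under local_root; return the payload dir
        return f"{self.local_root}/{name}/{version}"
```

Because each repo only sees its upstream's resolved resource, the S3-to-shared-storage and shared-storage-to-local-disk steps compose without knowing about each other.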
It may often make sense to have different artifact repos associated with different package repos. For example, it's very convenient to install development packages to your standard ~/packages package repo, and to store your package payloads there too; however, you may want your released packages to have their definitions written to postgres, and their payloads to S3. Rez should be able to support these two pairs of package/artifact repos at the same time - it shouldn't matter where a package was sourced from in your runtime (clearly there are limits - you wouldn't be able to mix disk- and container-based payloads within the one runtime, for example).

As a thought experiment, perhaps we might support defining pairs of package and artifact repos with a new syntax like so:
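Purely as an illustrative sketch of such a thought experiment - the "pkgrepo+artifactrepo" pairing syntax and every value below are invented, not real rez settings - a rezconfig-style fragment might look like:

```python
# Hypothetical rezconfig fragment. Each entry names a package repo,
# optionally paired (via an invented "+" separator) with the artifact
# repo that serves its payloads.
packages_path = [
    # dev packages: definitions and payloads both under ~/packages
    "filesystem@~/packages",
    # released packages: definitions in postgres, payloads in S3
    "postgres@pg.example.com:5432/packages+s3@studio-rez-artifacts",
]
```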
Runtime Engine
The runtime engine is responsible for taking the context, package and artifact repos, and a shell implementation, and using these to construct a configured runtime that's consumable by the user. Note that the shell is required because part of the process of generating a runtime always involves taking packages' commands and converting them into equivalent shell code which, when sourced, configures the runtime.
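A minimal sketch of that commands-to-shell step, heavily simplified: the function name and dict shapes are invented for illustration (real rez commands are Python functions, not shell templates), but the core idea - expand {root} with whatever path the artifact repo resolved, then emit sourceable shell code - is the same:

```python
def emit_shell_code(resolved_packages, payload_roots):
    """Hypothetical sketch, not rez's real API: turn each package's
    shell-flavoured command templates into one sourceable script,
    expanding {root} to the artifact repo's resolved location."""
    lines = ["#!/bin/sh"]
    for pkg in resolved_packages:
        root = payload_roots[pkg["name"]]
        for cmd in pkg["commands"]:
            lines.append(cmd.format(root=root))
    return "\n".join(lines)
```

An overlayfs-based engine could reuse the same expansion, but point every {root} at a single merged mount point instead of per-package paths.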
Different implementations might include:

- … ({root} in commands funcs) to the corresponding disk locations as resolved by the appropriate artifact repo.

You can see how, due to the decoupling of artifact repo and runtime engine, we might have an overlayfs-based runtime that's using packages that have been downloaded from S3, or were already available on local disk, or were some combination of the two.
One interesting detail is how package payloads might be combined/overlaid in runtimes which support that. An idea I've had for some time is a new "anchor" feature, where packages can define generic named "anchor points" within their payloads. Two packages with anchor points of the same name will have those parts of their payloads overlaid in the same location. It would then be common for python packages to define a "python" anchor at {root}/python. Thus the various package commands like env.PYTHONPATH.append('{root}/python') would collapse into a single path appended to PYTHONPATH. This approach would give us a way to express how packages should be combined in a generic way (not specific to a runtime engine implementation), whilst at the same time not imposing any kind of strict schema on a package's payload structure.