Ratify has reached its first major release and has since been widely adopted by users at scale. However, there are some known limitations in the current design and implementation of Ratify v1 that make it difficult to support more user scenarios and new features, or to improve performance. These constraints also make it harder to attract new users and contributors. This discussion compares the current architecture of Ratify v1 with the proposed v2 design to gather feedback and insights on the new approach.
Current design
The current architecture of Ratify v1:
From the above diagram, we can see a few limitations:
Ratify only supports the CLI and K8s scenarios, and in practice only the K8s scenario is well designed and implemented; there are feature gaps between the CLI and K8s implementations. To support more scenarios, such as containerd plugins, docker plugins, or use as a library by downstream services, we need to make Ratify more extensible.
Ratify was designed to validate the security metadata of any given artifact, but it currently focuses mainly on K8s scenarios where images are verified.
The plugin framework was designed to separate built-in and external plugins. Built-in plugins run in the same process as Ratify, while each external plugin is executed in a new subprocess. Therefore, external plugins cannot share the in-memory cache with the main process, which may result in performance degradation, data inconsistency, race conditions, and security vulnerabilities.
The built-in plugins and the authentication logic for different cloud providers are part of the main Ratify repository, which introduces a significant number of dependencies. This will become even more pronounced as additional cloud providers and new plugin implementations are added in the future.
Converting config.json from the CLI to Kubernetes CRs is not straightforward, which increases the learning curve for new users.
Proposed design
To address the above limitations, one possible v2 architecture is shown below:
```mermaid
graph TD
    %% Subgraph for inputs
    subgraph Inputs [Inputs]
        direction TB
        C[User Input per Request]
        D["System Input (config/CRD)"]
    end
    %% Subgraph for core logic
    subgraph Core [Core Logic]
        direction TB
        E[core]
    end
    %% Subgraph for middleware
    subgraph Middleware [Middlewares]
        B[Driver/Entrypoint]
        F[Output Render]
    end
    A[Customized App]
    %% Relationships
    Inputs --> B
    B --> Core
    A -.- B
    A --> Core
    Core --> F
```
One example implementation is shown below. Note: the text in red denotes a repository under the Ratify org.
The key design principles are as follows:
Extract the Ratify core library (ratify-go) to focus solely on its primary functionality: validating the security metadata of artifacts efficiently. Consequently, the mutation API may be removed from the core library. The Ratify core will define the required interfaces, including Verifier, Store, and PolicyEnforcer, but will contain no implementations, minimizing the dependencies of the core library.
To implement the plugins for each interface, we can create a monorepo per interface (e.g. ratify-verifier-go), so that plugin dependencies reside only in the plugin repos. Since the Store and PolicyEnforcer interfaces have few implementations and far fewer dependencies, we can keep them in the ratify-go repo for now.
Different entrypoints may each need a separate repo, e.g. ratify-cli for the CLI use case, ratify continuing to serve K8s, and ratify-service behaving as a standalone service.
Each entrypoint repo will be responsible for selecting the appropriate plugin implementations when building its images or binaries. By injecting dependencies at build time, entrypoint repos will NOT pick up new dependencies from those plugin implementations. Additionally, plugins will run in the same process as the Ratify main process to achieve the best performance and security.
The configuration CRD needs to be redesigned to allow seamless conversion to config.json for other use cases.
Example of dependency injection at build time:

```dockerfile
# Use the official Golang image as the base image
FROM golang:1.21-alpine

# Set the working directory inside the container
WORKDIR /app

# Copy the Go modules files
COPY go.mod ./

# Download the Go modules
# RUN go mod download

# Copy the source code
COPY . .

# Add appropriate verifier/store implementations
RUN sed -i '2i\import _ "github.com/binbin-li/ratify-test/cosign"' main.go && \
    sed -i '2i\import _ "github.com/binbin-li/ratify-test/notation"' main.go && \
    sed -i '2i\import _ "github.com/binbin-li/ratify-test/oras"' main.go

RUN go mod tidy

# Build the Go application
RUN go build -o main .

# Command to run the executable
CMD ["./main"]
```
In the above example, the only dependency of the Ratify repo is the Ratify core library, and users can select the appropriate interface implementations based on their needs.
Proposed Repo Layout
Keep the Ratify repo for K8s scenario as an external data provider for Gatekeeper.
ratify-go (serves as the Ratify core library)
ratify-verifier-go (monorepo for different implementations of verifiers)
ratify-cli (for CLI user scenario)
more repos for other user scenarios in the future.
Proposed milestones
Alpha.1: Ratify core library
Alpha.2: Create a v2 branch in the ratify repo and migrate the Executor to the new Ratify core.
Beta.1: Implement Oras store and Notation verifier
Beta.2: Improve performance, fix bugs, and add more features:
Oras store cache
Cosign verifier
RC.1: Add missing features from v1
GA
Anything else you would like to add?
No response
Are you willing to submit PRs to contribute to this feature?
Yes, I am willing to implement it.
@binbin-li This new design looks promising to me, as it decouples the Ratify core from a complex monolithic structure into multiple "microservice-like" components. It makes the Ratify core architecture lightweight with fewer dependencies, which makes it much easier for others to contribute or to integrate it with their own systems.
Can you please reflect each repo in the new architecture diagram? It would add clarity to the new Ratify architecture and each repo's functionality, and make it clear where a new developer/contributor should start.