
Ratify v2 architecture proposal #1942

Open
1 task done
binbin-li opened this issue Nov 20, 2024 · 1 comment · May be fixed by #1962
Labels
enhancement New feature or request triage Needs investigation

Comments

@binbin-li
Collaborator

binbin-li commented Nov 20, 2024

What would you like to be added?

Background

Ratify reached its first major release over a year ago and has since been widely adopted by users at scale. However, there are some known limitations in the current design and implementation of Ratify v1 that make it difficult to support new user scenarios and features and to improve performance. These constraints also make it harder to attract new users and contributors. This discussion compares the current architecture of Ratify v1 with the proposed v2 design to gather feedback and insights on the new approach.

Current design

The current architecture of Ratify v1:

[Diagram: Ratify v1 architecture]

From the above diagram, we can see a few limitations:

  1. Ratify supports only CLI and K8s scenarios, and in practice it is well designed and implemented only for K8s; there are feature gaps between the CLI and K8s implementations. To support more scenarios, such as containerd or Docker plugins, or use as a library by downstream services, Ratify needs to be more extensible.
  2. Ratify was designed to validate the security metadata of any given artifact, but it currently focuses mainly on K8s scenarios where images are verified.
  3. The plugin framework was designed to separate built-in and external plugins. Built-in plugins run in the same process as Ratify, while each external plugin executes in a new subprocess. As a result, external plugins cannot share the in-memory cache with the main process, which may result in performance degradation, data inconsistency, race conditions, and security vulnerabilities.
  4. The built-in plugins and the authentication for different cloud providers are part of the main Ratify repository, which introduces a significant number of dependencies. This will become even more pronounced as additional cloud providers and new plugin implementations are added in the future.
  5. Converting config.json from the CLI to Kubernetes CRs is not straightforward, increasing the learning cost for new users.
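Limitation 3 is easiest to see from how v1-style external plugins are invoked. The sketch below is not actual Ratify code ("echo" stands in for a real plugin binary); it illustrates the subprocess model, where every call crosses a process boundary and the child has no access to the parent's in-memory cache:

```go
package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

// runExternalPlugin mimics the v1 external-plugin model: the main process
// spawns a separate binary and exchanges data over stdin/stdout. The child
// process cannot share the parent's in-memory cache, so every invocation
// pays process-startup and serialization costs.
func runExternalPlugin(input string) (string, error) {
	// A real plugin would be an external verifier/store binary on disk.
	cmd := exec.Command("echo", input)
	var out bytes.Buffer
	cmd.Stdout = &out
	if err := cmd.Run(); err != nil {
		return "", err
	}
	return out.String(), nil
}

func main() {
	result, err := runExternalPlugin(`{"subject":"registry.example/app:v1"}`)
	if err != nil {
		panic(err)
	}
	fmt.Print(result)
}
```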

Proposed design

To address the above limitations, one possible v2 architecture design is shown below:

    graph TD
    %% Subgraph for inputs
    subgraph Inputs [Inputs]
        direction TB
        C[User Input per Request]
        D["System Input (config/CRD)"]
    end

    %% Subgraph for core logic
    subgraph Core [Core Logic]
        direction TB
        E[core]
    end
    
    %% Subgraph for middleware
    subgraph Middleware [Middlewares]
        B[Driver/Entrypoint]
        F[Output Render]
    end
    
    A[Customized App]

    %% Relationships
    Inputs --> B
    B --> Core
    A -.- B
    A --> Core
    Core --> F

One example implementation would look like below. Note: text in red indicates a repository under the Ratify org.

[Diagram: example v2 implementation with per-repository breakdown]

The key design principles are as follows:

  1. Extract the Ratify core library (ratify-go) to focus solely on its primary functionality: validating the security metadata of artifacts efficiently. Consequently, the mutation API may be removed from the core library. The Ratify core will define the required interfaces, including Verifier, Store, and PolicyEnforcer, but will contain no implementations, minimizing the dependencies of the core library.
  2. To implement different plugins for each interface, we can create a monorepo per interface (e.g. ratify-verifier-go), so that plugin dependencies reside only in the plugin repos. Since the Store and PolicyEnforcer interfaces have limited implementations and far fewer dependencies, we can keep them in the ratify-go repo for now.
  3. Different entrypoints may each need a separate repo, e.g. ratify-cli for the CLI use case, ratify continuing to serve K8s, and ratify-service behaving as a standalone service.
  4. Each entrypoint repo owns the responsibility of selecting the appropriate plugin implementations when building its image or binaries. By injecting dependencies at build time, entrypoint repos do NOT take on new dependencies from those plugin implementations. Additionally, plugins run in the same process as the Ratify main process for the best performance and security.
  5. The configuration CRD needs to be redesigned to allow seamless conversion to and from config.json for other use cases.
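The dependency-injection example below covers the Verifier and Store interfaces; as a complement, here is a sketch of what the PolicyEnforcer interface mentioned in principle 1 might look like. The names and shapes are illustrative assumptions, not the actual ratify-go API:

```go
package main

import "fmt"

// VerifyResult is a hypothetical per-verifier report.
type VerifyResult struct {
	VerifierName string
	Success      bool
}

// PolicyEnforcer decides whether a set of verifier reports passes overall.
// In the proposed design, the core library would define only this interface;
// concrete policy implementations would live outside the core.
type PolicyEnforcer interface {
	Evaluate(results []VerifyResult) bool
}

// allPassPolicy is a trivial sample policy: every verifier must succeed.
type allPassPolicy struct{}

func (allPassPolicy) Evaluate(results []VerifyResult) bool {
	if len(results) == 0 {
		return false
	}
	for _, r := range results {
		if !r.Success {
			return false
		}
	}
	return true
}

func main() {
	var p PolicyEnforcer = allPassPolicy{}
	fmt.Println(p.Evaluate([]VerifyResult{
		{VerifierName: "notation", Success: true},
		{VerifierName: "cosign", Success: true},
	})) // true
}
```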

Example of dependency injection at build time:

// core
package core

import "fmt"

type ReferrerStore interface {
	ListReferrers() string
}
type ReferenceVerifier interface {
	VerifyReference(store ReferrerStore) bool
}
type Executor struct {
	verifiers map[string]ReferenceVerifier
	stores    map[string]ReferrerStore
}
var verifierTypes map[string]ReferenceVerifier
var storeTypes map[string]ReferrerStore

func RegisterStore(name string, store ReferrerStore) {
	fmt.Printf("Registering store %s\n", name)
	if storeTypes == nil {
		storeTypes = make(map[string]ReferrerStore)
	}
	storeTypes[name] = store
}

func RegisterVerifier(name string, verifier ReferenceVerifier) {
	fmt.Printf("Registering verifier %s\n", name)
	if verifierTypes == nil {
		verifierTypes = make(map[string]ReferenceVerifier)
	}
	verifierTypes[name] = verifier
}

func (r *Executor) Verify() {
	r.verifiers["cosign"].VerifyReference(r.stores["oras"])
}

func NewExecutor() *Executor {
	return &Executor{
		verifiers: verifierTypes,
		stores:    storeTypes,
	}
}
/*----------------------------------------------------------*/
// notation verifier
package notation

import (
	"fmt"

	"github.com/binbin-li/ratify-test/core"
)

type Verifier struct{}

func (v *Verifier) VerifyReference(store core.ReferrerStore) bool {
	fmt.Println("Notation verifier")
	fmt.Println(store.ListReferrers())
	return true
}

func init() {
	core.RegisterVerifier("notation", &Verifier{})
}
/*----------------------------------------------------------*/
// cosign verifier
package cosign

import (
	"fmt"

	"github.com/binbin-li/ratify-test/core"
)

type Verifier struct{}

func (v *Verifier) VerifyReference(store core.ReferrerStore) bool {
	fmt.Println("Cosign verifier")
	fmt.Println(store.ListReferrers())
	return true
}

func init() {
	core.RegisterVerifier("cosign", &Verifier{})
}

/*----------------------------------------------------------*/
// oras store
package oras

import (
	"fmt"

	"github.com/binbin-li/ratify-test/core"
)

type Store struct{}

func (s *Store) ListReferrers() string {
	fmt.Println("oras")
	return "oras"
}

func init() {
	core.RegisterStore("oras", &Store{})
}

/*----------------------------------------------------------*/
// Ratify main application
package main

import (
	"github.com/binbin-li/ratify-test/core"
)

func main() {
	manager := core.NewExecutor()
	manager.Verify()
}
# Dockerfile
# Use the official Golang image as the base image
FROM golang:1.21-alpine

# Set the working directory inside the container
WORKDIR /app

# Copy the Go modules files
COPY go.mod ./

# Download the Go modules
# RUN go mod download

# Copy the source code
COPY . .

# Add appropriate verifier/store implementation
RUN sed -i '2i\import _ "github.com/binbin-li/ratify-test/cosign"' main.go && \
    sed -i '2i\import _ "github.com/binbin-li/ratify-test/notation"' main.go && \
    sed -i '2i\import _ "github.com/binbin-li/ratify-test/oras"' main.go

RUN go mod tidy

# Build the Go application
RUN go build -o main .

# Command to run the executable
CMD ["./main"]

In the above example, the only dependency of the Ratify repo is the Ratify core library, and users can select the appropriate interface implementations based on their needs.

Proposed Repo Layout

  • Keep the Ratify repo for K8s scenario as an external data provider for Gatekeeper.
  • ratify-go (serves as the Ratify core library)
  • ratify-verifier-go (monorepo for different implementations of verifiers)
  • ratify-cli (for CLI user scenario)
  • more repos for other user scenarios in the future.

Proposed milestones

  • Alpha.1: Ratify core library
  • Alpha.2: Create a v2 branch in the ratify repo and migrate the Executor to the new Ratify core.
  • Beta.1: Implement the ORAS store and Notation verifier
  • Beta.2: Improve performance, fix bugs, and add more features.
    • Oras store cache
    • Cosign verifier
  • RC.1: Add missing features from v1
  • GA

Anything else you would like to add?

No response

Are you willing to submit PRs to contribute to this feature?

  • Yes, I am willing to implement it.
@binbin-li binbin-li added enhancement New feature or request triage Needs investigation labels Nov 20, 2024
@binbin-li binbin-li changed the title Ratify v2 proposal Ratify v2 architecture proposal Nov 20, 2024
@FeynmanZhou
Collaborator

FeynmanZhou commented Nov 28, 2024

@binbin-li This new design looks promising to me, as it decouples the Ratify core from a complex monolithic structure into multiple "microservice-like" components. It makes the Ratify core architecture lightweight with fewer dependencies, which makes it much easier for others to contribute or to integrate Ratify with their systems.

Can you please reflect each repo in the new architecture diagram? That would add more clarity to the new architecture and each repo's functionality, so that a new developer/contributor knows where to start in Ratify.

@binbin-li binbin-li linked a pull request Dec 2, 2024 that will close this issue