Support multiple workers in a single Miniflare instance #7251

Open · wants to merge 5 commits into base: main

Conversation

@penalosa (Contributor) commented Nov 13, 2024

This PR allows you to run multiple workers in a single Miniflare instance, enabling cross-service RPC without the complexities of the dev registry. To try it out, pass multiple -c flags to Wrangler, e.g. wrangler dev -c wrangler.toml -c ../other-worker/wrangler.toml. The first config is treated as the primary worker and is exposed over HTTP as usual (localhost:8787); the rest are treated as secondary and are only accessible via a service binding from the primary worker.
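For context, this builds on Miniflare's existing `workers` option, where each worker has a name, service bindings can reference other workers by that name, and the first worker in the array is the one that receives incoming HTTP traffic. A minimal sketch of that underlying API (worker names and inline scripts here are illustrative, not taken from this PR):

```ts
import { Miniflare } from "miniflare";

// Two workers in one Miniflare instance; "primary" reaches "other-worker"
// through a service binding rather than the dev registry.
const mf = new Miniflare({
	workers: [
		{
			name: "primary",
			modules: true,
			script: `export default {
				async fetch(request, env) {
					// Call the secondary worker over its service binding
					return env.OTHER.fetch(request);
				}
			}`,
			serviceBindings: { OTHER: "other-worker" },
		},
		{
			name: "other-worker",
			modules: true,
			script: `export default {
				async fetch() {
					return new Response("hello from other-worker");
				}
			}`,
		},
	],
});

const res = await mf.dispatchFetch("http://localhost/");
console.log(await res.text()); // "hello from other-worker"
await mf.dispose();
```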



  • Tests
    • TODO (before merge)
    • Tests included
    • Tests not necessary because:
  • E2E Tests CI Job required? (Use "e2e" label or ask maintainer to run separately)
    • I don't know
    • Required
    • Not required because:
  • Public documentation
    • TODO (before merge)
    • Cloudflare docs PR(s):
    • Documentation not necessary because:

@penalosa requested a review from a team as a code owner on November 13, 2024
changeset-bot commented Nov 13, 2024

⚠️ No Changeset found

Latest commit: d632e70

Merging this PR will not cause a version bump for any packages. If these changes should not result in a new version, you're good to go. If these changes should result in a version bump, you need to add a changeset.

This PR includes no changesets

When changesets are added to this PR, you'll see the packages that this PR includes changesets for and the associated semver types


github-actions bot commented Nov 14, 2024

A wrangler prerelease is available for testing. You can install this latest build in your project with:

npm install --save-dev https://prerelease-registry.devprod.cloudflare.dev/workers-sdk/runs/11964785987/npm-package-wrangler-7251

You can reference the automatically updated head of this PR with:

npm install --save-dev https://prerelease-registry.devprod.cloudflare.dev/workers-sdk/prs/7251/npm-package-wrangler-7251

Or you can use npx with this latest build directly:

npx https://prerelease-registry.devprod.cloudflare.dev/workers-sdk/runs/11964785987/npm-package-wrangler-7251 dev path/to/script.js
Additional artifacts:
npx https://prerelease-registry.devprod.cloudflare.dev/workers-sdk/runs/11964785987/npm-package-create-cloudflare-7251 --no-auto-update
npm install https://prerelease-registry.devprod.cloudflare.dev/workers-sdk/runs/11964785987/npm-package-cloudflare-kv-asset-handler-7251
npm install https://prerelease-registry.devprod.cloudflare.dev/workers-sdk/runs/11964785987/npm-package-miniflare-7251
npm install https://prerelease-registry.devprod.cloudflare.dev/workers-sdk/runs/11964785987/npm-package-cloudflare-pages-shared-7251
npm install https://prerelease-registry.devprod.cloudflare.dev/workers-sdk/runs/11964785987/npm-package-cloudflare-vitest-pool-workers-7251
npm install https://prerelease-registry.devprod.cloudflare.dev/workers-sdk/runs/11964785987/npm-package-cloudflare-workers-editor-shared-7251
npm install https://prerelease-registry.devprod.cloudflare.dev/workers-sdk/runs/11964785987/npm-package-cloudflare-workers-shared-7251
npm install https://prerelease-registry.devprod.cloudflare.dev/workers-sdk/runs/11964785987/npm-package-cloudflare-workflows-shared-7251

Note that these links will no longer work once the GitHub Actions artifact expires.


wrangler@&lt;version&gt; includes the following runtime dependencies:

| Package | Constraint | Resolved |
| --- | --- | --- |
| miniflare | workspace:* | 3.20241106.1 |
| workerd | 1.20241106.1 | 1.20241106.1 |
| workerd --version | 1.20241106.1 | 2024-11-06 |

Please ensure constraints are pinned, and miniflare/workerd minor versions match.

@penalosa added the e2e (Run e2e tests on a PR) label on Nov 14, 2024
@irvinebroque (Contributor) commented:
What would we need in order for this to work without the -c flag? Something in wrangler.toml that told you the directory where the Worker that you're bound to lives?

(not saying we need to solve for it rn, just curious what the config decision we're facing is)

@penalosa (Contributor, Author) commented Nov 18, 2024

> What would we need in order for this to work without the -c flag? Something in wrangler.toml that told you the directory where the Worker that you're bound to lives?

Yeah, something like directory = "../other-worker/wrangler.toml" in a service binding definition would probably be enough. What I think would also be quite cool is if Wrangler detected monorepo configs (NPM/PNPM workspaces, etc.) and managed to wire up workers automatically based on their service binding graph, so that running wrangler dev in the root of a monorepo with no additional config would be enough to get this working.
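To make the shape concrete, a hypothetical wrangler.toml along these lines (the `directory` key does not exist today and is only the idea being discussed; the rest is standard service-binding syntax):

```toml
name = "primary-worker"
main = "src/index.ts"

[[services]]
binding = "OTHER_WORKER"
service = "other-worker"
# Hypothetical field: where to find the bound worker's config for local dev
directory = "../other-worker/wrangler.toml"
```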

@andyjessop (Contributor) commented:
The problem with adding a new field to services in wrangler.toml is that you're then encoding the idea of a primary worker, whereas you might want to start up one of the bound workers as the primary to work with it directly. It might also be useful to think about a true multi-worker config, where there is no single primary worker and you can start any of the defined workers as the primary. E.g.:

```jsonc
{
  ...
  workers: [
    {
      name: "a",
      main: "../a/a.ts",
      bindings: { c: { type: "worker", name: "c" } },
    },
    {
      name: "b",
      main: "../b/b.ts",
      bindings: { c: { type: "worker", name: "c" } },
    },
    {
      name: "c",
      main: "../c/c.ts",
    },
  ]
}
```
```sh
# starts worker "a" at http://localhost:8787 with "c" accessible only through RPC
npx wrangler dev --worker a

# starts worker "b" at http://localhost:8787 with "c" accessible only through RPC
npx wrangler dev --worker b

# starts worker "c" at http://localhost:8787
npx wrangler dev --worker c
```

Multi-worker is a new paradigm for Wrangler, so I would worry about bolting it onto the single-worker config in case we're painting ourselves into a corner. A new-style config could be launched as a minor release and would force us to tidy up the unwieldy config handling in the code base.

(I don't say the above to derail the PR; @penalosa's solution is a neat and flexible one that doesn't tie us to any future arrangement.)

@andyjessop (Contributor) left a review comment:

This is amazing work - very nice observation that we essentially already have this functionality in Miniflare and it just needed hooking up 👍

Let's start the conversation on whether or not users will still want/need workers to be accessed from distinct endpoints, or if the dev registry is something we could remove entirely (perhaps in v4?).

Review comment (Contributor) on the diff:
Might be good to test some failure cases here.

  • What if a bound worker fails - do calls return the right response?
  • What if the primary worker fails - do the bound workers shut down correctly?
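For the first bullet above, a rough sketch of what such a test could look like, using the Miniflare API directly rather than this PR's test harness; the expectation that an uncaught error in the bound worker surfaces as a 500 from the primary is an assumption worth verifying:

```ts
import assert from "node:assert";
import { Miniflare } from "miniflare";

const mf = new Miniflare({
	workers: [
		{
			name: "primary",
			modules: true,
			script: `export default {
				async fetch(request, env) {
					// Forward to the bound worker, which always throws
					return env.BOUND.fetch(request);
				}
			}`,
			serviceBindings: { BOUND: "bound" },
		},
		{
			name: "bound",
			modules: true,
			script: `export default {
				async fetch() { throw new Error("boom"); }
			}`,
		},
	],
});

const res = await mf.dispatchFetch("http://localhost/");
assert.strictEqual(res.status, 500); // assumed: the uncaught error becomes a 500 at the primary
await mf.dispose();
```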

Context from the diff:

```ts
port: parseInt(url.port),
})
);
if (Array.isArray(configPath)) {
```

Review comment (Contributor):
I feel like we could clean things up if we just treated all scenarios as multi-worker. So instead of having a separate MultiworkerRuntimeController, just modify the LocalRuntimeController to handle multi-workers, and then cast the configPath to an array in the yargs definition.

```ts
const configs = Array.isArray(configPath) ? configPath : [configPath];

// Now no need to handle the single worker case. LocalRuntimeController assumes multi.
const runtime = new LocalRuntimeController(configs.length);
```

Or is there a reason to keep the single worker code path?

Reply from @penalosa (Contributor, Author):
I went down this path in Merge Local & Multi controllers, but I'm starting to get a bit concerned about the potential for disrupting the normal dev flow, and I think it might end up being more complexity than it's worth. The single-worker dev flow has two runtimes, remote and local, and the remote runtime doesn't (and never will) work for the multiworker use case. Because of that there are some differences in how these flows are set up (as well as disabling the dev registry in the multiworker flow, which also changes how DOs and entrypoints are registered). It's not that much duplicated code, and I think it's actually clearer to keep them separate and easier to reason about. What do you think?

@zhihengGet commented Nov 23, 2024

Does this support Pages? I tried wrangler pages dev .svelte/cloudfare -c ../packages/wranger.toml ... but got: "Pages does not support custom paths for the wrangler.toml configuration file".

Labels: e2e (Run e2e tests on a PR)
Project status: Untriaged
4 participants