Dynamic Workflows: Enabling Per-Tenant Durable Execution on Cloudflare
Cloudflare's platform has evolved from a direct-to-developer service to a powerful multi-tenant ecosystem where platforms can enable their customers to deploy code dynamically. The latest advancement, Dynamic Workflows, bridges the gap between durable execution and dynamic deployment, allowing each tenant, agent, or session to have its own workflow class. This article answers key questions about this new capability, its predecessors, and the problems it solves.
What Is Dynamic Workflows and Why Was It Created?
Dynamic Workflows is a new feature from Cloudflare that combines the durability of Workflows with the flexibility of dynamic deployment. Previously, Workflows required that the workflow code be part of your deployment – a single class bound in your wrangler.jsonc. This worked well for traditional applications where you own all the code. However, many modern platforms let customers ship their own logic: an app platform where AI writes TypeScript for each tenant, for example, or a CI/CD product where each repository defines its own pipeline. In these cases, there is no single workflow class. Dynamic Workflows solves this by letting you hand the runtime your workflow code at runtime and get back an isolated, sandboxed Worker with its own durable execution engine that follows the tenant.

How Did Cloudflare's Platform Evolve to Need Dynamic Execution?
When Cloudflare first launched Workers eight years ago, it was a direct-to-developers platform. Over time, the ecosystem expanded so that platforms could build on Workers and also enable their customers to ship code through multi-tenant applications. Now we see use cases like AI-generated implementations, multi-tenant SaaS where each customer's business logic is TypeScript the platform has never seen before, agents that write and run their own tools, and CI/CD products where every repo defines its own pipeline. This evolution created a need for primitives that allow runtime code injection, isolated storage, and per-tenant source control. The Dynamic Workers open beta gave platforms a clean compute primitive, followed by Durable Object Facets for storage and Artifacts for source control. Now Dynamic Workflows extends this to durable execution.
What Are the Key Components That Preceded Dynamic Workflows?
Three key components set the stage. First, Dynamic Workers (open beta last month) gave platforms a primitive for compute: hand the Workers runtime some code at runtime and get an isolated, sandboxed Worker on the same machine in single-digit milliseconds. Second, Durable Object Facets extended the same idea to storage – each dynamically-loaded app can have its own SQLite database, spun up on demand, with the platform as a supervisor. Third, Artifacts applied the concept to source control: a Git-native, versioned filesystem that can be created by the tens of millions, one per agent, session, or tenant. These solved dynamic deployment for compute, storage, and source control. The missing piece was durable execution – the ability to run long-running processes that can wait, sleep, and resume across failures, tailored per tenant. That's where Dynamic Workflows comes in.
What Is the Gap Between Durable and Dynamic Execution That Dynamic Workflows Solves?
Cloudflare Workflows is a durable execution engine that turns a run(event, step) function into a program where every step survives failures, can sleep for hours or days, waits for external events, and resumes exactly where it left off. It's ideal for onboarding flows, video transcoding, multi-stage billing, and long-running agent loops. However, Workflows had a baked-in assumption: the workflow code is part of your deployment. Your wrangler.jsonc specifies one binding and one class per deploy. That works if you own all the code, but fails when you want to let customers ship their own workflows. For example, an app platform where AI writes TypeScript for each tenant – there's no single class to bind. Dynamic Workflows bridges this gap by allowing each tenant to have its own unique workflow class, dynamically loaded at runtime, while still gaining all the durability benefits of the Workflows engine.
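To make the step semantics concrete, here is a minimal plain-TypeScript sketch – not Cloudflare's implementation; the DurableRunner class and step names are invented for illustration – of why a run(event, step) program can resume exactly where it left off: each completed step is checkpointed, so a retry after a failure skips it.

```typescript
// Illustration only, not Cloudflare's engine: completed steps are
// checkpointed, so re-running the workflow after a failure skips every
// step that already succeeded.
class DurableRunner {
  private checkpoints = new Map<string, unknown>();
  executions: string[] = []; // names of steps whose bodies actually ran

  async do<T>(name: string, fn: () => Promise<T>): Promise<T> {
    if (this.checkpoints.has(name)) {
      return this.checkpoints.get(name) as T; // resume: skip completed step
    }
    const result = await fn();
    this.executions.push(name);
    this.checkpoints.set(name, result);
    return result;
  }
}

// A workflow in the run(event, step) style; the second step fails once.
let attempts = 0;
async function run(event: { base: number }, step: DurableRunner): Promise<number> {
  const fetched = await step.do("fetch", async () => event.base + 1);
  return step.do("flaky", async () => {
    attempts += 1;
    if (attempts === 1) throw new Error("transient failure");
    return fetched * 2;
  });
}

async function demo(): Promise<{ result: number; executions: string[] }> {
  const step = new DurableRunner();
  await run({ base: 20 }, step).catch(() => {}); // first attempt crashes mid-way
  const result = await run({ base: 20 }, step);  // retry resumes after "fetch"
  return { result, executions: step.executions };
}
```

Note that on the retry the "fetch" step body never runs again; only the failed step is re-executed, which is what lets real workflows sleep for days and survive restarts without repeating work.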

How Does Cloudflare Workflows Work and What Was Its Limitation?
Cloudflare Workflows is designed for durable execution. It wraps a function with steps that are automatically retried on failure, can sleep for extended periods, and resume from the last completed step. As of Workflows V2, it supports up to 50,000 concurrent instances and 300 new instances per second per account, making it suitable for the agentic era. The core mechanism is a binding in your deployment configuration that points to a specific workflow class. This works perfectly for traditional applications where you control the code. The limitation arises in multi-tenant scenarios: if you are building a platform that lets each customer define their own workflow logic, you cannot hardcode a single workflow class. Each customer's workflow is unique, created dynamically at runtime. Before Dynamic Workflows, you would have to manage separate deployments for each tenant, which is inefficient and complex.
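The static binding described above looks roughly like this in wrangler.jsonc (the app, binding, and class names here are placeholders):

```jsonc
{
  "name": "billing-app",
  "main": "src/index.ts",
  "compatibility_date": "2024-10-01",
  "workflows": [
    {
      // One binding, one class, fixed at deploy time -- the same
      // workflow code for every tenant until you redeploy.
      "name": "billing-workflow",
      "binding": "BILLING_WORKFLOW",
      "class_name": "BillingWorkflow"
    }
  ]
}
```

Because the class name is pinned in configuration, changing any tenant's workflow logic means shipping a new deployment – the constraint Dynamic Workflows removes.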
What Are Some Use Cases for Dynamic Workflows?
Dynamic Workflows unlocks several powerful scenarios. Consider an app platform where an AI agent writes TypeScript code for each tenant's unique business process – that code can now be transformed into a durable workflow that survives failures and runs long operations, with no manual deployment. In a CI/CD product, each repository can define its own pipeline as a workflow class, allowing per-repo customization without a separate deployment for every repository. For agent SDKs, each agent can write its own durable plan and execute it as a workflow, with the platform managing state and retries. Other use cases include multi-tenant SaaS platforms that need per-customer billing or onboarding flows, and any system where tenants bring their own logic that must run reliably over hours or days. Dynamic Workflows makes it possible to give every tenant a fully isolated, durable execution environment without sacrificing performance or developer experience.
How Does Dynamic Workflows Differ From the Previous Static Workflow Binding?
Previously, Workflows used a static binding: you declared one workflow class per deployment in your wrangler.jsonc. This meant that all instances of that workflow ran the same code. To have different logic for different tenants, you would need separate deployments or complex branching within the single class. Dynamic Workflows eliminates that constraint. Now, you can load workflow code at runtime, per tenant, per agent, or per request. The platform hands the runtime the code and gets back an isolated Worker with its own durable execution engine. The same Workflows engine underneath handles retries, sleeping, and resumption, but the code itself is dynamic. This is analogous to how Dynamic Workers gave per-tenant compute and Durable Object Facets gave per-tenant storage – Dynamic Workflows gives per-tenant durable execution, completing the trio of dynamic deployment primitives.
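The exact Dynamic Workflows API is not reproduced here; as an illustration only, this plain-TypeScript sketch – the Platform and TenantEngine names are invented – shows the shape of the idea: tenants register their own run(event, step) logic at runtime, and each run executes against its own isolated engine rather than a single class fixed at deploy time.

```typescript
// Illustration only, not the Cloudflare API: a platform that accepts
// per-tenant workflow logic at runtime and gives each run an isolated
// durable engine (here, just a private checkpoint store).
type Step = { do<T>(name: string, fn: () => Promise<T>): Promise<T> };
type TenantWorkflow = (event: unknown, step: Step) => Promise<unknown>;

class TenantEngine implements Step {
  private checkpoints = new Map<string, unknown>();
  async do<T>(name: string, fn: () => Promise<T>): Promise<T> {
    if (this.checkpoints.has(name)) return this.checkpoints.get(name) as T;
    const result = await fn();
    this.checkpoints.set(name, result);
    return result;
  }
}

class Platform {
  private workflows = new Map<string, TenantWorkflow>();

  // "Deploying" tenant code is a runtime operation, not a platform redeploy.
  register(tenant: string, workflow: TenantWorkflow): void {
    this.workflows.set(tenant, workflow);
  }

  async runFor(tenant: string, event: unknown): Promise<unknown> {
    const workflow = this.workflows.get(tenant);
    if (!workflow) throw new Error(`no workflow registered for ${tenant}`);
    return workflow(event, new TenantEngine()); // fresh isolated engine per run
  }
}

async function demo(): Promise<[unknown, unknown]> {
  const platform = new Platform();
  // Two tenants ship completely different logic to the same platform.
  platform.register("acme", async (e, step) =>
    step.do("double", async () => (e as number) * 2));
  platform.register("globex", async (e, step) =>
    step.do("greet", async () => `hello ${e}`));
  return [await platform.runFor("acme", 21), await platform.runFor("globex", "globex")];
}
```

The design point mirrors the article: the platform never branches on tenant logic inside one hardcoded class; it simply routes each run to whatever code that tenant registered, with isolation enforced by giving each run its own engine.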