How we structure a directory per environment in Pulumi
When we keep Pulumi environments separate, we make the environment boundary obvious in the filesystem and keep shared logic outside it.
The rule
When we use a directory per environment in Pulumi, we want the environment boundary to be obvious just from opening the repo.
Each environment gets its own directory or project entry point, with its own config, its own program surface, and its own small set of environment-specific decisions. Shared logic stays outside the environment directory. Real differences stay inside it.
The goal isn’t perfect reuse. It’s making basic questions easy to answer without mentally evaluating a pile of branching logic. What does dev deploy? What’s different in prd? What changed here? A good structure answers those questions directly.
What lives inside each environment
Each environment directory should contain only what is genuinely owned by that environment.
That usually means the Pulumi project entry point, the stack-specific config, and any small pieces of code that are truly different for that environment. If prd has stricter retention, a different sizing decision, or an extra integration that lower environments do not need, that difference should be visible where the environment is defined.
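As a sketch, the environment-specific decisions can be surfaced as plain values right where the environment is defined. The interface and field names below are illustrative assumptions, not an API from this repo:

```typescript
// Hypothetical shape for the values an environment entry point owns.
// All names and values here are illustrative assumptions.
interface EnvSettings {
  dbTier: string;              // sizing decision
  backupRetentionDays: number; // prd keeps backups longer
  enableAuditExport: boolean;  // extra integration only prd needs
}

// prd's differences are stated where prd is defined,
// not buried in shared code behind a conditional.
const prdSettings: EnvSettings = {
  dbTier: "db-custom-4-16384",
  backupRetentionDays: 30,
  enableAuditExport: true,
};
```

A reviewer opening the prd directory sees these values directly, instead of reconstructing them from flags threaded through shared helpers.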
What we don’t want is a fake environment directory that contains almost nothing except a wrapper around shared logic that nobody can understand without opening five other files. If the environment exists as a real boundary, the code should acknowledge that boundary honestly.
The environment directory should be able to tell its own story. Not the whole platform story, but enough that a reviewer can open it and understand what this environment is doing without a small archaeology project.
What stays shared
Shared code still exists. It just doesn’t own the environment boundary.
Reusable components, common defaults, helper functions, naming rules, and small infrastructure building blocks should live outside the environment directories in shared modules. That’s where preferring Pulumi actually pays off. We can centralize the parts that are truly common without pretending the environments themselves are identical.
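For instance, a shared naming rule might live in a file like `shared/naming.ts` as a small pure function every environment imports. The signature and the `<project>-<env>-<resource>` convention below are illustrative assumptions:

```typescript
// shared/naming.ts (sketch): one naming rule, used by every environment.
// The "<project>-<env>-<resource>" convention is an illustrative assumption.
export type Env = "dev" | "stg" | "prd";

export function resourceName(project: string, env: Env, resource: string): string {
  return `${project}-${env}-${resource}`;
}
```

The rule lives in exactly one place, but nothing about it decides what dev or prd deploys; that decision stays in the environment directories.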
The environment directory should say what gets deployed in that environment. Shared modules should say how common pieces are built. Once those two responsibilities get blurred together, the filesystem stops helping and the whole thing collapses back into hidden logic.
A simple test helps. If changing a shared module affects multiple environments, that should be expected. If understanding one environment requires tracing through a maze of shared files just to learn what makes it different, the structure has gone too far.
What we avoid
We avoid splitting environment behavior across too many places. If the difference between dev and prd is real, it should be easy to see where that difference is defined.
We also avoid creating a shared wrapper so early that every environment directory becomes a thin shell around the same program with a pile of flags. That usually means the directory boundary is decorative, not real. It is usually the same failure mode as abstracting repeated Pulumi code too early.
We avoid copying shared helpers into each environment. If a naming rule, resource pattern, or small component is genuinely common, it should be shared properly. Duplication is fine at the environment layer. Random copy-paste of common mechanics is just sloppy.
We also avoid forcing every environment to look equally “complete.” If a small system doesn’t need full dev, stg, and prd parity, the directory structure should not imply otherwise. Filesystem neatness isn’t a valid reason to pay for more infrastructure.
What this looks like in practice
A common shape looks like this:
```
infra/
  gcp/
    shared/
      naming.ts
      tags.ts
      network.ts
      database.ts
      app-service.ts
    project/
      dev/
        Pulumi.yaml
        Pulumi.dev.yaml
        index.ts
      stg/
        Pulumi.yaml
        Pulumi.stg.yaml
        index.ts
      prd/
        Pulumi.yaml
        Pulumi.prd.yaml
        index.ts
```

shared/ holds the common building blocks. project/dev, project/stg, and project/prd define what each environment actually does with them. If prd needs a larger database class, stricter backups, or an extra integration, that should be visible in project/prd/index.ts or its stack config. It should not be hidden behind a conditional buried deep in a shared helper unless that difference is truly part of a reusable rule.
Done well, this gives you environment entry points that are small, readable, and explicit. They import shared modules, pass environment-specific values, and make the few real decisions that belong at that layer.
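A prd entry point in this shape stays short. The sketch below is illustrative: the component and option names are hypothetical, and the shared builder is inlined as a local function so the example is self-contained (in a real repo it would be imported from a shared module such as `shared/database.ts`):

```typescript
// Stand-in for a shared builder that would normally be imported, e.g.
//   import { databaseSpec } from "../../shared/database";
// Inlined here only to keep the sketch self-contained.
interface DatabaseArgs {
  tier: string;
  retentionDays: number;
}
function databaseSpec(name: string, args: DatabaseArgs) {
  return { name, ...args }; // stands in for constructing a Pulumi component
}

// project/prd/index.ts (sketch): import shared pieces, pass prd's values,
// and make the few real decisions that belong at this layer.
const db = databaseSpec("app-db", {
  tier: "db-custom-4-16384", // prd sizing decision, visible here
  retentionDays: 30,         // stricter prd backup retention
});
```

The point is the shape, not the specifics: the entry point reads as a list of explicit prd decisions, with the mechanics living in shared modules.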
Why we use it this way
This structure makes review cheaper.
A reviewer can look at the environment directory and understand the scope of the change. A maintainer can open prd and see what is special about production. A refactor in shared code is easier to evaluate because the shared layer is actually reserved for common behavior.
It also keeps the path open to consolidate later. If the environments really are becoming structurally similar, the shared layer gets richer over time and the environment directories get thinner for honest reasons. That’s much easier than unwinding a premature shared program that started hiding real differences too early.
The point of a directory per environment isn’t to avoid reuse. It’s to keep the environment boundary real until the system has actually earned something more compressed.
Related patterns
What goes in Pulumi stack config and what doesn't
We use Pulumi stack config for environment-specific values, not as a hiding place for infrastructure logic.
When repeated Pulumi code earns abstraction and when it doesn't
We don't abstract repeated Pulumi code just because it shows up more than once. We do it when the shared shape is real, the behavior is stable enough to deserve a boundary, and the result is easier to read than the duplication it replaces.
How we decide between directory per environment and shared stacks in Pulumi
We do not force DRY across environments by default. We keep Pulumi environments separate until shared code, shared rules, and drift risk make consolidation cheaper than duplication.
Why we usually choose Pulumi over Terraform
Pulumi is our default when infrastructure starts behaving like software. Existing Terraform estates can still be the better decision when the migration cost is higher than the operational gain.