
Direct VPC egress vs Serverless VPC Access for Cloud Run: our default

We default to Direct VPC egress for Cloud Run because it is the cleaner networking shape: fewer moving parts, no connector resource, and costs that scale with the service instead of beside it.

By Ivan Richter

Last updated: Mar 25, 2026

4 min read


The default

We default to Direct VPC egress for Cloud Run.

At this point, it’s the cleaner Cloud Run networking shape. It keeps the service attached to the VPC without dragging a connector resource into every deployment by default. Fewer moving parts. Fewer things to size, explain, secure, and pay for on the side.

A connector isn’t evil. It just isn’t where we want to start if Cloud Run can already do the simpler thing directly.
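To make the shape concrete, here is what the direct version looks like as a deploy command. This is a sketch: the service, project, network, and subnet names are placeholders, and you should check the flags against the current gcloud reference for your version.

```shell
# Deploy a Cloud Run service with Direct VPC egress: no connector
# resource, just a network and subnet attached to the service itself.
# All names below are placeholders.
gcloud run deploy my-service \
  --image=us-docker.pkg.dev/my-project/my-repo/my-service:latest \
  --region=europe-west1 \
  --network=my-vpc \
  --subnet=my-subnet \
  --vpc-egress=private-ranges-only
```

The whole VPC story lives on the service. There is nothing else to create first.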

Why the simpler shape wins

The main advantage isn’t philosophical. It’s operational.

With Direct VPC egress, the networking story stays closer to the service. There is no separate connector sitting beside it with its own lifecycle and its own cost shape. The service talks to the VPC, and the infrastructure diagram stays closer to what’s actually running.

That also keeps the cost model cleaner. With connectors, you aren’t just paying for the service. You’re also carrying connector capacity as its own thing. Direct VPC egress removes that extra layer, which is a better default for small teams and small platforms.
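For contrast, here is a sketch of the connector shape: a separately created, separately sized resource that the service then points at. Names and the IP range are placeholders.

```shell
# The connector is its own resource with its own instance sizing,
# provisioned beside the service rather than as part of it.
gcloud compute networks vpc-access connectors create my-connector \
  --region=europe-west1 \
  --network=my-vpc \
  --range=10.8.0.0/28 \
  --min-instances=2 \
  --max-instances=3

# The service then references the connector instead of the VPC directly.
gcloud run deploy my-service \
  --image=us-docker.pkg.dev/my-project/my-repo/my-service:latest \
  --region=europe-west1 \
  --vpc-connector=my-connector
```

That `--min-instances=2` is the cost shape in one line: the connector's capacity runs whether or not the service is doing anything.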

Security and ownership get cleaner too

The security story gets cleaner for the same reason.

With Direct VPC egress, network tags can be attached to the Cloud Run workload itself instead of being pushed through connector-level infrastructure. That makes firewall intent easier to follow because the policy sits closer to the thing that actually owns the traffic.

It doesn’t magically simplify network design. You still need to decide what should reach what. But it’s a cleaner place to hang that decision than a shared connector people stop thinking about once it’s in the graph, especially once internal-only ingress, private reachability, egress mode, and VPC routing all start depending on the same boundary being set cleanly.
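As a sketch of what "the policy sits closer to the workload" looks like in practice: tag the service, then scope a firewall rule to that tag. The service, tag, and port are placeholder assumptions.

```shell
# Attach a network tag to the Cloud Run service itself.
# With a connector, the tag would live on connector infrastructure instead.
gcloud run services update my-service \
  --region=europe-west1 \
  --network-tags=run-internal-db

# A firewall rule that names the workload's tag directly, so the
# intent ("this service may reach the database") is legible in place.
gcloud compute firewall-rules create allow-run-to-db \
  --network=my-vpc \
  --direction=INGRESS \
  --action=ALLOW \
  --rules=tcp:5432 \
  --source-tags=run-internal-db \
  --target-tags=db
```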

The caveats are real

This isn’t a “Direct VPC perfect, connectors bad” argument. Direct VPC egress has real caveats, and they’re worth taking seriously. Startup can be awkward. Connectivity to the egress destination can take a while to come up on a fresh instance. Throughput is capped per instance. There are quotas on how many instances can use Direct VPC egress. Networking maintenance can still break connections, which means client behavior needs to tolerate resets instead of acting surprised every time infrastructure behaves like infrastructure.

So the win here isn’t that you get to stop thinking. The win is that the default shape is simpler while you’re thinking.

Subnets stop being background detail

Cloud Run allocates IP addresses from the subnet you attach. That means subnet size isn’t decorative anymore. If the service scales up, rolls to a new revision, or uses jobs aggressively, IP consumption becomes part of whether the service can start cleanly. At that point, “network config” isn’t separate from runtime behavior. It’s runtime behavior.

That’s one of the more useful side effects of Direct VPC egress. It forces the network boundary to be honest. If the subnet is too small or the IP plan is sloppy, the platform tells you directly instead of hiding the problem behind another resource.
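A back-of-the-envelope capacity check makes the point. GCP reserves four addresses in every subnet; the doubling factor for revision rollouts below is an assumption for illustration, so check the current Cloud Run documentation for the real consumption model before sizing anything.

```shell
# Rough subnet capacity check for Direct VPC egress.
# reserved=4: GCP reserves four addresses per subnet.
# rollout_factor=2: assumes old and new revisions briefly
# hold IPs at the same time during a rollout (illustrative).
prefix=26
reserved=4
rollout_factor=2

total=$(( 2 ** (32 - prefix) ))        # addresses in the subnet
usable=$(( total - reserved ))         # minus GCP-reserved addresses
max_instances=$(( usable / rollout_factor ))

echo "A /$prefix subnet has $usable usable IPs;"
echo "plan for roughly $max_instances concurrent instances during a rollout."
```

Run against a /26 (the documented minimum for Direct VPC egress), the arithmetic makes it obvious why "the service scaled up" and "the service can't get an IP" are the same conversation.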

The egress mode still matters

Direct VPC egress doesn’t remove the actual routing decision.

You still need to choose whether the service should send only private ranges through the VPC or send all traffic through it. That isn’t console trivia. It changes what the service depends on, what paths are private, and where failure or latency can show up.
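The two modes are one flag apart, which is exactly why the decision is easy to make carelessly. Service and region names here are placeholders.

```shell
# Only private-range destinations route through the VPC;
# public egress leaves Google's network directly.
gcloud run services update my-service \
  --region=europe-west1 \
  --vpc-egress=private-ranges-only

# All egress routes through the VPC, e.g. to NAT or inspect
# everything -- which also makes the VPC a dependency for
# every outbound call the service makes.
gcloud run services update my-service \
  --region=europe-west1 \
  --vpc-egress=all-traffic
```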

Why this is a good default for small teams

For SME internal platforms, the default should reduce platform drag.

That’s why this fits naturally with Cloud Run as the default. If the service can live comfortably inside the Cloud Run model, the VPC story should feel like part of that same low-ownership runtime. It shouldn’t turn into a side quest in connector management before the workload has earned that complexity.

Direct VPC egress gets us closer to that shape.

When the default stops being enough

Sometimes the surrounding system stops being simple enough for the simplest shape.

Maybe private networking assumptions are spreading across a lot of services. Maybe service-to-service topology is getting denser. Maybe the network design wants more cluster-shaped constructs, more involved east-west traffic, or a broader container estate where Cloud Run is no longer the whole picture. In that kind of setup, GKE Autopilot often becomes the cleaner fit. Not because Direct VPC egress failed, but because the system around it stopped being mostly Cloud Run-shaped.

The point

We default to Direct VPC egress because it’s the cleaner Cloud Run networking shape.

It removes connector infrastructure from the normal path, keeps costs and security controls closer to the service, and lowers platform drag. If the caveats matter more than the simplicity, we can make a different choice. Until then, the default should stay simple.
