How we decide between Cloud SQL connectors, Auth Proxy, and private IP
Cloud SQL connectors, the Auth Proxy, and private IP are not interchangeable secure connection options. They change identity, routing, deployment shape, and how much network plumbing the team actually owns.
Cloud SQL connectors, the Auth Proxy, and private IP are often discussed like three equivalent ways to get a secure database connection. That framing is too soft to be useful. They are not three badges on the same design. They move responsibility into different layers of the system. With connectors, more of the Cloud SQL-specific behavior sits close to the application runtime. With the Auth Proxy, that behavior is pushed into a companion process. With private IP, the network path carries much more of the meaning and the application sees a more ordinary database endpoint.
A connector problem sends debugging into runtime integration, credentials, token refresh, client support, or library behavior. A proxy problem sends debugging into one more process, one more hop, one more config surface, and one more place to ask whether startup or restart behavior drifted. A private IP problem sends it into VPC path, egress, DNS, firewalling, and whether the service was ever on the right network in the first place. Those are different failure modes, different runbooks, and different ownership choices. Treating them like a feature menu usually leads to a bad argument and a worse default.
For Cloud Run-first systems, that matters more than it first sounds. Cloud Run already has opinions about service identity, startup behavior, egress, and how much network plumbing the platform wants to carry. Database connectivity does not sit outside those choices. It gets tangled into them immediately.
Start with the layer you want to own
The cleanest way to decide is to state the boundary plainly.
If the service should carry more of the identity and connection setup logic inside the runtime, connectors deserve a serious look. If the application should keep speaking ordinary Postgres while a separate process handles Cloud SQL-specific setup, the Auth Proxy is still a real option. If the main control plane should be the network path and the application should see a plain host and port, private IP is usually the better frame.
| decision surface | connector | Auth Proxy | private IP |
| --- | --- | --- | --- |
| identity lives in | app / runtime | proxy + runtime | app + network |
| routing owned by | platform helper | network + proxy | VPC design |
| extra moving process | no | yes | no |
| client stays generic | no | mostly | yes |
| best fit when | IAM-first app | mixed tooling | private GCP path |

Most of the fuzzy debate goes away once that boundary is named plainly. “Private IP is more secure,” “connectors are easier,” and “the proxy is old school” are not precise enough to decide anything. The system should stay legible under pressure in one of three places: runtime, proxy, or network.
When connectors are the right fit
Connectors make the most sense when workload identity is part of the design rather than just an implementation detail. They keep Cloud SQL-specific connection setup close to the application, which is useful when the runtime is already deeply shaped by Cloud Run and the service estate wants service identity, ephemeral credentials, and connection establishment to stay near the code that is actually doing the work.
Connectors sit naturally beside IAM DB auth. If the service account is already the real trust boundary and the database login should follow that same identity model, connectors can produce a cleaner story than bolting the same idea on through scripts, sidecars, or a tangle of manually managed credentials.
INSTANCE_CONNECTION_NAME=project:region:db
DB_IAM_AUTH=true
DB_NAME=app
# app code uses the Cloud SQL connector for postgres

The application is no longer just a generic Postgres client. It is now using a provider-aware path, and that changes both portability and debugging. When something looks wrong, the first checks are connector support, runtime behavior, metadata access, token refresh, or how the client library integrates with the rest of the app. The complexity moved closer to the runtime. That is not automatically worse.
Moving complexity into the runtime usually pays off when the identity boundary is doing real work. It is much less attractive when the same codebase is expected to run across several environments with minimal provider-specific behavior, or when the application would be calmer if it could keep the database side boring and let the network carry more of the load.
Choose connectors because runtime identity really is the connection story, not because “managed” sounds nicer. Otherwise the codebase picks up more coupling than the design actually needed.
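If the runtime side is Python, the connector path looks roughly like the sketch below. `get_conn` is a hedged illustration of the Cloud SQL Python Connector library’s documented usage, not code from this estate, and the instance-name validator is a hypothetical helper for failing fast on malformed config.

```python
import os
import re

# A Cloud SQL instance connection name has the shape project:region:instance.
# Catching a malformed value early keeps vague "connector failed" errors
# out of the startup logs.
_ICN_PATTERN = re.compile(r"^[^:]+:[^:]+:[^:]+$")

def validate_instance_connection_name(name: str) -> str:
    if not _ICN_PATTERN.match(name):
        raise ValueError(f"not a project:region:instance name: {name!r}")
    return name

def get_conn():
    # Imported lazily so the rest of the module works without the library.
    # Assumes the cloud-sql-python-connector and pg8000 packages are installed.
    from google.cloud.sql.connector import Connector

    connector = Connector()
    return connector.connect(
        validate_instance_connection_name(os.environ["INSTANCE_CONNECTION_NAME"]),
        "pg8000",
        db=os.environ.get("DB_NAME", "app"),
        # With IAM auth the login follows the service identity, not a password.
        enable_iam_auth=os.environ.get("DB_IAM_AUTH") == "true",
        user=os.environ.get("DB_USER", ""),
    )
```

Note what this buys and what it costs: the IAM identity story is right next to the code, but the connection path is now provider-aware rather than plain Postgres.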
When the Auth Proxy still earns its place
The Auth Proxy remains useful because it solves a different problem. It lets the application keep speaking plain Postgres while a separate process handles the Cloud SQL-specific part of reaching the instance. It can still be the calmest option when the codebase should not absorb connector-specific behavior, when mixed runtimes are involved, or when local development and operational tooling benefit from the application seeing something like 127.0.0.1:5432 instead of a more specialized integration path.
Keeping the database client generic is the proxy’s actual value. It is not there to feel legacy or familiar. Some systems simply benefit from keeping the client side plain even when the platform path is not.
./cloud-sql-proxy project:region:db --port 5432
DATABASE_HOST=127.0.0.1
DATABASE_PORT=5432
DATABASE_USER=app
DATABASE_NAME=app

The proxy is another process, another restart surface, another place to log, another thing to deploy, and another place for configuration drift to settle. In local development that may be trivial. In production it is one more component teams own, whether or not they want to admit it.
It also does not solve unrelated problems just because it sits between the application and the database. It is not connection pooling. It does not make bad pool math go away. If the app opens fifty sessions through the proxy, the database still sees fifty sessions. The proxy belongs next to a connection budget, not in place of one.
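The pool math stays the same with or without the proxy. A quick budget check, with illustrative numbers:

```python
def worst_case_connections(max_instances: int, pool_size_per_instance: int) -> int:
    """Worst-case sessions the database can see from one Cloud Run service."""
    return max_instances * pool_size_per_instance

# Illustrative numbers: 25 instances x 2 connections each = 50 sessions,
# whether or not each one is tunneled through the proxy.
demand = worst_case_connections(max_instances=25, pool_size_per_instance=2)

# Compare against what Postgres will actually grant, leaving headroom
# for superuser slots, migrations, and other services. Both numbers
# below are assumptions for the sake of the arithmetic.
postgres_max_connections = 100
headroom = 20
fits = demand <= postgres_max_connections - headroom
```

If `fits` is false, the fix is pool sizing or instance caps, not another hop in the middle.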
The weak version of the proxy choice is inertia. A prototype used it, the stack grew up around it, and nobody came back to ask whether it was still earning the extra process. The stronger version is more honest: compatibility matters, tooling matters, and keeping the application side plain Postgres is worth the additional runtime surface.
What private IP actually changes
Private IP is not just one more secure path. It changes where the system is expected to make sense. Once private IP is the chosen boundary, the application starts looking at a normal private database address and the network path carries far more of the responsibility. Routing, DNS, firewalling, egress, and VPC design stop being side concerns and become the main thing that has to be right.
For internal platforms, that is often exactly the point. If the service already belongs on a private GCP path, private IP usually produces the most boring connection story, which is often the best kind. The application sees a host and port. The network boundary does the work. There is no extra proxy process, and the client can remain an ordinary Postgres client instead of absorbing Cloud SQL-specific behavior unless there is a real reason to do so.
DATABASE_HOST=10.42.0.15
DATABASE_PORT=5432
DATABASE_SSLMODE=verify-full
DATABASE_SSLROOTCERT=/etc/ssl/certs/server-ca.pem

The network has to deserve that simplicity. Cloud Run needs a sound egress path. The VPC design needs to be right. DNS and firewalling need to be right. That naturally leans the decision toward Direct VPC egress and whether the service is actually internal-only in a meaningful sense instead of just on a diagram.
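On the private IP path, the client-side work is mostly assembling an ordinary libpq-style connection string with certificate verification turned on. A minimal sketch; the `DATABASE_*` keys mirror the settings above and the helper name is illustrative:

```python
def libpq_dsn(env: dict) -> str:
    """Build a key=value libpq connection string from DATABASE_* settings.

    sslmode=verify-full is the point of the exercise: the client checks
    both the server certificate chain and that the endpoint matches it.
    """
    parts = {
        "host": env["DATABASE_HOST"],
        "port": env.get("DATABASE_PORT", "5432"),
        "sslmode": env.get("DATABASE_SSLMODE", "verify-full"),
        "sslrootcert": env.get("DATABASE_SSLROOTCERT", ""),
    }
    return " ".join(f"{k}={v}" for k, v in parts.items() if v)

dsn = libpq_dsn({
    "DATABASE_HOST": "10.42.0.15",
    "DATABASE_SSLROOTCERT": "/etc/ssl/certs/server-ca.pem",
})
```

Any standard Postgres client accepts the result; nothing here is Cloud SQL-specific, which is exactly the property this option is buying.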
Private IP is usually the calmest option when teams are comfortable owning the network boundary and want the application side to stay unremarkable. It is less attractive when the network is the part the platform keeps hoping not to think about.
Failure shape matters more than steady-state diagrams
A lot of architecture decisions look fine while the service is healthy. What matters more is how the path fails and whether teams already know how to explain that kind of failure.
Connector issues tend to fail as runtime integration or identity-path problems. Proxy issues tend to fail as companion-process or local-hop problems. Private IP issues tend to fail as network reachability problems. The best option is often the one that becomes legible in the part of the system teams already know how to debug without drama.
| when the symptom is... | first suspicion by path |
| --- | --- |
| login fails with IAM setup | connector or auth model |
| local port works, DB does not | proxy process or network behind it |
| host cannot be reached | VPC path, egress, firewall, DNS |
| random startup delay | egress path or connection helper startup |

For small teams, this matters a lot. A path that looks elegant in a design review but becomes opaque during a bad deploy is usually the wrong path. The system does not care which option sounded cleaner in the meeting. It cares which one can be understood under pressure.
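The same triage table can live next to the runbook as a tiny lookup, so an on-call note points at one place. The symptom keys here are illustrative shorthand, not log strings:

```python
# Encodes the symptom -> first-suspicion triage rows above as a lookup.
FIRST_SUSPICION = {
    "login fails with IAM setup": "connector or auth model",
    "local port works, DB does not": "proxy process or network behind it",
    "host cannot be reached": "VPC path, egress, firewall, DNS",
    "random startup delay": "egress path or connection helper startup",
}

def first_suspicion(symptom: str) -> str:
    """Return the first place to look, or an honest default."""
    return FIRST_SUSPICION.get(symptom, "unmapped symptom: widen the search")
```

The point is not the dictionary; it is that each connection option makes one of these rows the common case.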
The default for Cloud Run-first systems
For Cloud Run-first internal platforms, the default is the simplest honest path. If the service already belongs on a private GCP network and there is no strong reason to push identity logic into the client side, private IP with a direct connection is usually the best starting point. It keeps the path boring, avoids an extra process, and lets the application behave like a normal Postgres client while the network does the heavy lifting.
If the workload is deliberately IAM-oriented and the connection model is supposed to follow workload identity closely, connectors become more attractive. They are not the default because they are fashionable. They are the default only when the runtime identity boundary is doing enough real work to justify putting Cloud SQL-specific behavior close to the application.
The Auth Proxy earns its place when compatibility, local tooling, mixed runtimes, or operational preference make that extra process worthwhile. It is not an embarrassing fallback. It is simply a more specific answer than “we need a secure connection.”
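This default ordering can be stated as a small decision helper; the flag names are illustrative, and real estates will have more inputs than three booleans:

```python
def choose_connection_path(
    on_private_network: bool,
    iam_first_identity: bool,
    needs_generic_client: bool,
) -> str:
    """Apply the default ladder: private IP, then connector, then Auth Proxy."""
    # Private IP wins when the service is already on a private path and
    # there is no strong reason to push identity logic into the client.
    if on_private_network and not iam_first_identity:
        return "private IP direct"
    # Connectors win when the IAM identity boundary is doing real work
    # and the codebase can absorb provider-aware behavior.
    if iam_first_identity and not needs_generic_client:
        return "connector"
    # The Auth Proxy is the more specific answer for everything else:
    # mixed runtimes, compatibility, and keeping the client plain Postgres.
    return "auth proxy"
```

The function is deliberately blunt; its value is forcing the three flags to be named out loud before the option is picked.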
our default ladder
1. private IP direct path for private Cloud Run systems
2. connector path when IAM-oriented app identity is part of the design
3. Auth Proxy when compatibility and tooling make the extra process worth it

When we break the default
The default breaks whenever one boundary matters a lot more than the others. A tightly network-controlled internal platform may lean hard into private IP everywhere. A workload estate built around IAM-first service identity may lean much more heavily on connectors. A mixed runtime environment may keep the Auth Proxy longer because boring client behavior matters more than shaving off one extra process.
Runtime shape matters here too. Startup sensitivity, bursty scale, and tight request windows all change the cost of connection-path choices. The calmer option is often the one that works cleanly with scaling defaults, not the one that won an abstract comparison.
If the broader database product is still unsettled, then the frame gets bigger than this page. At that point the question becomes Cloud SQL versus AlloyDB, not a narrow argument about which helper sits in front of the connection.
What usually goes wrong in this decision
The first mistake is deciding with adjectives. “Private is more secure.” “Connectors are easier.” “Proxy is old.” None of that survives contact with a production incident. The choice has to be stated in terms of identity, routing, runtime surface, and debugging path or it is still too vague.
The second mistake is pretending database connectivity is separate from the rest of the runtime. It is tied to egress, startup, service identity, local parity, scaling behavior, and incident response immediately. Picking the database path in isolation is how estates end up with something that looked tidy in design notes and behaves awkwardly in the real system.
The third mistake is inertia. Proxy because it was there first. Connector because it feels official. Private IP because “private” sounds serious. None of those are reasons. The method should earn its place by making one boundary meaningfully cleaner than the others.
We optimize for a path that stays understandable when the service is under pressure. The app, auth model, and network should all be telling the same story. The connection method should not create a tiny side platform unless the service is actually benefiting from it. In a Cloud Run-first estate, there is already enough runtime surface to keep track of. Database connectivity should remove ambiguity, not add another place to hide it.
Cloud SQL connectors, the Auth Proxy, and private IP are not interchangeable secure connection options. They are different ownership decisions. Pick the one that puts complexity in the layer the platform actually wants to own. If that layer has not been named clearly, the choice probably was not made. It was inherited.