How we decide which metrics deserve a dashboard and which deserve a workflow
Some metrics are for observation. Others need ownership, thresholds, timing, and structured action. We decide explicitly which system shape each metric actually deserves.
The decision frame
We don’t treat dashboards as the default home for every metric.
A lot of teams do, mostly because dashboards are familiar and easy to ask for. The result is predictable. Metrics that only need observation get mixed together with metrics that really need ownership, thresholds, timing, acknowledgement, and some kind of response path. Then the dashboard starts pretending to be an operating system, and the workflow layer gets built on top of numbers that were never stable enough to carry action in the first place.
That is how you end up with reporting that feels too heavy and operations that feel too vague. The metric matters in both places, but the job is different in each one. The useful question isn’t whether the number is important. It’s what the organization expects to happen when it moves.
Observation and intervention are different jobs
Some metrics exist to help people understand the system. Others exist to make the system do something.
A dashboard is good at the first job. It helps with trend review, historical comparison, periodic reporting, management context, and the kind of slower decision-making where people are trying to understand how something has been behaving over time. It gives the number a readable surface.
A workflow is built for the second job. It routes action, assigns ownership, puts time around the response, records what happened, and gives the system some way to tell whether the loop was closed. That is a very different shape, even when both systems happen to use the same metric.
Most confusion starts when people treat those shapes like they’re interchangeable.
What usually belongs in a dashboard
A metric is usually dashboard-first when the number is there to inform, compare, or explain.
That includes trend tracking, period-over-period comparison, segmentation, benchmarking, management review, and metrics that help a team understand the state of the business without demanding immediate response every time they move. These numbers still matter. Sometimes they matter a lot. They just don’t need the organization to spring into motion every time the line wiggles.
A lot of good reporting lives here. The number sharpens judgment. It helps people see patterns, spot drift, and ask better questions. It makes the room less stupid. That is enough. Not every important metric needs a siren strapped to it.
What usually belongs in a workflow
A metric becomes workflow-worthy when the number itself isn’t the end of the story.
Once a threshold crossing should create work, once somebody needs to own the response, once timing matters, once acknowledgement or escalation would improve the outcome, the metric has crossed out of pure reporting. At that point visibility is still useful, but it isn’t the hard part anymore. The hard part is making sure the system can actually carry what is supposed to happen next.
That might mean an alert. It might mean a queue. It might mean a task, an escalation path, or some writeback into the source system. The exact tool can vary. The common thread is that the number is no longer just something to look at. It is now part of a control loop. That’s not a job for a dashboard.
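To make the "control loop" shape concrete, here is a minimal sketch of what a workflow surface carries that a dashboard cannot: ownership, a response window, and an acknowledgement flag. All names here (ResponseTask, route_metric_event, the payments example) are illustrative assumptions, not a real system.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import Optional

@dataclass
class ResponseTask:
    # Hypothetical shape of "work created by a metric":
    metric: str
    value: float
    owner: str                       # somebody owns the response
    due: datetime                    # timing is part of the contract
    acknowledged: bool = False       # the system can tell the loop closed
    escalate_to: Optional[str] = None

def route_metric_event(metric: str, value: float, threshold: float,
                       owner: str, response_window: timedelta) -> Optional[ResponseTask]:
    """Turn a threshold crossing into owned, time-bound work.

    Below the threshold the metric is merely informative, which a
    dashboard can show. A crossing returns a task a workflow can carry.
    """
    if value <= threshold:
        return None
    return ResponseTask(
        metric=metric,
        value=value,
        owner=owner,
        due=datetime.now(timezone.utc) + response_window,
    )

task = route_metric_event("failed_payment_rate", 0.07, threshold=0.05,
                          owner="payments-oncall",
                          response_window=timedelta(hours=4))
```

The point of the sketch is not the specific fields; it is that none of them exist in a reporting layer. A chart can show 0.07 against a 0.05 line, but it has nowhere to put the owner, the deadline, or the acknowledgement.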
A metric can matter and still be a bad trigger
A metric matters, so somebody decides it should drive automation or operational routing. Sometimes that’s right. Sometimes it’s an efficient way to automate confusion. A number can be strategically important and still be a terrible trigger because it restates later, depends on shaky definitions, changes under late-arriving data, or falls apart the moment people try to act on it in real time.
Before a metric starts creating work, it needs to be stable enough to carry consequences. A lot of the real design work sits in deciding what makes a KPI trustworthy enough to automate around without creating noise, bad incentives, or pointless churn.
If the team still has to reinterpret the number every time it moves, it belongs in reporting until that problem is fixed.
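One way to make "stable enough to carry consequences" testable is a pre-flight restatement check: re-read the same closed period's value over several days and measure how much it drifts as late-arriving data lands. The function names and the 2% tolerance below are illustrative assumptions, not a standard.

```python
def restatement_drift(snapshots: list[float]) -> float:
    """Max relative change across successive snapshots of the same
    closed period's value (e.g. daily re-reads of last week's revenue)."""
    drifts = [
        abs(b - a) / abs(a)
        for a, b in zip(snapshots, snapshots[1:])
        if a != 0
    ]
    return max(drifts, default=0.0)

def safe_to_automate(snapshots: list[float], tolerance: float = 0.02) -> bool:
    # If the "final" number keeps moving more than the tolerance,
    # keep it in reporting; it cannot carry a trigger yet.
    return restatement_drift(snapshots) <= tolerance

# Same metric, re-read over four days after period close:
stable   = safe_to_automate([1000.0, 1004.0, 1005.0, 1005.0])  # settles quickly
unstable = safe_to_automate([1000.0, 940.0, 1010.0, 985.0])    # restates badly
```

A check like this will not catch shaky definitions or bad incentives, but it cheaply filters out the most common failure: wiring automation to a number that has not actually finished arriving.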
Speed only matters if the response does
Freshness gets confused with actionability.
Sometimes a metric gets pushed toward a workflow simply because someone wants it faster. That is usually backward. A number does not become operational just because it updates every five minutes. If nobody knows what action the update is supposed to trigger, all you have done is make the reporting layer more impatient.
Hybrid cases are normal
A lot of useful metrics belong in both places.
The dashboard gives the number history, comparisons, and enough context for people to understand what’s going on. The workflow handles the moments where the number crosses from observation into intervention. That is often the cleanest setup because it lets reporting do reporting work and operations do operations work.
The mistake is not hybrid use. The mistake is pretending one surface can quietly absorb both jobs without changing its semantics.
A dashboard can explain why the number matters. A workflow can decide what happens because it mattered.
How we usually make the call
We are usually asking a few plain questions, not running some grand framework.
Is the metric mainly descriptive, or is somebody supposed to act when it moves? Is the action time-sensitive, or is the value in review and context? Is there a real owner for the response, or only a vague hope that “the team” will handle it? Would acknowledgement, escalation, or structured outcomes improve the system? Would bad definitions or noisy restatements make automation dangerous?
Those questions usually tell you where the metric belongs faster than any reporting taxonomy will. If intervention is vague, keep it in the dashboard. If intervention is real, give it a system that can actually carry it.
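Those questions are simple enough to write down. This is a toy encoding, with field names that are assumptions rather than any formal taxonomy; the point is that the answers decide where the metric lives.

```python
from dataclasses import dataclass

@dataclass
class MetricProfile:
    # Illustrative answers to the questions above:
    someone_acts_when_it_moves: bool   # descriptive, or operational?
    action_is_time_sensitive: bool     # does timing matter?
    has_named_owner: bool              # a real owner, not "the team"
    definitions_are_stable: bool       # safe to automate around?

def home_for(m: MetricProfile) -> str:
    if not m.someone_acts_when_it_moves:
        return "dashboard"             # observation is the whole job
    if not (m.has_named_owner and m.definitions_are_stable):
        return "dashboard (for now)"   # intervention is vague or unsafe
    if m.action_is_time_sensitive:
        return "workflow"              # the number is part of a control loop
    return "dashboard + workflow"      # the normal hybrid case
```

Note that "dashboard (for now)" is a deliberate outcome, not a failure: a metric with vague ownership or unstable definitions stays in reporting until those problems are fixed.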
The point
If a metric needs intervention, it deserves more than visibility.
Dashboards are good at showing state. Workflows are good at moving it. A lot of reporting confusion comes from asking one to quietly become the other.