
Agency Dashboard: How We Built Our Internal Performance Tracker

Ishant Sharma


Published: May 1, 2026 at 6:09 am

Updated: May 1, 2026 at 6:09 am

The first page of Google for agency dashboard is a single SaaS tool with the product name “Agency Dashboard,” some Looker Studio templates, and a few directory listings. Every result is built for client-facing reporting. None of them are about the dashboard an agency uses to run its own business. Across our 30-account book, the internal performance tracker is a different document entirely. It tracks per-account margin, team capacity, pipeline health, and retention risk. Year one we ran the practice from a spreadsheet rebuilt manually every Monday. Year two we built six modules into a Notion-plus-Looker stack that surfaces decisions the spreadsheet kept hiding. Maintenance dropped from three hours a week to under one. This is the ledger of what we built, the two metrics that didn’t survive testing, and what changed in how the founder ran the business after the rebuild.

What an internal agency dashboard should actually do

Most pages on the SERP define an agency dashboard as “an interface that consolidates data sources for client reporting.” That definition is correct for a client-facing dashboard and wrong for an internal one. The two documents serve different audiences and surface different decisions.

A client-facing dashboard answers the client’s question: how is my account performing? An internal agency dashboard answers the founder’s question: which decisions does this book of business need from me this week? The first is a reporting artifact. The second is a decision-support system.

The internal dashboard at our shop tracks six categories: per-account margin, team capacity utilization, pipeline health, retention risk, financial KPIs, and senior-strategist time exceptions. Each module is tied to a specific recurring decision. Per-account margin feeds the renegotiation or firing call. Team capacity feeds hiring decisions. Pipeline health feeds outreach intensity. Retention risk feeds which clients get senior re-engagement. The dashboard isn’t comprehensive. It’s deliberately scoped to the decisions a founder running a 30-account book actually has to make every week.

A useful internal dashboard does three things the SaaS tools don’t. It shows margin per account at fully-loaded internal cost, not just retainer revenue. It surfaces team capacity by role, not just total billable hours. And it tags retention-risk signals before churn becomes obvious. Most agencies discover all three patterns three months too late because the SaaS tools don’t track them. We built the tracker because the alternatives didn’t fit how we actually ran the practice.

Why most agency dashboard SaaS tools don’t fit internal use

Three patterns make the dominant SaaS dashboards on the SERP a poor fit for internal performance tracking, and we tried two of them in year one before building our own.

The first is client-facing assumption. Most agency dashboard products are built for the use case where the client is the primary viewer. The metrics, the visual hierarchy, and the data refresh cadence are all calibrated for client-facing reporting. The same tool used internally produces an executive view of work that’s already been done, not the leading-indicator view a founder needs to make decisions about work that’s coming up.

The metric layer the SaaS tools don’t have

The second is the missing-margin layer. None of the leading agency dashboard SaaS tools track per-account net margin out of the box. The data they surface is performance metrics for the client’s campaigns, not economics for the agency’s account book. A founder using one of these tools sees how every campaign is doing, which is exactly the wrong layer for a hiring decision or a pricing renegotiation. The economic layer underneath has to be built separately, and most agencies never get around to building it. The same artifact-versus-pipeline trap that the no-PMs experiment on this site describes for staffing applies here. Tracking outputs without tracking economics produces the wrong decisions at scale.

The third is data ingestion limits. Most SaaS dashboards plug into the major ad platforms but don’t ingest from time-tracking tools, accounting tools, or pipeline CRMs. So the internal dashboard becomes a Frankenstein of three or four different vendors stitched together with manual updates. We tried two SaaS dashboard tools in year one. Both required the same Monday-morning manual rebuild we’d been doing in the spreadsheet, just with prettier output. The tools weren’t wrong. They just weren’t built for what we needed them to do.

The six modules in our internal performance tracker

The six modules below shipped in v2 of the internal dashboard, late in year two, after the v1 Notion-only version proved the data was useful but the manual updates were eating capacity. Each module is tied to a specific recurring decision, and each refreshes weekly rather than daily for reasons covered later. The order is the order they get reviewed every Monday morning, in the founder’s standing 25-minute review.

1. Per-account margin tracker. Net margin per account, calculated against fully-loaded senior strategist hourly cost plus analyst time plus tooling allocation. Data inputs: Toggl time tracking by account, Stripe retainer revenue, Notion-based tooling allocation. Threshold flags fire at 35% margin (yellow) and 25% margin (red). Two consecutive quarters in red triggers the renegotiation or firing decision covered in our post on firing clients, among other operational consequences. This module replaced the year-one spreadsheet’s most error-prone calculation. It lives in a Looker Studio dashboard refreshed weekly.
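The margin math above is simple enough to sketch. The function names, rates, and the example numbers below are illustrative, not our production formulas; only the 35%/25% thresholds come from the module itself.

```python
# Illustrative sketch of the per-account margin flag.
# Thresholds match the module: yellow below 35% net margin, red below 25%.

def account_margin(retainer_revenue, senior_hours, senior_rate,
                   analyst_hours, analyst_rate, tooling_cost):
    """Net margin as a fraction of retainer revenue, at fully loaded cost."""
    cost = senior_hours * senior_rate + analyst_hours * analyst_rate + tooling_cost
    return (retainer_revenue - cost) / retainer_revenue

def margin_flag(margin, yellow=0.35, red=0.25):
    if margin < red:
        return "red"
    if margin < yellow:
        return "yellow"
    return "green"

# Hypothetical account: $6,000 retainer, 10 senior hours at $150/hr,
# 20 analyst hours at $60/hr, $400 tooling allocation.
m = account_margin(6000, 10, 150, 20, 60, 400)
flag = margin_flag(m)
```

Two consecutive red quarters, not one red week, is what triggers the renegotiation conversation; the flag is an input, not the decision.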

2. Team capacity utilization heatmap. Hours allocated versus hours available, broken out by role and senior strategist individually. Data inputs: Toggl tracking by team member, against fully-loaded weekly capacity of 32 billable hours per senior. Color coding flags utilization below 65% (yellow, capacity slack) and above 90% (red, burnout risk). Three weeks in red on any individual triggers a workload-reallocation conversation. The heatmap caught two near-burnout patterns in the second half of year two that the year-one spreadsheet had missed entirely.
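The heatmap’s color logic can be sketched the same way. The 32-hour capacity and the 65%/90% bands are from the module; the function names and the consecutive-weeks counter are an illustrative shape, not our actual implementation.

```python
# Illustrative utilization flag for the capacity heatmap.
# Below 65% utilization is yellow (capacity slack); above 90% is red
# (burnout risk), against a 32-billable-hour weekly senior capacity.

def utilization(allocated_hours, capacity_hours=32):
    return allocated_hours / capacity_hours

def capacity_flag(util, low=0.65, high=0.90):
    if util > high:
        return "red"
    if util < low:
        return "yellow"
    return "green"

def weeks_in_red(weekly_utils, high=0.90):
    """Count trailing consecutive weeks above the burnout threshold.
    Three in a row is what triggers the reallocation conversation."""
    count = 0
    for u in reversed(weekly_utils):
        if u > high:
            count += 1
        else:
            break
    return count
```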

3. Pipeline health board. Active proposals by stage, dollar value, expected close date, and prospect source. Data inputs: Notion CRM with custom properties for stage, value, and probability. The dashboard slices pipeline by week and shows expected closed-won revenue against a 90-day window. Below $40K of expected revenue in any 90-day window, the founder’s outreach time goes up. Above $80K, the team starts protecting capacity for onboarding. The board is a calibration tool for sales effort, not a forecasting tool. The same pipeline-not-schedule discipline that the editorial pipeline essay on this site argues for in content workflows applies to the sales pipeline too.

4. Retention risk cohort. Active accounts ranked by leading-indicator signals: weekly email reply rate, Loom completion rate over the first 4 months, dashboard login frequency, and trailing-30-day senior strategist hours per account. Data inputs: Front (email replies), Loom Business analytics, Looker Studio dashboard logs, Toggl. Accounts hitting two or more risk signals get flagged for senior re-engagement that week. The module catches retention drift roughly 6 weeks earlier than the year-one approach, which had relied on the founder noticing problems in 1:1s.
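The two-or-more-signals rule is the load-bearing part of this module. The sketch below shows the shape of it; the specific cutoff values are illustrative placeholders, not the thresholds we actually tuned.

```python
# Illustrative retention-risk check. An account hitting two or more
# leading-indicator signals gets queued for senior re-engagement.
# Cutoff values below are placeholders, not the article's tuned thresholds.

def risk_signals(account):
    signals = []
    if account["email_reply_rate"] < 0.5:          # illustrative cutoff
        signals.append("low email reply rate")
    if account["loom_completion_rate"] < 0.4:      # illustrative cutoff
        signals.append("low Loom completion")
    if account["dashboard_logins_30d"] == 0:
        signals.append("no dashboard logins")
    if account["senior_hours_30d"] < 9:            # below the healthy floor
        signals.append("senior hours below floor")
    return signals

def needs_reengagement(account):
    return len(risk_signals(account)) >= 2
```

Behavioral signals like these are exactly what replaced the NPS survey covered later in this post.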

5. Financial KPIs panel. MRR, ARR, gross margin on the practice, net margin, cash runway in months, and a six-month revenue trend. Data inputs: Stripe (revenue), Brex (expenses), QuickBooks (consolidated financials). The panel exists primarily so hiring and investment decisions stop happening on instinct. Hiring conversations get harder when the cash runway shows under 9 months and easier when it shows over 14. The panel doesn’t make the decision. It just makes the math impossible to avoid.

6. Senior strategist time exception alerts. Accounts consuming more than 16 senior hours in any trailing 30-day window, against the 9-to-12 hour healthy floor. Data input: Toggl tagged by senior strategist plus account. The alert is the early warning for accounts heading toward red on the margin tracker. It also catches accounts where senior overload is structural rather than economic, which is a different intervention. About 3 to 4 alerts fire per quarter across the active book, and roughly half resolve through scope renegotiation within 30 days.
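The exception alert reduces to a trailing-window sum over tagged time entries. The 16-hour threshold is from the module; the function names and entry format are an illustrative shape.

```python
from datetime import date, timedelta

# Illustrative trailing-30-day senior-hours alert, using the module's
# 16-hour exception threshold against the 9-to-12 hour healthy floor.

def trailing_30d_hours(entries, today):
    """Sum hours from (date, hours) time entries in the last 30 days."""
    cutoff = today - timedelta(days=30)
    return sum(h for d, h in entries if cutoff < d <= today)

def exception_alert(entries, today, threshold=16):
    return trailing_30d_hours(entries, today) > threshold

# Hypothetical account: 18.5 senior hours in the trailing window fires the alert.
entries = [(date(2026, 4, 10), 6.0), (date(2026, 4, 20), 7.5),
           (date(2026, 4, 28), 5.0), (date(2026, 2, 1), 12.0)]
fired = exception_alert(entries, date(2026, 5, 1))
```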

The hardest sub-problem, deciding what cadence to refresh and review

The trickiest part of building an internal agency dashboard isn’t the data ingestion. It’s deciding how often the dashboard refreshes and how often the founder reviews it. We made three iterations on cadence in year two before settling on the answer.

V1 cadence was daily refresh and ad-hoc review whenever the founder felt curious. The pattern produced reactive decisions. The founder would notice a one-day spike in a metric, escalate to a senior strategist, the strategist would explain the variance was within the normal weekly range, and the conversation would close without action. Daily refresh produced the false-urgency pattern most operators reading dashboards will recognize. Mismatching the refresh cadence and the review cadence produced more reactive decisions than either a consistently daily or consistently weekly rhythm would have.

V2 was weekly refresh with weekly review. Better, but still produced friction because daily metrics on the underlying data didn’t match the weekly aggregate the dashboard showed. Senior strategists checking platform-level data daily would surface issues faster than the dashboard, which created two parallel sources of truth.

V3, the version that stuck, is weekly refresh with weekly founder review on Monday morning, plus alerting on the threshold breaches. The dashboard updates Sunday night for Monday’s review. The thresholds (margin red, capacity red, retention risk amber) trigger a Slack alert mid-week if they fire between Monday reviews. Strategists handle daily platform-level decisions on the underlying account data, not on the aggregate dashboard. The split worked because each layer of the data has its own appropriate cadence.
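The mid-week alerting layer in v3 is the only part of the dashboard that runs faster than weekly. A minimal sketch of the breach check, assuming a flat metric snapshot; the rule names and message format are illustrative, and the actual Slack delivery is omitted.

```python
# Illustrative mid-week threshold check: between Monday reviews, only
# breaches (margin red, capacity red, retention amber-or-worse) generate
# a Slack message. Metric movement inside the bands stays silent.

BREACH_RULES = {
    "margin": lambda v: v < 0.25,           # red margin
    "capacity": lambda v: v > 0.90,         # red capacity
    "retention_signals": lambda v: v >= 2,  # amber or worse
}

def midweek_alerts(snapshot):
    """Return alert messages for breached thresholds in a
    {metric: value} snapshot; an empty list means no Slack ping."""
    return [f"{name} threshold breached: {snapshot[name]}"
            for name, rule in BREACH_RULES.items()
            if name in snapshot and rule(snapshot[name])]
```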

The lesson is that dashboard cadence is a separate decision from data refresh cadence, and the right answer is usually slower than the data permits.

The data ingestion stack we settled on

Toggl Track for time tracking, with mandatory account-level and role-level tags on every entry. The team trains on the tagging discipline during onboarding. Without consistent tagging, the per-account margin module produces noise rather than signal. Toggl plus the discipline runs about $10 per seat per month.

Notion as the CRM and pipeline source of truth. Custom properties for stage, value, probability, and source. The Notion API pipes to BigQuery via a small Python script that runs every Sunday night. About 4 hours of one-time setup, then zero ongoing maintenance.
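The Sunday-night script isn’t published here, but the core of it is a flattening step: Notion’s API returns nested property objects, and BigQuery wants flat rows. The sketch below assumes the custom properties named above (Stage, Value, Probability, Source) and the standard Notion `select`/`number` payload shapes; the function names are ours for illustration, and the actual API fetch and BigQuery load job are omitted.

```python
# Illustrative transform of Notion-style page properties into flat rows
# ready for a BigQuery load job. Property names match the CRM's custom
# properties; the network fetch and load job are left out.

def flatten_proposal(page):
    props = page["properties"]
    return {
        "id": page["id"],
        "stage": props["Stage"]["select"]["name"],
        "value": props["Value"]["number"],
        "probability": props["Probability"]["number"],
        "source": props["Source"]["select"]["name"],
    }

def flatten_all(pages):
    return [flatten_proposal(p) for p in pages]
```

Keeping the transform as a pure function is what makes the pipe near-zero-maintenance: when a property gets renamed in Notion, the script fails loudly at the flatten step instead of loading garbage.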

Stripe and Brex for revenue and expense data, both via direct API connections to BigQuery. QuickBooks as the accounting source of truth, with monthly reconciliation rather than daily. Front for email engagement signals on the retention-risk cohort, via webhook to BigQuery.

Looker Studio sits on top of BigQuery as the visualization layer. Six dashboard tabs corresponding to the six modules above, plus a summary tab the founder uses for the Monday review. Total tooling cost across the stack runs about $320 a month for the agency, against the value of weekly decisions made on cleaner data than the year-one spreadsheet ever produced. We tried building the visualization layer in a custom Vue.js front-end during v2 and burned 70 hours before reverting to Looker Studio. Custom front-end code on internal dashboards is the most expensive lesson in agency-side tooling.

What actually moved decisions after the dashboard was live

Measured at month 6 post-v2, against the year-one spreadsheet baseline. Founder time on the weekly review dropped from 3 hours to under 1. Decisions made per quarter that traced back to dashboard signals climbed from a handful to between 18 and 22 across hiring, firing, renegotiation, retention, and pricing.

The biggest decision-quality improvement was the per-account margin module. Year one we’d fired clients reactively, in response to specific incidents. Year two we fired the four clients covered in the firing-decisions essay on the basis of two consecutive quarters of red margin, which was a structural decision rather than an emotional one. The dashboard made the decision math visible enough that the call became easy.

What changed in week four of v2

The first week we shipped v2, the founder spotted three accounts on the retention-risk cohort that hadn’t shown up as concerning in any of the other reviews. All three had quietly stopped replying to the weekly email. None had complained. None had reduced their retainer. All three were 60 to 90 days from churning if nothing changed. The senior strategist re-engaged on each. Two stayed past month 18. One churned anyway, but with 8 weeks of warning that helped us replace the revenue cleanly.

That kind of leading-indicator catch was the thing the year-one spreadsheet structurally couldn’t surface. Email reply rate as a metric requires daily data ingestion and weekly aggregation across a 30-account book, which is too much manual work to sustain by hand.

What didn’t move decisions as much as expected

The financial KPIs panel is useful but rarely changes a decision the founder hadn’t already made. By the time the runway number lands in a Monday review, the founder usually already knows whether hiring is on the table. The module exists more to anchor conversations with the leadership team than to drive new decisions.

The pipeline health board moved sales effort calibration but didn’t materially change closed-won outcomes. Closing rates are driven more by proposal quality and senior availability than by outreach intensity, and the board mostly confirms what the founder already feels about the quarter.

What moved decisions: per-account margin, team capacity heatmap, retention risk cohort, senior time exceptions. Roughly in that order.

What we thought would work but didn’t

Two metrics shipped into the dashboard in year two and got pulled within five months because the data wasn’t reliable enough to drive decisions.

NPS surveys to clients as a leading indicator

We added quarterly NPS surveys to the retention risk module on the theory that explicit client sentiment would beat behavioral signals. The execution killed the metric. Survey response rate sat at 12% across the book, which was too thin to be statistically useful. The 12% who responded were a self-selected slice that skewed toward either delighted clients or mildly annoyed ones. The neutrals, who turned out to be the highest churn risk, never responded. We pulled NPS in month four and replaced it with email reply rate as a softer but more reliable proxy. Reply rate runs at 78% across the book, which is enough data to spot the leading indicators.

Daily dashboard refresh

We initially shipped v2 with daily data refresh, on the assumption that fresher data would surface issues faster. The pattern produced false-urgency decisions. The founder would react to a one-day spend variance or a single missed reply, escalate to a senior strategist, and the strategist would explain that the variance was inside normal weekly range. The daily-refresh-plus-weekly-review combination produced more reactive decisions than weekly refresh would have. We moved to weekly refresh with threshold alerts in month five, which dropped the false-urgency pattern.

What this internal dashboard cost to build and run

The v1 Notion build took about 18 hours of founder time across two weekends, plus ongoing manual maintenance of roughly 3 hours a week for the first six months while the data structure stabilized.

The v2 rebuild into Looker Studio plus BigQuery took about 70 hours of work distributed across senior strategist time and a contracted analyst engagement. Setup of the Toggl-to-BigQuery, Notion-to-BigQuery, and Stripe-to-BigQuery pipes took about 14 hours total. The custom Vue.js front-end attempt that we abandoned cost another 70 hours we didn’t recoup.

Ongoing tooling cost runs $320 a month for the dashboard infrastructure: $40 for Toggl team seats, $20 for Looker Studio, $90 for the BigQuery storage and compute, $70 for Notion team plan, $80 for Front, $20 for misc. Maintenance time is about 30 to 45 minutes a week, mostly spot-checks on the data ingestion rather than ongoing rebuilds.
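For anyone rebuilding this budget, the line items above reconcile cleanly. A quick check of the monthly sum and the annualized figure:

```python
# Sanity check on the tooling budget: the line items sum to the stated
# $320/month, which annualizes to the $3,840 figure below.
monthly = {
    "Toggl seats": 40,
    "Looker Studio": 20,
    "BigQuery storage and compute": 90,
    "Notion team plan": 70,
    "Front": 80,
    "misc": 20,
}
total_monthly = sum(monthly.values())
total_annual = total_monthly * 12
```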

Net cost across the year of operating the v2 dashboard was about $3,840 in tooling plus 28 hours of maintenance time. Net savings from the founder time reclaimed against the year-one spreadsheet approach was about 90 hours per year, plus the decision-quality improvements that don’t show up cleanly in cost numbers.

How our shop runs the internal dashboard today

The agency runs paid acquisition for ecommerce and lead-gen brands across the US, UK, UAE, and Australia. The internal performance tracker is a separate document from the client-facing dashboards, with different audiences, different cadences, and different KPIs. The founder reviews it every Monday morning in a standing 25-minute session. Threshold alerts fire mid-week on red margin, red capacity, or retention risk amber-or-worse. Senior strategists run their own account-level dashboards on platform data and don’t usually open the internal tracker. The two layers stay separate because conflating them produced the noisy decisions we built v2 to escape. A related read on how a multi-account practice gets to the size where this kind of tracker becomes load-bearing, growing a PPC agency from 3 to 30 clients without a sales team, covers the demand-side picture.

What to take from this

Most agencies don’t build an internal dashboard because the SaaS tools on the SERP look close enough to what they think they need. The tools are good at client-facing reporting. They’re structurally not built for the founder’s decision support. That gap is why most agencies under 50 accounts run their internal practice from a spreadsheet that gets rebuilt manually each Monday.

The number worth tracking on an internal agency dashboard isn’t a single KPI. It’s the gap between the dashboard’s review cadence and the underlying data’s refresh cadence. Weekly data with weekly review produces good decisions. Daily data with ad-hoc review produces reactive ones. Weekly data with monthly review produces stale ones. Most agencies that struggle with their internal dashboard have that cadence pairing wrong.

The gap between the year-one spreadsheet and the year-two Looker Studio dashboard wasn’t a tooling improvement. It was a discipline improvement. The dashboard made decisions visible that the spreadsheet had hidden. Most agencies who think they need better dashboard tooling actually need better discipline on which decisions the dashboard should be feeding. Once that’s defined, the tooling part is straightforward and cheap.

About the author

Ishant Sharma is the founder of Hustle Marketers, a Google Partner and Meta Business Partner agency working with e-commerce and lead-gen brands across the US, UK, UAE, and Australia. Twelve years in performance marketing. Trackable client revenue across the agency’s work has crossed $780 million. Writes from inside a live agency running 30+ client accounts.
