
PPC Audit: How We Fixed a $30K/Month Google Ads Account That Wasn’t Converting

Ishant Sharma

Published: May 7, 2026 at 8:00 pm

Updated: May 1, 2026 at 2:49 am

Most PPC audit content on the SERP is a generic 10-step checklist applied to no specific account. The audit reads as a tour of best practices, not as a teardown. Six months ago, a B2B managed-IT services firm in the Pacific Northwest brought us a $30K/month Google Ads account that had stopped converting. Lead volume had crumbled from 17 a week to 4. CPL had climbed from $740 to $1,820. The account had been with their previous agency for 14 months, and the dashboard still reported a green ROAS. We ran a structured PPC audit, surfaced 22 findings, prioritized 7 of them, and drove lead volume back to 14 a week at a $620 CPL inside 90 days. This is the ledger of what the audit found, the fix order, and the two findings we expected to matter that didn’t.

What is a PPC audit, in operator terms?

Most PPC audit definitions describe it as a review of campaign performance. That framing is technically correct and operationally useless. A real audit is a systematic teardown of every input the bidding algorithm and the tracking layer rely on, with the goal of proving where the account is leaking money before any optimization gets shipped.

The teardown runs across four layers. The measurement layer (conversion actions, attribution windows, GTM containers, GA4 imports). The structure layer (campaign types, ad group themes, keyword match types, negative keyword lists). The creative layer (RSAs, landing page match, ad-copy-to-page relevance). And the targeting layer (geo, device, audience, dayparting). A finding from any of the four can corrupt every metric below it, which is why fixing campaigns before fixing measurement almost always wastes the next 60 days of spend.

In the account we audited, conversion tracking was the load-bearing problem. Form-fill conversions were firing on form-load instead of submit, so the bidder had been optimizing for impressions of the form for 9 months. The reported conversion volume looked healthy, the actual lead volume had collapsed, and nobody at the prior agency had checked the difference. A PPC audit that doesn’t open the GTM container in the first hour is a vanity exercise.

Why most PPC audits fail to surface the actual problem

Three default audit patterns cause the failure, and all three showed up in the documented “audits” the prior agency had produced for this client.

The first is auditing dashboards instead of containers. Most audit checklists tell you to “review conversion tracking.” In practice that means glancing at the conversions panel inside Google Ads, not opening Tag Manager and validating that each tag fires on the right event. The two checks produce different answers. In this account, the Google Ads dashboard reported 64 conversions in the trailing 30 days. The Salesforce CRM had recorded 11 actual leads. The tag was firing on form-load, not submit, so 53 of the 64 were impressions of the form.

The 10-step checklist illusion

The second is treating the audit as a checklist instead of a sequenced diagnosis. Generic 10-step audit posts list every check you might run as if they’re independent. They aren’t. Conversion tracking accuracy decides what every other metric means, so the diagnosis has to start there, not at step 4 after “review keyword performance.” We’ve seen agencies optimize match types and bid strategies for weeks against a broken signal. The same algorithm-trust pattern shows up, on a different platform, in the essay on Meta’s Advantage+ eating creative teams.

The third is auditing without a measurement window. Most audit deliverables show a finding (“you have 38% wasted spend”) without naming the time period. A 7-day window catches different issues than a 90-day window. The structured PPC audit needs a fixed lookback, usually 90 days for stable accounts and 30 days for accounts that have recently changed structure. The lookback gets named upfront so the findings are comparable across time. When we ran this audit, we used the trailing 90 days against the prior 90, plus a 12-month seasonal overlay. That’s how the conversion-rate collapse became visible. A shorter window would have shown a “soft month.”
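To make the window comparison concrete, here’s a minimal sketch of the lookback math in TypeScript. The data shape and function names are ours for illustration, not an export format from Google Ads.

```typescript
// Minimal sketch of a fixed-lookback comparison: trailing 90 days
// against the prior 90. The data shape and function names are
// illustrative, not an export format from Google Ads.
interface DailyMetric {
  date: string; // ISO date, e.g. "2026-01-15"
  cost: number;
  conversions: number;
}

function windowTotals(rows: DailyMetric[], start: Date, end: Date) {
  const inWindow = rows.filter((r) => {
    const d = new Date(r.date);
    return d >= start && d < end;
  });
  const cost = inWindow.reduce((sum, r) => sum + r.cost, 0);
  const conversions = inWindow.reduce((sum, r) => sum + r.conversions, 0);
  return { cost, conversions, cpl: conversions > 0 ? cost / conversions : NaN };
}

function compareLookback(rows: DailyMetric[], asOf: Date, days = 90) {
  const msPerDay = 24 * 60 * 60 * 1000;
  const windowStart = new Date(asOf.getTime() - days * msPerDay);
  const priorStart = new Date(asOf.getTime() - 2 * days * msPerDay);
  const current = windowTotals(rows, windowStart, asOf);
  const prior = windowTotals(rows, priorStart, windowStart);
  // A collapse that a 90-day view makes obvious can look like a
  // "soft month" in a 30-day view.
  return { current, prior, cplChange: current.cpl / prior.cpl - 1 };
}
```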

The seven findings that produced 80% of the lift

The full audit surfaced 22 findings. Most of them mattered some, but 7 produced roughly 80% of the conversion lift over 90 days. The order below is the order we shipped fixes in. The dependency chain matters. Conversion tracking has to be clean before any other change is measurable, so every audit we run starts there and stays there until the data flows correctly.

1. Conversion tracking firing on form-load instead of submit. Found inside the GTM container in the first hour of the audit. The form-fill tag was set to fire on the page-view of the thank-you URL pattern, but the form posted to the same URL with a query string before the user actually completed it. We rebuilt the trigger to fire on a custom dataLayer event named form_submit_success, pushed by the form’s success callback. Reported conversions dropped 78% in week one. Real lead volume was unchanged. The bidder finally had honest signal to learn from.
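For reference, the shape of the fix looks like this. It’s a minimal sketch: the event name form_submit_success is the one we used, but the form wiring shown here is generic, not the client’s actual stack.

```typescript
// Push a custom dataLayer event from the form's success callback, and
// fire the conversion tag on a GTM Custom Event trigger matching it,
// instead of on the thank-you URL pattern.
export {};

declare global {
  interface Window {
    dataLayer: Record<string, unknown>[];
  }
}

function onFormSubmitSuccess(formId: string): void {
  window.dataLayer = window.dataLayer || [];
  window.dataLayer.push({
    event: "form_submit_success", // the GTM trigger listens for this name
    formId,
  });
}

// Generic wiring (illustrative): call this only from the success path,
// never on form load or on the POST itself, or the original bug returns.
// myForm.onSuccess(() => onFormSubmitSuccess("contact-form"));
```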

2. Performance Max cannibalizing branded Search. PMax was bidding into branded queries at $3.40 CPC, while the dedicated brand Search campaign was winning the same auctions at $0.90 when allowed to. About 22% of total monthly spend was getting absorbed by PMax on terms Search would have won at a fraction of the cost. We built an account-level brand list and applied it as a brand exclusion in the PMax campaign settings. Branded CPC dropped from $3.40 to $0.85 within 14 days, and roughly $6,400 of monthly budget moved into prospecting where it could acquire new leads.

3. Broad match without negative keyword scaffolding. The account had 47 broad match keywords with a shared negative list of just 12 terms. The 90-day search query report showed 38% of total spend going to queries no managed-IT firm would intentionally bid on. Examples included “free IT support” and “how to fix Outlook.” We rebuilt the shared negative list with 380 single-word negatives covering free, DIY, tutorial, and consumer-product terms. Wasted spend dropped from 38% to 11% by week six.
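The check itself is simple enough to script. Here’s a rough sketch of the wasted-spend calculation we ran over the search query report export; the term list is a small illustrative sample, not the 380-term production list.

```typescript
// Rough sketch of the wasted-spend check on a search query report
// export. The CSV shape and term list are illustrative samples.
interface QueryRow {
  query: string;
  cost: number;
}

const NEGATIVE_TERMS = ["free", "diy", "tutorial", "how to fix", "salary", "jobs"];

function wastedSpendShare(rows: QueryRow[]): number {
  const isWasted = (q: string) =>
    NEGATIVE_TERMS.some((t) => q.toLowerCase().includes(t));
  const total = rows.reduce((sum, r) => sum + r.cost, 0);
  const wasted = rows
    .filter((r) => isWasted(r.query))
    .reduce((sum, r) => sum + r.cost, 0);
  return total > 0 ? wasted / total : 0;
}

// An account where 38% of cost sits on queries matching the list is
// flagged for a negative-keyword rebuild.
```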

4. Quality Scores of 4 to 5 because landing pages did not match ad copy. Five of the seven primary ad groups were sending traffic to a generic services page. The ads promised “managed IT for medical practices in Seattle” but the landing page was titled “IT Services for Businesses.” We built four geo- and vertical-specific landing pages, mirrored the ad copy to the H1, and added trust signals matching the ad’s promise. Quality Score climbed to 7 to 8 across the rebuilt ad groups in 21 days. Conversion rate on those ad groups doubled.

5. Click-through conversion window set to 7 days when leads averaged 12 days to convert. The B2B sales cycle on managed IT averaged 12 days from form submission to qualified lead in the CRM. The bidder’s 7-day window was missing 35% of conversions in attribution. We extended the window to 30 days inside the conversion settings and set up offline conversion import from the CRM through a weekly upload. The signal stabilized over four weeks. Smart Bidding finally started biasing toward keywords that produced qualified leads, not just form-fills.
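The weekly upload was a CSV built from the CRM export. A minimal sketch, assuming Google’s GCLID-based offline conversion import template; verify the column names against the current template before using anything like this.

```typescript
// Minimal sketch of the weekly offline conversion upload built from CRM
// exports. Column names follow Google's GCLID-based import template as
// we understand it; validate against the current template before use.
interface CrmLead {
  gclid: string;
  qualifiedAt: Date;
  value: number;
}

function toUploadRows(leads: CrmLead[], conversionName: string): string {
  const header =
    "Google Click ID,Conversion Name,Conversion Time,Conversion Value,Conversion Currency";
  const rows = leads
    .filter((l) => l.gclid) // no GCLID, no attributable upload
    .map((l) => {
      // e.g. "2026-05-01 14:00:00+00:00"
      const ts = l.qualifiedAt
        .toISOString()
        .replace("T", " ")
        .replace(/\.\d+Z$/, "+00:00");
      return `${l.gclid},${conversionName},${ts},${l.value},USD`;
    });
  return [header, ...rows].join("\n");
}
```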

6. Geo targeting set to state level on a 4-metro service area. The client served Seattle, Portland, Boise, and Spokane only, but the campaigns targeted Washington, Oregon, and Idaho whole-state. Roughly 18% of clicks came from rural areas the firm couldn’t profitably serve. We rebuilt geo targeting at the metro level with a 30-mile radius around each, plus bid adjustments per metro. Cost per lead in the 4 served metros dropped 24% within 30 days because the bidder stopped spending budget on unserved zip codes.
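The 18% figure came from classifying click locations against the served radius. Here’s an illustrative sketch of that check; the metro coordinates are public city centers, not the client’s exact targeting settings.

```typescript
// Sketch of the serviced-radius check behind the geo rebuild: classify
// click coordinates as inside or outside a 30-mile radius of each
// served metro. Coordinates are illustrative city centers.
const METROS = [
  { name: "Seattle", lat: 47.6062, lon: -122.3321 },
  { name: "Portland", lat: 45.5152, lon: -122.6784 },
  { name: "Boise", lat: 43.615, lon: -116.2023 },
  { name: "Spokane", lat: 47.6588, lon: -117.426 },
];

const EARTH_RADIUS_MILES = 3958.8;

function haversineMiles(lat1: number, lon1: number, lat2: number, lon2: number): number {
  const toRad = (deg: number) => (deg * Math.PI) / 180;
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 2 * EARTH_RADIUS_MILES * Math.asin(Math.sqrt(a));
}

function isServed(lat: number, lon: number, radiusMiles = 30): boolean {
  return METROS.some((m) => haversineMiles(lat, lon, m.lat, m.lon) <= radiusMiles);
}
```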

7. No device bid adjustments on an account where mobile converted at half the rate of desktop. Mobile clicks were 41% of total clicks but only 19% of conversions. The desktop bid was inadvertently subsidizing mobile traffic at a 2-to-1 efficiency penalty. We applied a -35% mobile bid adjustment, monitored for two weeks, then dialed it to -25% once desktop volume stabilized. CPL dropped another 12% across the account. The same pattern shows up on most B2B lead-gen accounts because B2B buyers tend to research and convert on desktop. The pipeline-versus-calendar logic that the editorial pipeline essay on this site argues for in a different domain applies here: cadence and channel weighting beat schedule.
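One mechanical note on how the modifier applies. With manual bidding, Google Ads multiplies the bid by (1 + adjustment); under Target CPA, the same modifier scales the CPA target for that device instead. The numbers below are illustrative, not the account’s actual bids.

```typescript
// A device bid adjustment is a multiplier: -35% turns a $4.00 max CPC
// (or CPA target, under Target CPA) into $2.60 on mobile auctions.
function effectiveBid(baseBid: number, deviceAdjustment: number): number {
  return baseBid * (1 + deviceAdjustment);
}

console.log(effectiveBid(4.0, -0.35)); // ≈ 2.60, the initial setting
console.log(effectiveBid(4.0, -0.25)); // ≈ 3.00, after we dialed it back
```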

The hardest sub-problem: deciding which findings to fix first

The trickiest part of any PPC audit isn’t surfacing the findings. It’s the prioritization decision after the audit document is finished. A 22-item findings list applied in random order can take six months to ship and produces noisy attribution along the way. The order matters more than the items.

The rule we use across audits is that measurement-layer fixes ship first, structure layer second, creative third, targeting last. The reason is dependency. If conversion tracking is wrong, no campaign-structure fix is measurable, because the bidder is learning from a corrupted signal. If structure is wrong, no creative fix is measurable, because the bidder is making apples-to-apples comparisons between things that aren’t comparable. If creative is wrong, no targeting fix is measurable, because Quality Score will swamp the bid adjustments.
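Mechanically, the rule reduces to a sort key. A trivial sketch; the layer names mirror the four layers above, and the field names are ours for illustration.

```typescript
// Fix order as a sort: measurement → structure → creative → targeting.
type Layer = "measurement" | "structure" | "creative" | "targeting";

const LAYER_ORDER: Record<Layer, number> = {
  measurement: 0,
  structure: 1,
  creative: 2,
  targeting: 3,
};

interface Finding {
  title: string;
  layer: Layer;
  estimatedImpact: number; // higher = bigger expected lift
}

// Across layers, dependency wins over impact, which is the whole point
// of the rule. Within a layer, ship the biggest expected lift first.
function shipOrder(findings: Finding[]): Finding[] {
  return [...findings].sort(
    (a, b) =>
      LAYER_ORDER[a.layer] - LAYER_ORDER[b.layer] ||
      b.estimatedImpact - a.estimatedImpact,
  );
}
```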

In this account, that meant week one was conversion tracking only. No campaign changes. No keyword adjustments. Just the GTM rebuild and a 14-day stabilization period to let Smart Bidding retrain. The client wanted us to “do something visible” in week one. We pushed back. Weeks two through six were structure, including the negative keyword cleanup, the PMax brand exclusions, and the campaign-type rebuilds. Weeks seven through 10 were creative, the new landing pages and the rebuilt RSAs. Weeks 11 through 13 were targeting, geo and device adjustments.

The same audit, sequenced differently, would have produced 60% of the conversion lift in twice the time. Sequencing isn’t a nice-to-have on a structured PPC audit. It’s the audit’s only real product.

The tooling stack we ran the audit on

Google Ads Editor for the structural download and bulk export. Google Tag Manager and GTM Preview Mode for measurement-layer validation. Google Analytics 4 cross-referenced against the Salesforce CRM for the conversion-attribution gap. The search query report exported to a Google Sheet and pivoted in SQL against BigQuery’s GA4 export, which let us tag every query with an intent score and category in 20 minutes instead of three hours.

Total audit-stage tooling was either free or already paid for inside the agency’s existing stack. We do not use Optmyzr, Semrush, or any of the SaaS PPC audit tools that the SERP wants to sell. The reason is simple. Those tools surface common findings well, but they miss measurement-layer issues entirely because they only read the Google Ads API and GA4 surface. The form-load conversion tracking issue that drove this account’s collapse was invisible to every PPC audit tool we’ve benchmarked.

The audit doc itself was a Notion page, structured in the four layers above. Total document length came in at 4,800 words. The client got the version with the prioritized fix list. We kept the longer version for internal reference.

What actually moved the conversion lift

Measured at month three against the month-zero baseline. Lead volume went from 4 a week to 14 a week. CPL went from $1,820 to $620. Qualified-lead-to-customer rate, measured against the CRM, climbed from 8% to 19% because the bidder was now pulling higher-intent traffic.

The biggest single lift came from conversion tracking

The biggest lift came from the conversion tracking rebuild. Not because it created leads directly, but because it stopped Smart Bidding from optimizing against a phantom signal. In the 30 days after the GTM rebuild, lead volume climbed from 4 a week to 9 a week with no other changes deployed. The bidder finally had honest data to learn from, and it reallocated budget within four weeks.

The second biggest lift came from negative keyword cleanup combined with match type discipline. Together they cut wasted spend from 38% to 11% of monthly budget. The reclaimed budget, roughly $8,100 a month, moved into queries that actually closed.

Mid-tier lifts and what produced almost nothing

Landing page rebuilds drove an average 41% lift in conversion rate on the five ad groups that got new pages. Geo targeting at the metro level dropped CPL another 24% inside the served area. Device bid adjustments dropped CPL 12% on top of that.

Two findings produced almost no measurable lift. Ad scheduling adjustments around business hours were neutral. The dayparting hypothesis was that pausing ads outside business hours would protect lead quality, but the change made no difference because the bidder had already learned the conversion pattern from the cleaned signal. The other neutral change was switching ad rotation from “optimize” to “rotate evenly” on a test ad group. The “rotate evenly” setting produced a small CTR drop and no conversion lift, so we reverted within two weeks.

A real PPC audit produces findings that range from load-bearing to neutral. Most SERP listicles imply every finding produces lift. They don’t.

What we thought would work but didn’t

Two changes shipped in month two on the assumption they’d add meaningful lift, and both produced statistically negligible movement.

Adding 30+ ad extensions we hadn’t been using

The audit surfaced that the account was using only 4 of the available extension types. Our hypothesis was that loading every relevant extension (callouts, structured snippets, sitelinks at every level, location, call, lead form, image, promotion, price) would lift CTR meaningfully. After three weeks of running the expanded set, CTR moved from 4.8% to 5.1%. Inside attribution noise. Conversion rate didn’t move at all. Some extensions never showed because Google’s auction logic suppressed them anyway. We pulled back to the 7 extensions that actually got served, and the rest stayed off.
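“Inside attribution noise” is checkable. Here’s a quick two-proportion z-test sketch; the impression counts are hypothetical, since the point is the method, not the account’s exact volumes.

```typescript
// Two-proportion z-test for a CTR change. Returns the z statistic;
// |z| < 1.96 means the change isn't significant at the 95% level.
function ctrZTest(clicksA: number, imprA: number, clicksB: number, imprB: number): number {
  const pA = clicksA / imprA;
  const pB = clicksB / imprB;
  const pooled = (clicksA + clicksB) / (imprA + imprB);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / imprA + 1 / imprB));
  return (pB - pA) / se;
}

// Hypothetical volumes: 4.8% vs 5.1% CTR on ~10,000 impressions each.
console.log(ctrZTest(480, 10000, 510, 10000)); // ≈ 0.98, not significant
```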

Switching ad rotation to “rotate evenly” for testing

The reasoning was that “optimize” rotation would prevent us from seeing the true performance differences between RSAs. We switched to rotate evenly on the highest-volume ad group for two weeks, then planned to use the cleaner data to rebuild the RSAs. CTR dropped 9%. Conversion rate dropped 14%. The data we wanted to collect arrived noisier than the optimize setting had been producing for free. We reverted in week three. Smart Bidding’s optimize-for-conversions rotation does what it claims, even when it’s annoying for testing.

What this PPC audit actually cost to run

The audit phase took 14 hours of senior strategist time spread across week one. Plus 6 hours of analyst support pulling the search query report, the conversion data exports, and the CRM cross-reference. Plus 3 hours from the GTM specialist validating tags. So roughly 23 hours of agency time before any fixes shipped.

The 90-day fix-and-stabilization phase took roughly 110 additional hours across the team. Senior strategist 45 hours, analyst 30 hours, GTM specialist 15 hours, copywriter 12 hours for landing pages and RSAs, ops 8 hours for reporting and weekly stand-ups. Total agency labor across the engagement was about 133 hours.

Tooling cost was minimal. Google Ads Editor and GTM are free. The BigQuery GA4 export ran $19 in storage and compute across the 90 days. The new landing pages were built on the client’s existing CMS, so no incremental hosting cost. Total client-side incremental cost was under $200.

The math: 133 hours of agency time, $200 of incremental tooling, against an account that recovered from 4 leads a week to 14, at a CPL drop from $1,820 to $620. The audit and fix paid back inside the first 6 weeks of the corrected lead flow. Most PPC audit engagements at this account size pay back inside 8 weeks if measurement is the leading issue, which it usually is.

How our shop runs PPC audits today

The agency runs paid acquisition for ecommerce and lead-gen brands, mostly in the US, UK, and Australia. Account audits are the standard entry point for new engagements. Most accounts come to us with broken or noisy conversion tracking, over-broad geo targeting, and Performance Max competing with Search on branded queries. The fix order is consistent: measurement first, structure second, creative third, targeting last. We default to a 90-day measurement window for stable accounts and a 30-day window for accounts that recently changed structure. A related read on the agency-build side, growing a PPC agency from 3 to 30 clients without a sales team, covers how this kind of audit-first practice gets built.

What to take from this

If you’re staring at a PPC audit deliverable that lists 22 findings without a fix order, the deliverable is a checklist, not an audit. The audit’s actual product is the dependency-aware sequence, not the list. The order is what determines whether the next 90 days of spend produces a recovery or another quarter of the same drift.

Three quick tests for whether an audit is real or a checklist. First, did the auditor open the GTM container and validate that each conversion tag fires on the right event, or did they only read the Google Ads dashboard? Second, did the audit name a measurement window for every finding, or did it report percentages without the lookback period? Third, did the audit prioritize findings by dependency chain, or did it list them in the order the auditor noticed them?

Most accounts we inherit failed at least two of these three tests with their prior agency’s audit. The PPC audit that recovered this $30K/month account didn’t find anything exotic. It just sequenced the basics correctly and let the bidder retrain on clean signal.

About the author

Ishant Sharma is the founder of Hustle Marketers, a Google Partner and Meta Business Partner agency working with e-commerce and lead-gen brands across the US, UK, UAE, and Australia. Twelve years in performance marketing. Trackable client revenue across the agency’s work has crossed $780 million. Writes from inside a live agency running 30+ client accounts.

