Most Google Ads case studies are unfalsifiable. They show the before number, the after number, and a logo. They skip the part where eight different conversion actions were firing on the same form, where Performance Max was eating 60% of the budget on branded searches, and where the location extensions were misaligned with the service area boundaries. Six months ago we took over a regional driving school account spending $11,400 a month on Google. Three locations, fourteen service-area suburbs, lead-gen via call and form. Lead volume climbed 280% and CPL dropped 40% by month six. This is the ledger from that engagement, the seven changes that did the work, and the two we expected to lift performance and didn’t.
What is a useful Google Ads case study, in operator terms?
A useful one is reproducible. It tells another practitioner what was wrong, what changed, in what order, and over what measurement window. It includes the conversion definitions, the spend mix, and the geo footprint. It includes the changes that didn’t work, because the absence of failure is the giveaway that the writeup is marketing.
This account had real attribution problems before we touched it. Phone calls were tracked through a third-party widget that fired on every page load, not on call duration. Form fills counted as conversions whether the user submitted contact info or just clicked the start button. Eight conversion actions were firing across the account, and three of them were duplicates of the same lead.
Before any optimization made sense, we rebuilt conversion tracking. Stripped the setup to four actions: a 60-second call, a completed form submission, a booking-page visit, and a mobile-only phone click. That single cleanup made the rest of the work measurable. Anything claimed about lift in the first 30 days against the old setup wouldn’t have been a useful Google Ads case study. It would have been a story.
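For anyone replicating the audit step, the inventory of live conversion actions is one GAQL query away. Below is a minimal sketch using the official google-ads Python client; the customer ID and config path are placeholders, not this account's.

```python
from google.ads.googleads.client import GoogleAdsClient

# Assumes a google-ads.yaml with a developer token and OAuth credentials.
client = GoogleAdsClient.load_from_storage("google-ads.yaml")
ga_service = client.get_service("GoogleAdsService")

query = """
    SELECT
      conversion_action.name,
      conversion_action.type,
      conversion_action.category,
      conversion_action.primary_for_goal,
      conversion_action.status
    FROM conversion_action
    WHERE conversion_action.status = 'ENABLED'
"""

# "1234567890" is a placeholder customer ID.
for batch in ga_service.search_stream(customer_id="1234567890", query=query):
    for row in batch.results:
        ca = row.conversion_action
        # Duplicate names and a pile of primary_for_goal = True rows are
        # the tell that the bidder is learning from an inflated signal.
        print(ca.name, ca.type_.name, ca.primary_for_goal)
```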
Why most lead-gen accounts under $20K/month leak budget by default
Three structural defaults cause the leak.
The first is broad match plus Smart Bidding sitting on a thin conversion signal. Google’s bidder is only as good as the data it learns from. With duplicated and inflated conversion actions in place, Smart Bidding was buying clicks that looked like conversions but weren’t. The 90-day search query report showed 41% of spend going to queries no driving school would intentionally bid on. “Free driving lessons online”, “DMV practice test”, “how to drive stick shift”. Smart Bidding wasn’t broken. The signal it was learning from was.
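That 41% figure came from reading the terms, not just the math, but a crude first pass is reproducible with one query: sum 90-day cost on search terms that never converted. A hedged sketch, same Python client as above, placeholder customer ID:

```python
from datetime import date, timedelta
from google.ads.googleads.client import GoogleAdsClient

client = GoogleAdsClient.load_from_storage("google-ads.yaml")
ga_service = client.get_service("GoogleAdsService")

end = date.today()
start = end - timedelta(days=90)
query = f"""
    SELECT
      search_term_view.search_term,
      metrics.cost_micros,
      metrics.conversions
    FROM search_term_view
    WHERE segments.date BETWEEN '{start:%Y-%m-%d}' AND '{end:%Y-%m-%d}'
"""

wasted = total = 0.0
for batch in ga_service.search_stream(customer_id="1234567890", query=query):
    for row in batch.results:
        cost = row.metrics.cost_micros / 1_000_000  # micros -> account currency
        total += cost
        if row.metrics.conversions == 0:
            wasted += cost  # spend on terms that never converted

print(f"Zero-conversion share of search spend: {wasted / total:.0%}")
```

Zero-conversion share overstates waste in a long-lag account, which is why the final negative list was built by hand from the term report, not from this number alone.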
The second is Performance Max sitting next to Search without exclusions. PMax was capturing branded queries that Search would have won at a quarter of the CPC. Brand traffic accounted for roughly 18% of conversions but 32% of total spend, which made the headline ROAS look acceptable while the prospecting half of the account starved.
The third is location extension and service-area misalignment. Both were pointed at the head-office address. The school served fourteen suburbs, three of them across a different metro boundary, and the radius targeting was set at 25 miles from head office, which excluded two of the highest-intent suburbs entirely.
Each one is a five-minute audit finding any honest Google Ads case study should disclose. Together they explained roughly 60% of the wasted spend in the first month.
The seven changes that produced the lift
What we did, in the order we did it, with the numbers each change produced over the six-month window. The changes are not all equal. The ranked-impact section further down shows which did most of the work.
1. Conversion tracking rebuild. Stripped eight conversion actions to four, with primary set to 60-second calls plus completed forms. Reported conversion volume dropped 38% in week one and the rest of the optimization became measurable. Without this, every later change would have been chasing a phantom signal.
2. Search query report cleanup with hard negative lists. Built an account-level shared negative list with 412 single-word negatives covering free, online, license-replacement, theory-test, and DMV-practice queries. Pulled negatives twice in the first month, then weekly for two months. Wasted spend dropped from 41% to under 12% by week six. An API sketch of the list build follows this list.
3. Match type discipline. Pulled all broad match keywords on Search. Rebuilt with phrase and exact only on the head terms. CPC on head terms dropped from $7.40 to $5.10 within three weeks because Google stopped serving the ads against unrelated intent.
4. Geo segmentation by service area, not radius. Replaced the 25-mile radius with location targeting on the fourteen specific suburbs. Built separate ad groups per cluster of three to four suburbs with localized RSAs. Conversion rate jumped from 4.2% to 7.1% within 30 days because ads named the suburb in the headline.
5. Performance Max separation from brand. Excluded all branded queries from PMax via the brand exclusion list. Forced brand search through a dedicated Search campaign with manual CPC. Prospecting share of budget climbed from 47% to 71% without lowering total volume. Another essay here on how Meta’s Advantage+ is reshaping creative teams makes the same point about platform algorithms in a different ecosystem. The bidder rewards good inputs, not more campaigns.
6. Mobile-first landing page swap. The existing page had a 14-field form and a 1.4MB hero video that auto-played. Mobile bounce sat at 76%. Swapped to a 4-field form with click-to-call above the fold. Mobile bounce dropped to 41% in two weeks.
7. Call extension and call tracking refit. Wired the call extension number through a tracking number that recorded duration. Turned off the third-party widget. Set the conversion threshold at 60+ seconds. Lead quality, measured by booking rate downstream, climbed from 22% to 39% over the next two months.
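Since the negative list in change 2 is the most mechanical of the seven, here is the promised sketch of the build via the google-ads Python client. The IDs are placeholders, five words stand in for the 412, and the campaign attach step is left as a comment.

```python
from google.ads.googleads.client import GoogleAdsClient

client = GoogleAdsClient.load_from_storage("google-ads.yaml")
customer_id = "1234567890"  # placeholder

# 1. Create the account-level shared negative keyword list.
shared_set_service = client.get_service("SharedSetService")
ss_op = client.get_type("SharedSetOperation")
ss_op.create.name = "Driving school - hard negatives"
ss_op.create.type_ = client.enums.SharedSetTypeEnum.NEGATIVE_KEYWORDS
shared_set = shared_set_service.mutate_shared_sets(
    customer_id=customer_id, operations=[ss_op]
).results[0].resource_name

# 2. Load it with single-word negatives (five shown; the real list had 412).
criterion_service = client.get_service("SharedCriterionService")
operations = []
for word in ["free", "online", "dmv", "theory", "replacement"]:
    op = client.get_type("SharedCriterionOperation")
    op.create.shared_set = shared_set
    op.create.keyword.text = word
    op.create.keyword.match_type = client.enums.KeywordMatchTypeEnum.BROAD
    operations.append(op)
criterion_service.mutate_shared_criteria(
    customer_id=customer_id, operations=operations
)

# 3. Attach the list to each Search campaign with CampaignSharedSetService.
```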
What about the conversion lag between the click and the booking?
Driving lessons aren’t impulse purchases. The average click-to-booking window in this account ran 4 to 9 days, which made standard click-attribution windows misleading. The first conversion volume report at week three looked like the cleanup had killed performance.
Two things fixed the read. We extended the click-through conversion window to 30 days on the rebuilt conversion actions, and we set up offline conversion import from the booking system. Bookings made in week 4 from clicks in week 1 finally counted in the right column.
The offline import was 90% of the value. It told Smart Bidding which click sources actually produced paying students, not just enquiries. Within six weeks, the algorithm started biasing budget toward keywords that converted to bookings, not keywords that converted to enquiry forms only. Booking-to-enquiry ratio climbed from 22% to 39% over two months without any change to the ad copy. The lesson goes against most lead-gen Google Ads case study writeups: optimizing for the proximate conversion misses the actual signal. The conversion that matters is the one that pays.
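Mechanically, the import is a GCLID joined to a booking. A hedged sketch with ConversionUploadService from the same Python client; every ID, timestamp, and value below is a placeholder, and the real job reads them from the booking system on a schedule.

```python
from google.ads.googleads.client import GoogleAdsClient

client = GoogleAdsClient.load_from_storage("google-ads.yaml")
customer_id = "1234567890"       # placeholder
booking_action_id = "987654321"  # placeholder: the offline "booking" conversion action

ga_service = client.get_service("GoogleAdsService")
upload_service = client.get_service("ConversionUploadService")

conversion = client.get_type("ClickConversion")
conversion.conversion_action = ga_service.conversion_action_path(
    customer_id, booking_action_id
)
conversion.gclid = "EAIaIQ_placeholder"  # captured at enquiry, stored with the booking
conversion.conversion_date_time = "2024-05-21 14:30:00+00:00"  # booking time, offset required
conversion.conversion_value = 450.0  # illustrative package value
conversion.currency_code = "USD"

request = client.get_type("UploadClickConversionsRequest")
request.customer_id = customer_id
request.conversions.append(conversion)
request.partial_failure = True  # the upload endpoint requires partial failure mode

response = upload_service.upload_click_conversions(request=request)
print(response.results)
```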
What we left untouched in the original account
Two things we deliberately did not change.
First, the ad copy on the existing top-performing RSAs. Three running RSAs had Quality Scores of 8 or 9 and CTR over 6.4%. Rewriting them would have reset the learning data. We added new RSAs with localized headlines per suburb but left the originals running in parallel for the first three months. Two eventually got paused once the localized variants outperformed them. One is still running at month seven, still at Quality Score 9.
Second, the campaign budget caps. The temptation when fixing a leaky account is to expand the budget once the leaks are sealed. We held daily budget at $380 across all campaigns for the full first six months, even when efficiency clearly justified more spend. The reason was simple. We didn’t trust the new conversion data until it had stabilized through 12 weeks of bookings, and scaling spend on a still-noisy signal is how agencies turn a 280% lift into a 40% retraction at month nine. The hold paid off. The agency is now scaling daily budget by roughly 18% a month, slowly enough that the algorithm keeps converting at the new CPL.
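To make the scaling pace concrete, here is the compounding arithmetic on the stated numbers, as a back-of-envelope sketch rather than account data:

```python
# ~18% monthly scaling from the held $380/day budget.
budget = 380.0
for month in range(1, 7):
    budget *= 1.18
    print(f"month {month} of scaling: ${budget:,.0f}/day")
# Lands near $1,026/day: roughly 2.7x in six months, reached slowly
# enough for Smart Bidding to keep converting at the new CPL.
```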
What actually moved the lead volume
Measured at month six against the month-zero baseline. Lead volume defined as 60+ second calls plus completed form submissions, weighted equally.
The biggest single lift came from the conversion tracking rebuild. Not because it created leads, but because it stopped Smart Bidding from chasing the wrong signal. In the 30 days after the cleanup, lead volume climbed 84% with no other changes deployed yet.
The second biggest lift came from match type discipline plus the negative keyword list. Together they cut wasted spend from 41% to 12% of total budget, and the reclaimed budget moved into queries that actually closed. This change accounted for roughly 70 incremental leads a month.
Geo segmentation by suburb was the third. Conversion rate climbed from 4.2% to 7.1% within 30 days because the ads named the suburb in the headline. Roughly 35 incremental leads a month came from this change alone.
The mobile landing page swap drove a smaller volume lift but a meaningful CPA improvement. Mobile bounce dropped from 76% to 41%, and the mobile share of conversions climbed from 38% to 56%. Mobile CPA dropped 22%.
Performance Max separation produced negligible direct lift but reclaimed roughly 24% of total spend that was being eaten by branded queries. That reclaimed spend is what funded the prospecting expansion in months 5 and 6.
Call extension refit produced no additional volume but cut junk calls by an estimated 60%. The booking team’s time per inbound call dropped, which matters more downstream of the campaign than the ads team usually admits.
The ranked-impact section is what separates a real Google Ads case study from a marketing brochure.
What we thought would work but didn’t
Two changes ate budget and produced no measurable lift.
A four-week test on dynamic search ads (DSAs) targeting the existing service pages produced 11 conversions at a CPA 2.4x the account average. The DSA ad text was generic, the landing-page experience was inconsistent, and the queries DSA pulled were too broad to convert at the same rate as exact-match search. We paused at week five. The lesson was not that DSAs are bad. The lesson was that DSAs need a tightly scoped page set and aggressive negative-keyword discipline, neither of which the account had time to build inside a six-month engagement.
A three-week YouTube prospecting test at the metro level, running a 30-second testimonial creative, produced a thousand cheap views and zero attributable leads. The view-through-conversion column lit up. The actual booking system showed no new prospects. View-through is not a lead-gen signal in this vertical at this account size, and the spend would have been better routed to additional Search budget on the suburbs converting at 7%+. Killed at three weeks. Including failures is what separates honest reporting from a sales pitch.
How our shop works with local lead-gen accounts
The agency runs paid acquisition for ecommerce and lead-gen brands, and roughly a quarter of the lead-gen book is local service businesses like driving schools, home services, and clinics. The repeating pattern across those accounts is the same shape that showed up here. Conversion tracking is broken in some way, geo targeting is over-broad, and Performance Max is silently eating brand traffic that Search would have won cheaper. The fix is rarely a new tactic. It’s an audit, an honest baseline, and the discipline to leave the rebuilt setup running long enough for Smart Bidding to learn the new signal. The way this Google Ads case study repeats across other accounts is what makes it useful past one engagement.
What to take from this
If you’re reading this looking for a single tactic that produced a 280% lift, there isn’t one. The lift came from a sequence. Conversion tracking first, then negative keywords, then match type discipline, then geo precision, then PMax separation, then the landing page, then call quality. Skip the order, lose the math.
A related read on this site, programmatic SEO that prints money versus the kind that poisons the well, makes a related point on a different channel: scaled tactics work when the foundation is right and bleed when it isn’t. The Google Ads case study you’re reading is the same lesson, applied to paid search.
Six months. 280% more leads. 40% lower CPL. Seven changes that did the work and two that didn’t. That’s the ledger.
About the author
Ishant Sharma is the founder of Hustle Marketers, a Google Partner and Meta Business Partner agency working with e-commerce and lead-gen brands across the US, UK, UAE, and Australia. Twelve years in performance marketing. Trackable client revenue across the agency’s work has crossed $780 million. Writes from inside a live agency running 30+ client accounts.
