

---
name: ad-budget-allocation
description: Help the user decide where to spend, cut, or shift ad budget across a funnel without falling for last-touch traps. Use this skill whenever a user asks "which campaigns should I kill", "where should I put more money", "which ads are winning", "what's our best ROAS", or wants budget recommendations across Top-of-Funnel (TOF), Middle-of-Funnel (MOF), and Bottom-of-Funnel (BOF) / retargeting. Trigger for D2C brands, performance marketers, and anyone uploading ad spend data, conversion data, or attribution reports. Especially trigger when the data looks like it might be last-touch or single-source, since that is the failure mode this skill exists to catch. Do NOT trigger for creative analysis (use creative-ad-analysis), audience research, or media planning that does not involve budget shifts.
---

# Ad Budget Allocation Across the Funnel

A skill for giving honest budget advice across an ad funnel — and for refusing to give shortsighted advice when the data does not support it.

This skill exists because the most common mistake in performance marketing is killing the campaigns that feed the funnel. A retargeting ad cannot retarget anyone if nothing fills the top. Last-touch data hides this. It credits the final click and treats every step before it as waste. Act on that view and you cut the very ads that were doing the real work.

The job here is not to be clever. It is to slow down, check what the data can and cannot show, and answer accordingly.

## When to use this skill

Use it when the user wants to:

- Decide which campaigns to pause, cut, or scale
- Shift budget between TOF, MOF, and BOF
- Read a performance report and act on it
- Get a recommendation backed by their numbers

Do not use it for creative critique, audience targeting, or copywriting.

## The first question, every time

Before any recommendation, ask what attribution model the data uses. This is not optional.
The whole post turns on this one point: a model that only sees last-touch data will tell you to kill the ads that built the customer's interest in the first place.

Ask plainly:

- Is this last-click / last-touch attribution?
- Is this first-touch?
- Is this a multi-touch model (linear, time-decay, position-based, data-driven)?
- Is it pulled from one platform's reporting (Meta, Google, TikTok) or from a unified source (GA4, a CDP, an MMM, post-purchase survey)?
- What is the lookback window?

If the user does not know, help them find out before going further. If they cannot find out, the rest of the analysis has to carry a warning.

## What different attribution sources can and cannot tell you

*Single-platform last-click (e.g., Meta Ads Manager only)*

- Tells you: which ad got the final click inside that platform's window
- Hides: every touch on other platforms, every touch outside the window, every TOF ad whose job was to plant the seed
- Safe to act on: pausing clearly broken ads (zero clicks, terrible CTR, no spend efficiency at the ad level)
- Not safe to act on: killing entire funnel stages

*Last-touch across platforms (e.g., GA4 default)*

- Tells you: which channel closed the sale
- Hides: assist paths, view-through influence, the role of upper-funnel
- Safe to act on: noticing channels that get zero conversions ever
- Not safe to act on: judging brand or awareness campaigns

*Multi-touch / data-driven attribution*

- Tells you: a fuller picture of which touches contributed
- Hides: offline word of mouth, organic discovery the model cannot see
- Safe to act on: most budget shifts, with caveats
- Still imperfect — no model is ground truth

*Media Mix Modelling (MMM) or incrementality tests*

- Tells you: lift caused by spend, not just correlation
- Hides: short-term tactical detail
- Safe to act on: strategic budget allocation across channels and stages

*Post-purchase survey ("how did you hear about us?")*

- Tells you: what the customer remembers
- Hides: subconscious touches, things they forgot
- Safe to act on: as a sanity check against platform data

## The workflow

### Step 1: Map the funnel

Before looking at numbers, ask the user to label each campaign by stage:

- TOF: brand awareness, prospecting, cold audiences
- MOF: engagement, consideration, warm audiences
- BOF: retargeting, abandoned cart, existing customers

If campaigns are not labelled this way, work with the user to label them. A spend report with no funnel structure is just a list of numbers.

### Step 2: Check the data shape

Confirm:

- Attribution model and source (the conversation from earlier)
- Time window — short windows punish TOF, long windows flatter it
- Whether view-through conversions are included
- Whether the report covers all platforms the brand spends on or just one

Surface gaps to the user. Do not fill them with assumptions.

### Step 3: Read the numbers, stage by stage

Look at each stage on its own terms. Different stages have different jobs and different metrics:

- *TOF*: judge on reach, CPM, CTR, landing page engagement, branded search lift, new-visitor volume. Do *not* judge TOF on direct conversions or ROAS in a last-touch view.
- *MOF*: judge on engagement rate, add-to-cart, time on site, return visits, email signups.
- *BOF*: judge on conversion rate, ROAS, CPA. This is where direct-response metrics actually mean something.

If the user's data only has conversion metrics for every stage, say so. Tell them they are looking at BOF metrics for non-BOF campaigns and that the comparison is unfair to the upper funnel.

### Step 4: Look for the dependency

Before recommending any cut to TOF or MOF spend, check what depends on it. Useful questions:

- What share of BOF retargeting audiences came from TOF traffic in the last 30–90 days?
- If TOF were paused, how long until retargeting pools shrink?
- Are there leading indicators (new visitors, branded search, direct traffic) that move with TOF spend?
If the user cannot answer these, the safe recommendation is *do not cut TOF based on last-touch alone*. Suggest a holdout test instead — pause TOF in one region or for a defined period, watch what happens to BOF performance one to two weeks later, then decide.

### Step 5: Write the recommendation

Structure it like this:

*What the data clearly supports*

Recommendations you can make with confidence given the attribution model in play. Usually narrow: pause this specific underperforming ad inside a stage, shift budget between two ads doing the same job.

*What the data hints at but does not prove*

Patterns worth testing, with the test you would run to confirm.

*What the data cannot tell you*

The blind spots. Name them. If it is last-touch only, say last-touch only. If TOF is being judged on conversions, say so.

*What I would not do based on this data*

This is the most important section. Spell out the moves that would be tempting but unsafe. "Do not pause TOF brand campaigns based on this report — the report cannot see their contribution to the retargeting pool that is driving your ROAS."

## The hard rule

Never recommend cutting an entire funnel stage based on single-source last-touch data. Never.

If the user pushes for a recommendation anyway, give them the test design instead — a holdout, a geo-split, a two-week pause in a controlled segment — so they can find out without burning the whole funnel down.

This is the lesson from the post. The model in the story was not wrong about the numbers it saw. It was wrong because it acted as if those numbers were the whole picture. The user followed the advice, killed TOF, and watched BOF dry up two days later because there was no one left to retarget.

## Things to avoid

- *Do not equate "zero conversions" with "no value"* for TOF or MOF campaigns under last-touch attribution. Zero last-touch conversions is the expected state for a working brand awareness ad.
- *Do not aggregate across funnel stages.* "Average ROAS across all campaigns" is a meaningless number when the campaigns have different jobs.
- *Do not treat one platform's view as the truth.* Meta thinks Meta did it. Google thinks Google did it. TikTok thinks TikTok did it. They cannot all be right.
- *Do not give a confident recommendation when the data is single-source last-touch.* Give a cautious one and name the blind spot.
- *Do not skip the attribution question* even if the user is in a hurry. Five minutes of clarification saves a week of wrecked funnel.

## Example output shape

*What the data clearly supports*

- Pause Ad Set 14 inside the BOF retargeting campaign. CPA is 4x the others in the same stage with comparable spend. This is a within-stage call, so attribution bias does not change the answer.

*What the data hints at but does not prove*

- TOF Campaign B may be more efficient than TOF Campaign A at filling the retargeting pool. To test, pause B for two weeks and watch new-visitor volume and BOF audience size.

*What the data cannot tell you*

- This report is Meta last-click only. It cannot see Google search lift from your TOF video ads, organic traffic those ads drove, or any view-through influence. Roughly 40% of D2C purchase journeys touch more than one channel — none of that is in this view.

*What I would not do based on this data*

- Cut either TOF campaign based on this report alone. Meta last-click cannot see what TOF contributes to the retargeting pool that is driving the BOF ROAS above.
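The per-stage reading from Step 3 and the no-aggregation rule above can be sketched in a few lines. Every campaign name and number here is invented for illustration:

```python
# Sketch: judge each funnel stage on its own metric instead of one
# blended ROAS. All campaigns and numbers below are invented.
campaigns = [
    {"name": "Prospecting video",   "stage": "TOF", "spend": 4000, "revenue": 200,  "new_visitors": 52000},
    {"name": "Engagement carousel", "stage": "MOF", "spend": 1500, "revenue": 900,  "new_visitors": 8000},
    {"name": "Cart retargeting",    "stage": "BOF", "spend": 1000, "revenue": 5200, "new_visitors": 300},
]

# The trap: one blended number. Here it is about 0.97, which reads as
# "barely breaking even", and last-touch ROAS makes TOF look like pure
# waste (200 / 4000 = 0.05).
blended_roas = sum(c["revenue"] for c in campaigns) / sum(c["spend"] for c in campaigns)

# The fix: group by stage, then apply the metric that matches the job.
stage_totals = {}
for c in campaigns:
    totals = stage_totals.setdefault(c["stage"], {"spend": 0, "revenue": 0, "new_visitors": 0})
    for key in ("spend", "revenue", "new_visitors"):
        totals[key] += c[key]

bof_roas = stage_totals["BOF"]["revenue"] / stage_totals["BOF"]["spend"]
tof_visitors_per_dollar = stage_totals["TOF"]["new_visitors"] / stage_totals["TOF"]["spend"]
```

On the blended number alone the account looks marginal. Split by stage, BOF is clearly profitable, and TOF's job (filling the pool) is measured in visitors per dollar, not in last-touch revenue.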


