Hey everyone,
It’s Ash here, and I’m taking over today to let you know that my new podcast, Ad Spend, has just launched.
The goal is simple: sit down with the best media buyers in the game and pull out the exact strategies they're using to scale brands right now.
For our first episode, we brought on Marin Istvanic, Senior Media Buyer at Inspire Brands and one of the smartest operators I know.
He's managed over $150 million in Meta ad spend and built multiple brands from scratch using the exact framework we're breaking down today.
Here's the reality: Meta feels harder in 2025.
The Andromeda algorithm changed how the platform optimizes. Health and wellness brands are getting hammered with restrictions. And even if you're not in a restricted category, you're probably seeing one of two things:
Flat year-over-year revenue
More revenue but way less profit
Marin put it perfectly: "Iterations don't work like they used to. Everything just feels less efficient. We're paying more for the click, and that click is converting less."
So what do you do?
You adapt. You test smarter. You build a system that actually scales.
That's exactly what Marin walked us through. His full testing structure, scaling strategy, creative process, offer framework, and the tools that make it all work.
We’re covering his full strategy today.
Let's get into it.
On the Menu:
The Testing Structure - ABO, Cadence & Scaling
Exclusions, Attribution & ASC - The Setup That Scales Winners
Creative Strategy - Finding Angles That Actually Work
Offers & Landing Pages - The Hidden Leverage Points
Not All CTV Platforms Are Created Equal
We're running our first CTV tests at Obvi right now, and the due diligence process opened my eyes. The CTV space is crowded… who you choose matters way more than most marketers realize.
If you've been anywhere near the TV/CTV world this year, you've probably noticed something: there are suddenly hundreds of platforms promising to "unlock CTV" for your brand.
Here's the truth no one really says out loud: They are not all doing the same thing.
Most platforms today only offer programmatic CTV, which is fine in theory, but comes with tradeoffs: less transparency into where your ads actually run, a higher risk of low-quality supply, and measurement that doesn't always hold up when real dollars are on the line.
As TV becomes a bigger part of the 2026 playbook, it's worth choosing a partner that gives you more than one lane. The strongest operators I know look for three things:
Access across all of TV - not just programmatic CTV, but direct CTV publisher inventory and linear
Clear transparency - real visibility into where ads land and what inventory you're actually buying
Real measurement - incrementality, outcomes, reporting that matters
That’s why brands like Tecovas, Ridge Wallet, and Calm rely on Tatari to give them the control and clarity needed to treat TV like a performance channel.
If you’re building your 2026 plan or already running TV and want a real gut check, this is the time to validate what’s working and what isn’t. Tatari gives you the clarity to know exactly where TV is driving results.
→ Get started or request an audit at Tatari.tv
P.S. If you’re seeing offers from other providers that seem too good to be true (guaranteed results, giant ad credits, free creative), that’s because they usually are. Growth is never free.
The Testing Structure - ABO, Cadence & Scaling
Most brands overcomplicate Meta testing.
They're running CBO campaigns with 15 different concepts, letting Facebook decide what gets budget, and wondering why their winners never scale.
Marin's approach is different and it's built on one core principle: control.
Why ABO Over CBO
Here's the problem with CBO: Facebook spends based on engagement metrics, not performance.
It'll dump budget into the ad with the highest thumb stop ratio or click-through rate, even if that ad isn't actually converting.
Marin uses ABO (Ad Set Budget Optimization) because it lets him scale each concept independently.
"Sometimes I get from $100 to $5K in a single adset over the course of two weeks," he told us.
"When I do that with five or six adsets, I'm scaling right away—even in the testing."
But here's the catch: ABO only works if you have a decent hit rate.
If you're getting 3 out of 10 winners, the winners pay for the losers. If you're only hitting on 1 out of 10, you're going to burn cash fast.
Marin's rule: You need at least a 30% hit rate for ABO to be worth it.
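To see why that 30% threshold matters, here's a back-of-the-envelope sketch. The test budget, winner return multiple, and loser recovery rate below are our assumptions for illustration, not Marin's numbers:

```python
# Illustrative math behind the 30% hit-rate rule. Assumptions (ours, not
# Marin's): each test adset spends a fixed budget, losers recover only a
# fraction of their spend in revenue, and winners return a profit multiple.

def batch_profit(num_tests, hit_rate, test_budget, winner_multiple, loser_recovery):
    """Rough profit of one testing batch.

    winner_multiple: profit returned per $1 a winner spends (e.g. 1.5)
    loser_recovery: fraction of a loser's spend recovered in revenue (e.g. 0.4)
    """
    winners = num_tests * hit_rate
    losers = num_tests - winners
    winner_profit = winners * test_budget * winner_multiple
    loser_loss = losers * test_budget * (1 - loser_recovery)
    return winner_profit - loser_loss

# At a 30% hit rate the winners cover the losers; at 10% they don't.
print(batch_profit(10, 0.3, 300, 1.5, 0.4))  # positive
print(batch_profit(10, 0.1, 300, 1.5, 0.4))  # negative
```

Under these assumptions, a 3-in-10 batch nets out positive while a 1-in-10 batch burns cash, which is exactly the dynamic Marin describes.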
The Testing Structure
Each adset = one concept.
Each concept = 3-5 variations.
For example:
Adset 1: Four anti-aging videos (same angle, different creators)
Adset 2: Three "us vs. them" images (same hook, different visuals)
Every adset starts at $100/day.
And here's where the cadence matters.
Wednesday Launches & Weekend Scaling
Marin launches all tests on Wednesday because performance is better on weekends, and he wants his tests running through Saturday and Sunday.
"If I launch on a weekend, I only get two days of decent performance before Monday and Tuesday tank it."
Here's the evaluation schedule:
Wednesday - Launch tests at $100/day
Friday - First check (2.5 days of data)
Weekend - Let them run
Monday - Final evaluation
How Marin Scales Winners Fast
If something's working on Friday, he doesn't wait.
He scales the budget immediately using this ladder:
$100 → $200 → $350 → $550 → $750 → $1,000
Each bump happens every 1-3 days, depending on performance.
"I can get from $100 to $1K in a week and a half," Marin said.
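The ladder is simple enough to sanity-check: five bumps separated by 1-3 days each. A quick sketch (the fixed 2-day cadence is just one scenario; in practice Marin adjusts per adset):

```python
# Marin's budget ladder: each step is one bump, applied every 1-3 days
# when performance holds. At a 2-day cadence (an assumed scenario), the
# climb from $100/day to $1,000/day takes about a week and a half.

LADDER = [100, 200, 350, 550, 750, 1000]

def days_to_top(days_between_bumps):
    """Days from launch until the final budget tier is reached."""
    bumps = len(LADDER) - 1  # 5 bumps to go from $100 to $1,000
    return bumps * days_between_bumps

print(days_to_top(2))  # 10 days at a 2-day cadence
print(days_to_top(3))  # 15 days at the slow end
```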
By Monday, he's split his tests into three buckets:
1/3 scaled (the winners)
1/3 killed (the obvious losers)
1/3 still running (the maybes, either same budget or one variation killed)
Then Wednesday hits, and the cycle repeats with a fresh batch of tests.
Exclusions, Attribution & ASC - The Setup That Scales Winners
Once you've found winners in testing, the next question is: how do you scale them without killing performance?
This is where most brands fumble. They either don't exclude properly, use the wrong attribution window, or throw everything into one campaign and hope for the best.
Marin's scaling system fixes all three.
The Exclusion Strategy
90% of Marin's clients care about new customer acquisition.
So he excludes aggressively:
Full purchase list - 180 days from Klaviyo + Pixel
Website visitors - 180 days (only for organically-built brands)
"For brands built organically, you get so many view conversions, I don't want Facebook claiming sales it's not responsible for." - Marin
The View-Through Conversion Warning Sign
Here's the benchmark:
20-25% view conversions = Normal
30%+ view conversions = Test 7-day click only
80%+ view conversions = Facebook is stealing credit
If you're seeing high view-through conversion rates, you're not scaling; you're just paying Facebook to take credit for organic sales.
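Those benchmarks translate into a simple health check. The cutoffs are the ones quoted above; the function itself is our sketch, not a tool Marin uses:

```python
# View-through conversion share as an account health check. Thresholds
# mirror the benchmarks above; the function and labels are ours.

def view_conversion_flag(view_share):
    """view_share: fraction of conversions attributed on view (0-1)."""
    if view_share >= 0.80:
        return "red flag: Facebook is claiming organic sales"
    if view_share >= 0.30:
        return "elevated: test 7-day click only"
    if view_share >= 0.20:
        return "normal"
    return "low"

print(view_conversion_flag(0.22))  # normal
print(view_conversion_flag(0.85))  # red flag: Facebook is claiming organic sales
```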
Attribution Settings
Marin uses 7-day click, 1-day view as his default because 90% of his clients use third-party tracking (Triple Whale, Northbeam), so he's not making decisions based on Facebook data alone.
The exception: Images and mashups.
"I know they do retargeting," he said. "So I test them with 7-day click only to prevent Facebook from over-indexing on warm traffic."
Moving Winners to ASC (Advantage+ Shopping)
Here's where the magic happens.
Once a test proves itself, Marin takes the post ID and moves it into an Advantage+ Shopping campaign with cost cap.
"On Tuesday, the campaign might spend $5K. On Thursday, $7K. On Saturday, $19K and I'm not touching anything," Marin explained.
Cost cap lets Facebook chase demand when it's there (weekends) and pull back when it's not (weekdays).
Campaign Structure for ASC
Marin runs one ASC campaign per product because different products have different AOVs.
If Product A has a $100 AOV and Product B has a $200 AOV, they need different CPA targets. Mixing them in one campaign with a single cost cap kills profitability.
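A rough sketch of why one cap can't serve both products. The breakeven rule (CPA target = AOV × gross margin) and the 60% margin are our simplifications for illustration:

```python
# Why a single cost cap breaks with mixed AOVs. We assume (our
# simplification, not a quoted rule) that the breakeven CPA target
# is AOV x gross margin.

def breakeven_cpa(aov, gross_margin):
    """Max CPA before an order loses money, ignoring LTV."""
    return aov * gross_margin

cpa_a = breakeven_cpa(100, 0.60)  # Product A: ~$60 target
cpa_b = breakeven_cpa(200, 0.60)  # Product B: ~$120 target

# A single blended cap of, say, $90 overpays for Product A
# and chokes delivery on Product B.
print(cpa_a, cpa_b)
```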
He starts by importing 5-10 winners into ASC, then feeds it new ads weekly.
Creative Strategy - Finding Angles That Actually Work
Here's the truth: Ads are just amplifiers.
"With technical media buying, I can improve your results by 20-30% max," Marin said.
"Everything else depends on the offer and creatives."
So how do you find angles that actually work?
Mining Angles from Customers
Marin starts with two tools:
Atria - Upload your customer list and it maps out which angles people are buying from
Post engagement - Analyzes Facebook ad comments and categorizes them by angle
"You'll find people telling you their whole story in the comments, you can repurpose that and run it as an ad."
The Creative Progression
Once you find a winning angle, here's the sequence:
Static ads → Test the angle
UGC → If static works, move to creator-led content
Influencer → If UGC works, create a partnership with an influencer
Dedicated landing page → Build a page specific to that angle
You want everything to be congruent; the angle in the video should match the angle used on the landing page.
Why Images Do Retargeting (Even Good Ones)
Marin had one ASC campaign where a video spent $100K at 1.1 frequency. An image in the same campaign spent $10K at 3.0 frequency.
"That image is only performing because it's bottom-of-funnel, eating what the video generated."
For health and wellness brands, focus on video. Images work for self-explanatory products (apparel, jewelry, toys), but problem-solving products need videos to build awareness.
Format Diversity
Mix it up:
✅ Scripted vs. freestyle
✅ Different creators with the same script
✅ Market awareness levels (problem-aware vs. solution-aware)
"You're not going to scale on one angle and one creative," Marin said. "You need different bites of the pie to unlock different audiences."
Offers & Landing Pages - The Hidden Leverage Points
After creative, your offer has the biggest impact on performance.
Marin saw this firsthand with multiple clients: "Hit rate was 20%, which is decent. Then we changed the offer and hit rate went to 45%."
He even relaunched old tests that had failed and they worked with the new offer.
What's Working Now
Free gifts with high perceived value - Something expensive on your site that makes people think, "wow, this is a no-brainer."
Price anchoring - Make the middle option look terrible so the premium option feels like a steal.
"When you have one-month, two-month, and three-month options, prime them to buy three months."
Track profit per visitor to determine winning offers, not just conversion rate or AOV.
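Here's what profit per visitor looks like in practice, and why it can disagree with conversion rate or AOV alone. All numbers below are hypothetical:

```python
# Profit per visitor as the tie-breaker between offers. A free-gift offer
# carries a real cost per order, but can still win if the conversion lift
# outweighs it. All figures here are hypothetical.

def profit_per_visitor(visitors, conversion_rate, aov, gross_margin, gift_cost=0.0):
    """Gross profit generated per landing-page visitor."""
    orders = visitors * conversion_rate
    profit = orders * (aov * gross_margin - gift_cost)
    return profit / visitors

offer_a = profit_per_visitor(1000, 0.030, 80, 0.70)                # no gift
offer_b = profit_per_visitor(1000, 0.045, 80, 0.70, gift_cost=12)  # gift offer

print(round(offer_a, 2), round(offer_b, 2))  # the gift offer wins here
```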
Landing Page Strategy
Match the page to the product type:
Self-explanatory products (apparel, jewelry) → PDP
Problem-solving products (supplements, skincare) → Advertorial or hybrid sales page
But track your click-through rate. You'll lose 60-70% of traffic on an advertorial, so you need to make it up with conversion rate.
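The math on that tradeoff: assuming a 2% baseline conversion rate straight to the PDP and a 35% click-through on the advertorial (a 65% loss, within the range quoted above), the back end has to convert roughly 2.9x better just to break even on purchases per click:

```python
# Breakeven math for the advertorial funnel. The 2% direct-to-PDP CVR and
# 35% advertorial pass-through are assumed figures for illustration.

def purchases_per_click(pass_through, conversion_rate):
    """Purchases per ad click: traffic passed through x end conversion rate."""
    return pass_through * conversion_rate

direct = purchases_per_click(1.00, 0.020)  # straight to PDP at 2% CVR
needed_cvr = direct / 0.35                 # CVR the advertorial path needs

print(round(needed_cvr, 4))  # 0.0571 -> about a 5.7% conversion rate to match
```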
Pro tip: Hybrid sales pages work well - a mix between an advertorial and a PDP that keeps visitors engaged.
Summary
Here's the framework:
Testing - Launch on Wednesday with ABO. Start at $100/day per adset. Scale winners fast ($100 → $1K in about 10 days). Kill obvious losers at Monday's evaluation.
Exclusions - Full 180-day purchase list. Watch view conversions (20-25% is normal, 80% is a red flag). Use 7-day click for testing.
Scaling - Move winners to ASC with cost cap. One campaign per product. Feed it new ads weekly. Let weekend demand drive spend.
Creative - Mine angles from customer reviews and ad comments. Progress: Static → UGC → Influencer → Dedicated landing page. Focus on video for problem-solving products.
Offers - Test free gifts with high perceived value and price anchoring. Match landing pages to product type. Track profit per visitor. Always be testing.
The Biggest Takeaway
You don't need to reinvent the wheel. You need a system that gives you control over testing, scaling, and what gets budget.
Marin's spent $150M+ figuring this out. Now you have the playbook.
All the best,
Ash