Swing Bigger, Win Bigger 🏏

A Scientific Argument For Making Bigger Bets

Hey there, and welcome back for another bite to chew on.

We hope you've had a great weekend with your loved ones - and are ready to kill your to-do list in the week ahead.

Before we dive into today's newsletter, we just wanted to share the list of retail partners we've been working with. We got upwards of 100 emails asking for this after last week's newsletter - so we thought we might as well share it with everybody. Click here to see it.

This week - we're super pumped to tell you about a "way of thinking" (for lack of a better term) that has been something of a north star in the way we operate our business and generally think about growth.

Simply put - we're going to talk about the importance of making bigger and bolder changes/tests and show you the statistical case for why it's better from a growth perspective.

This will be a heavy one - so sit back and let's dive into it.

Most of what you've been taught about A/B testing is wrong!

Every single one of you has probably heard about the compounding effect and how a small incremental 2% improvement here and 3% there results in a crazy growth rate when stretched over a long enough period (twenty 2% wins compound to 1.02^20 ≈ 1.49 - almost a 50% lift).

Now, that's all good - but the problem is that many people mistakenly take this concept and apply it directly to A/B testing, where it works out worse than you might think.

In fact - using the methodology where you're trying to make a bunch of small improvements over and over again may be the death of your company…

That being said - we do understand the usual justification for focusing on smaller improvements: A) they are easier and faster to come up with and implement, and B) they still contribute meaningfully to the bottom line.

Now - let's dive into the numbers and discover why it makes more sense to bet on big moves than on small ones.

A Tale of Two Strategies

We're going to assume that we can choose between two different strategies, namely A and B.

Strategy A has: 

- 50% chance that the performance of the new variation is better than that of the old one (i.e. a successful test, because the new variation improves the conversion rate), and

- Each successful test yields a 5% uplift in conversion rate

Strategy B has:

- 20% chance that the performance of the new variation is better than that of the old one and

- Each successful test yields a 10% uplift in conversion rate

We can also rename the strategies: A is "making smaller changes with a higher success rate" and B is "making bigger changes with a lower success rate."

Now let's map out how these two different testing strategies would play out.
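If you prefer seeing it in numbers, here's a minimal simulation sketch in plain Python, using the win rates and uplifts we just defined. The function name and the choice of 20 tests are ours for illustration - the point is the logic, not the exact figures.

```python
import random

def avg_cumulative_lift(win_prob, uplift, n_tests, n_runs=10_000, seed=1):
    """Average cumulative conversion-rate multiplier after n_tests experiments."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_runs):
        lift = 1.0
        for _ in range(n_tests):
            if rng.random() < win_prob:   # this test found a real winner
                lift *= 1 + uplift        # winners compound multiplicatively
        total += lift
    return total / n_runs

# Strategy A: smaller changes, 50% win rate, +5% per win
# Strategy B: bigger changes,  20% win rate, +10% per win
for name, win_prob, uplift in [("A", 0.50, 0.05), ("B", 0.20, 0.10)]:
    print(name, round(avg_cumulative_lift(win_prob, uplift, n_tests=20), 2))
# After 20 tests this lands around 1.64x for A and 1.49x for B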

Based on this, the natural conclusion is that strategy A is better than strategy B, because over the same number of tests the expected cumulative lift is greater.

… but hold on a second

Because this is not the full picture - we're missing a very crucial piece of information.

That is - the sample size required to tell whether a test winner is **actually** a test winner, or whether the result was just down to pure chance.

And in order to do this - we need to talk briefly about two statistical concepts that are super important to understand.

Picture this: you flip a coin twice. There's a good chance it'll land on the same side both times, right? But if you keep flipping that coin, the share of heads and tails will drift closer and closer to 50/50. That's the law of large numbers at work.

The same thing goes for split testing. The more visitors you run through a test, the closer the observed conversion rates get to the "real" ones (the true means, for the statisticians out there). And to put a stamp on a test, you need statistical significance. That's just a fancy way of saying, "Hey, this wasn't just a fluke!"

Now, the magic of split testing is that we don't need to know the "real" conversion rates. We're just interested in which one is higher. And this is where confidence levels come in. They're simply a measure of how sure we want to be about the result we've gotten. The right level depends on your risk appetite - which we'll talk more about in a coming newsletter - but most people run their tests at a 95% or 99% confidence level.
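For the curious, here's roughly what "putting a stamp on a test" looks like under the hood - a minimal sketch of a standard two-proportion z-test in plain Python. The function and the example numbers are ours for illustration, not any particular testing tool's API.

```python
from statistics import NormalDist
from math import sqrt

def significant(conv_a, n_a, conv_b, n_b, confidence=0.95):
    """Two-proportion z-test: is B's conversion rate different from A's?
    conv_* are conversion counts, n_* are visitor counts."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under "no difference"
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))      # two-sided
    return p_value < (1 - confidence), p_value

# e.g. 10.0% vs 10.8% conversion on 20k visitors per arm
print(significant(2000, 20_000, 2160, 20_000, confidence=0.95))
# -> (True, ~0.009): at 95% confidence we'd call this variation a winner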

Alright - so now that we have clarified those terms and given you a brush-up on your college stats class, we're ready to continue.

The inverse relationship between sample size and marginal uplift

Now we've arrived at the fun part of this newsletter.

The thing with sample sizes is this: the larger the uplift (or drop) a variation can produce, the smaller the sample size you need to declare the result statistically significant.

Likewise - for tests that yield smaller lifts, you'll need a larger sample size to conclude the test.

Alright, so far, so good - now let's go back to the example from earlier and map out the sample size we'd need to declare a winner under each of the two strategies.

Adding to that - we're also going to assume a current baseline conversion rate of 10%, a 50/50 split between control traffic and variant traffic, and a chosen confidence level of 99%.
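If you want to sanity-check the numbers yourself, here's a sketch of the standard sample-size formula for a two-proportion test, plus a back-of-the-envelope estimate of how many tests each strategy needs to compound to +20%. One honest caveat: the calculation also needs a statistical power assumption, which we haven't pinned down above - the sketch uses the common 80% default, so the absolute figures may not match our table exactly, but the roughly 3:1 gap between the strategies holds either way.

```python
from statistics import NormalDist
from math import ceil, log

def sample_size_per_variant(baseline, rel_uplift, confidence=0.99, power=0.80):
    """Visitors needed in EACH arm to detect a relative uplift over baseline
    (two-sided two-proportion test; the 80% power is our added assumption)."""
    p1 = baseline
    p2 = baseline * (1 + rel_uplift)
    z_alpha = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    z_beta = NormalDist().inv_cdf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)

# Baseline 10%, 99% confidence, 50/50 split (so traffic per test = 2 * n)
for name, win_prob, uplift in [("A", 0.50, 0.05), ("B", 0.20, 0.10)]:
    n = sample_size_per_variant(0.10, uplift)
    wins_needed = ceil(log(1.20) / log(1 + uplift))   # wins to compound to +20%
    tests_expected = wins_needed / win_prob           # expected tests, losers included
    total_traffic = 2 * n * tests_expected
    print(f"{name}: n per arm ≈ {n:,}, expected tests ≈ {tests_expected:.0f}, "
          f"traffic to +20% ≈ {total_traffic:,.0f}")
# Strategy A ends up needing roughly 3x the traffic of strategy B to reach the same +20%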

In the table below, the sample size required to declare the test statistically significant is denoted by n.

As you can see in the table - the required n's for the two strategies are very different. In plain terms: you'll need a lot more data to detect a modest 5% uplift than a 10% one.

In practice - this means that if you're aiming for a cumulative 20% uplift in conversion rate, following strategy A will take roughly THREE times longer to get there than strategy B, simply because of the amount of data you'll need to conclude each test along the way (721k vs 237k).

In other words - bold bets let you take bigger leaps, faster.

So the conclusion for this week's newsletter is: take bigger swings. Even if you fail more tests, you'll still win bigger and faster in the long run.

TLDR:

If you scrolled all the way down here to get the TLDR - here it is:

There's a persuasive statistical case for going big or going home, especially if your company is still in its early years. Dare to shake things up, play with fire, and run tests that could either set your product/marketing/offer (or whatever you're testing) ablaze or leave it out in the cold.

Also - embrace the risk of failure. Because you have to remember that even if Strategy B only gets you a win in 1 out of 10 tests you run, you'll still be better off.

If you read this far, just know that we appreciate you

As always we want to say - thank you so, so much for taking time out of your busy schedule to spend a few minutes reading our stuff.

If we can help you in any way - then feel free to ask us any question you may have by replying to this mail or DM’ing us on Twitter or LinkedIn.

… and if you want us to help you get some more Twitter / LinkedIn impressions - and want to help us get this newsletter out to many more people - then do the following:

1. Make a post on Twitter or LinkedIn where you share your honest opinion on how you like the newsletter
2. Tag us
3. We’ll comment, repost, and like it on all platforms to thank you for it.

Talk to you again on Wednesday

Until then - stay safe and take care.

Yours truly
Ron & Ash