Version A: Crappy starting version.
30 visitors later, Unbounce was telling me:
| Version | Conversion rate | # of visitors | |
|---|---|---|---|
| B | 25% | 30 visitors | Winner with 99% confidence! |
What, you cry out? 99% statistical confidence in just 30 visitors?!
Ask yourself, what was it so confident about?
That Option B was better. Maybe only slightly better, but better.
The test did not tell me that Option B would keep converting at 25%, or that it would stay 15% better than Option A. It told me only that Option B is very likely to outperform Option A in the long run.
Split testing only tells you which option is better, not how much better.
Get it? In a split test, the only number you can really act on is the statistical confidence that one option beats the other. The observed conversion rates, impressions, and click-through rates are not reliable predictions of future performance; only the identity of the winner is. That's why you don't necessarily need big numbers to reach confidence.
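Unbounce doesn't publish its exact formula, but here is a minimal sketch of one common way such "confidence" is computed: a one-sided two-proportion z-test under a normal approximation. The visitor counts and conversion numbers below are hypothetical, chosen only to show how a small sample with a big gap can still reach ~99% confidence.

```python
import math

def confidence_b_beats_a(conv_a, n_a, conv_b, n_b):
    """Confidence that B's true conversion rate exceeds A's,
    via a one-sided two-proportion z-test (normal approximation)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis that A and B are equal
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    if se == 0:
        return 0.5  # no information either way
    z = (p_b - p_a) / se
    # Standard normal CDF expressed through the error function
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Hypothetical split of 30 visitors: 1/15 converted on A, 7/15 on B
print(round(confidence_b_beats_a(1, 15, 7, 15), 2))  # → 0.99
```

Note that the function says nothing about *how much* better B is; a confidence interval on the difference would be far too wide at 30 visitors to be useful, which is exactly the point above.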
Split testing is a tool for learning and improving quickly: it gives you confidence in one option over the other, so you can evolve your page fast and decisively.
If you have a big winner on your hands, split testing will tell you so quickly. So, especially when I'm starting out, I look for big wins first. If my first test, say of a picture or a headline, doesn't reach statistical confidence after 100-200 visitors, I usually scrap the test.
I would rather quickly abandon a version that might have worked better if I ran the test longer, because I can better invest that time in testing other things that might yield a big win. (There’s a balance to be found with sampling error here, but since I’m testing frequently and moving forward with so many unknowns, I accept false negatives in the interest of speed, and address sampling error when I’ve found a hit.)
This is how split-testing gives you actionable results fast.
Thanks Tendayi Viki and Andreas Klinger for reviewing this post.