The Opportunity: Black Friday/Cyber Monday


Chris Bretschger

November 8, 2017

Black Friday is on the horizon, and for many of our clients, this is make-or-break time for the year. Last year alone, $1.93 billion was spent online in the U.S., nearly a 12% increase from the year prior, and projections point to continued growth in 2017. Cyber Monday beat out Black Friday with nearly $3.5 billion in sales last year, and we expect to see the same trend this year.

With that being said, is there anything you can do between Black Friday and Cyber Monday to make the most out of this high-stakes weekend? Of course there is. With conversion volume increasing during this period, it's an ideal time for A/B testing, yet we see pitfalls in the campaigns we take over that can lead to real losses during this competitive season.

The Shortcoming:

A/B testing is a term thrown around lightly throughout the industry. It's as often a product of indecision as it is a vehicle for finding increased effectiveness. The latter is where the greatest opportunity lies, and also the most worrisome pitfalls.

I’m sure for many folks this conversation seems familiar:

A: “We need to make sure we’re doing all we can to push for a strong year-end.”
B: “Absolutely, let’s test our messaging and see what has stronger results!”
A: “Sounds great — get on it!”

And the result is a flurry of work that often shows little to no discernible outcome. The results are typically read as one message "edging out" the other, though more often than not they are this close because the experiment was set up improperly.

Setting up the Proper Test:

Many of the tests we analyze as a third party end in an inconclusive or "too-close-to-call" result, and after enough of these, marketers can become discouraged from exploring the real power of A/B testing. More often than not, however, the effort to uncover the power of your message is undermined by a fairly simple mistake: your testing pool.

The primary shortfall of A/B tests in the industry is that creative is split by impression, not by individual. This creates the following problems:

– Users can see both of the creatives that are in market
– When that user eventually converts, most attribution models in use don't know how to properly attribute the value of that conversion
– Whatever rule was set up to drive the model is arbitrary

To test for the actual difference in performance, it's vital to split your testing groups not by impression but by user. This requires coordination with your media or programmatic team as well as your creative team.
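In practice, one common way to make the split by user rather than by impression is deterministic bucketing on a stable identifier, such as a cookie or device ID, so the same person always sees the same creative. Here is a minimal sketch of the idea in Python; the identifiers and experiment name are illustrative, not tied to any particular ad platform:

```python
import hashlib

def assign_variant(user_id: str, experiment: str = "bfcm-message-test") -> str:
    """Deterministically assign a user to variant A or B.

    Hashing the user ID (salted with an experiment name) means the same
    person always lands in the same group, no matter how many impressions
    they see over the weekend.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # stable 0-99 bucket per user
    return "A" if bucket < 50 else "B"

# Every impression served to this user gets the same creative.
print(assign_variant("cookie-8c21f"))  # e.g. "B", and "B" again tomorrow
```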

Finding the Results:

Once the test is properly set up, it's time for the results.

A proper test is set up so that you're structurally ready to analyze the results as soon as the first impression is served, but make sure your sample size is large enough before making any rash decisions about which way to move forward.
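How large "large enough" is depends on your baseline conversion rate and the smallest lift you care about detecting. Here is a rough sketch of the standard two-proportion sample-size estimate; the 2% baseline and 10% relative lift below are placeholder assumptions, not benchmarks:

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_variant(baseline_rate: float, relative_lift: float,
                            alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate users needed in each group to detect the given relative lift."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
    z_beta = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return ceil(numerator / (p2 - p1) ** 2)

# Placeholder numbers: 2% baseline conversion rate, 10% relative lift.
print(sample_size_per_variant(0.02, 0.10))  # roughly 80,000 users per variant
```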

Once you have the right sample, find a winner, and make sure your next test is in queue.
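Finding a winner then comes down to checking that the gap in conversion rate between the two user groups is bigger than chance alone would explain. A simple sketch of a two-proportion z-test, with made-up counts for illustration:

```python
from statistics import NormalDist

def two_proportion_z_test(conv_a: int, users_a: int, conv_b: int, users_b: int) -> float:
    """Return the two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / users_a, conv_b / users_b
    p_pool = (conv_a + conv_b) / (users_a + users_b)
    se = (p_pool * (1 - p_pool) * (1 / users_a + 1 / users_b)) ** 0.5
    z = (p_a - p_b) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Illustrative counts only: variant A converts 820 of 40,000 users, B 910 of 40,000.
p_value = two_proportion_z_test(820, 40_000, 910, 40_000)
print(f"p = {p_value:.3f}")  # below 0.05 would suggest B's edge is real, not noise
```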

Following a healthy testing strategy leads to a happier holiday for all.

