Written by Brittany Coombs, Demand Generation Manager at Oracle Maxymiser
Optimization testing can help you make important decisions about the experiences you serve your prospects and customers, providing you with data that fuels new marketing insights, confirms old hypotheses, and opens new avenues to conversions and revenue. But all these rewards don’t necessarily come easy. Testing itself has a learning curve—and, trust me, it can be steep!
A common question the Oracle Maxymiser team gets is, “What should I be paying attention to as I build tests for digital experiences?” I’ve collaborated with our friends at Relationship One to explain three basic tenets:
- How to answer the critical questions about goals, traffic, and runtime
- How to choose the right test type for the level of insights you want
- How to help your test reach statistical significance
Answer the Critical Questions
We know, we know—optimization testing is exciting! That’s why we’re in this business. But before you dive into a brand-new test, you must lay out your objectives. Why are you running this test? What do you hope or anticipate it will reveal? How long should you run it? How many users should it touch?
If you try to answer these questions as you go along—well, in the words of the South Park ski instructor, “You’re gonna have a bad time.” So, do the following first:
- Hypothesize: A reasonable, data-driven hypothesis can help you define your test’s goals. This, in turn, can help you identify the KPIs you should analyze and what success will look like in this campaign.
- Know Your Traffic: Your results can’t be statistically significant (more on this later) unless each experience is served to enough people. Size your campaign to the number of users who will realistically arrive to see it. Traffic volume can also dictate whether you run an A/B or multivariate test.
- Plan Your Runtime: This again relates to statistical significance. Your test has to run long enough to generate valid data. Runtime varies with the complexity of your test, so use tools like Oracle Maxymiser’s Test Duration Calculator to get a rough idea and make sure it fits your company’s schedule.
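To make the traffic and runtime questions concrete, here is a minimal sketch of the standard sample-size arithmetic for a two-variant test. The baseline rate, detectable lift, daily traffic, and the 95%-confidence/80%-power constants are all illustrative assumptions, not figures from Oracle Maxymiser:

```python
# Rough sample-size sketch for a two-variant (A/B) conversion test.
# All numbers below are illustrative assumptions.
from math import ceil

def sample_size_per_variant(baseline, lift, z_alpha=1.96, z_beta=0.84):
    """Approximate users needed per variant to detect an absolute `lift`
    over a `baseline` conversion rate (95% confidence, 80% power)."""
    p1, p2 = baseline, baseline + lift
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / lift ** 2)

# e.g., a 5% baseline where a lift to 6% would matter
n = sample_size_per_variant(0.05, 0.01)   # roughly 8,100+ users per variant
days = ceil(2 * n / 3000)                 # runtime at ~3,000 eligible visitors/day
```

Calculators like the Test Duration Calculator do this kind of math for you; the point is simply that traffic and runtime fall out of the same few inputs, so deciding them up front is cheap.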
Choose the Right Test Type
There isn’t enough room in this post to discuss all the differences between A/B and multivariate testing (download this free eBook if you want to get granular), but we can still explore the key points of each methodology.
In A/B testing, two versions of an experience are compared against each other. Your traffic is split between Version A, the control, and Version B, the challenger. The goal of this test isn’t to analyze how the individual parts of an experience work together—e.g., the header image, the copy, the CTA button, the color scheme. It’s simply to observe which version converts more: A or B.
Because it maintains such a high-level, bird’s-eye view of each experience, A/B testing is best for comparing two highly similar experiences with just a small difference between them or two highly dissimilar experiences.
If you and your team like everything about an experience except one element—say, a photo carousel—the former ‘highly similar’ use case is a good fit. You can compare Version A, which has a carousel, against Version B, which doesn’t, and reasonably attribute any change in conversion rate to this specific element’s inclusion or absence.
But say your team is split more drastically: Version A has a carousel, more text, and a big CTA, and Version B has no carousel, less text, and a small CTA. This is the ‘highly dissimilar’ use case. An A/B test won’t show why one version did better. (Was it the fact there was no carousel? Was it because there was less text?) But it may ballpark a good starting point for future improvements, especially if there’s a big difference in conversion rate.
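For a sense of how the 50/50 split itself can work, here is one common approach: deterministic, hash-based bucketing (a hypothetical helper, not Oracle Maxymiser’s actual assignment logic). Hashing a stable user ID keeps each visitor in the same version on every visit:

```python
# Deterministic A/B traffic splitting via hashing (illustrative sketch).
import hashlib

def assign_version(user_id: str, split: float = 0.5) -> str:
    """Return 'A' (control) or 'B' (challenger) for this user.
    The same user_id always lands in the same version."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 10_000
    return "A" if bucket < split * 10_000 else "B"
```

Because assignment is a pure function of the user ID, a returning visitor never flips between versions mid-test, which would otherwise muddy the results.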
Multivariate testing (MVT) dives a lot deeper than A/B testing. Instead of comparing one whole experience against another whole experience, MVT compares the individual elements that comprise those experiences. This lets you see not just which experience is the overall winner, but also how elements interact with other elements in that experience, which lets you make more detailed conclusions about what content works and why it works.
In MVT, you compare at least four experiences because you must test at least two elements, each of which has at least two variants. For example, if you test carousel vs. no carousel at the same time you test big CTA vs. small CTA, the matrix below shows all the versions possible based on element/variant combination:

|                 | Big CTA   | Small CTA |
| --------------- | --------- | --------- |
| **Carousel**    | Version 1 | Version 2 |
| **No carousel** | Version 3 | Version 4 |
You can’t always visualize an MVT campaign with a simple matrix like this, as companies often want to test more than two elements and variants. But hopefully it does help show how microscopically MVT lets you analyze your experiences.
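The combinatorics behind that matrix are easy to sketch: every MVT experience is one combination drawn from each element’s variants. The element and variant names below simply mirror the carousel/CTA example:

```python
# Enumerate every experience an MVT campaign must serve.
from itertools import product

elements = {
    "carousel": ["with carousel", "no carousel"],
    "cta": ["big CTA", "small CTA"],
}
experiences = list(product(*elements.values()))
print(len(experiences))  # 2 x 2 = 4 experiences

# Add a third element and the count multiplies:
elements["copy"] = ["long copy", "short copy"]
print(len(list(product(*elements.values()))))  # 2 x 2 x 2 = 8
```

This multiplication is exactly why MVT demands more traffic than A/B testing: each added element or variant multiplies the number of experiences, and every one of them must reach enough users.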
Reach Statistical Significance
If your test doesn’t generate results you can trust, choosing the right test type is pointless. Enter “statsig”, one of the most important factors in optimization testing. Don’t let the phrase intimidate you: statistical significance is just a measure of how likely it is that your test results did not occur by chance. As statsig goes up, the odds that your data is built on false positives go down.
How do you reach statistical significance? The higher a test’s confidence level, the more certain you can be its results are statistically significant. To raise confidence level, let a test run long enough and touch enough users, which depends on how complex the test is (how many elements and variants it’s comparing). Different companies use different confidence levels, but Oracle Maxymiser recommends clients achieve 95% confidence.
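As a rough illustration of what that 95% threshold means in practice, here is a sketch of a two-proportion z-test, one common way to score an A/B result. The conversion counts are made up, and platforms like Oracle Maxymiser handle this math for you:

```python
# Confidence that B's conversion rate truly differs from A's
# (two-sided, two-proportion z-test; illustrative numbers).
from math import sqrt, erf

def confidence(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = abs(p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))  # two-sided
    return 1 - p_value

# 500/10,000 conversions on A vs. 580/10,000 on B
conf = confidence(500, 10_000, 580, 10_000)
print(conf >= 0.95)  # True: this result clears a 95% bar
```

When a result falls short of your threshold, the honest move is to keep the test running (or send it more traffic), not to declare a winner early.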
For over a decade, Oracle Maxymiser has helped organizations large and small, in a wide range of competitive sectors, use testing and optimization to drive uplift and boost their bottom line. Other testing products guarantee speed and simplicity. Oracle Maxymiser has that, too, but it’s also one of the few testing vendors to approach testing holistically. Check out Oracle Maxymiser’s top scores in Forrester Research’s most recent report!
Want to beef up your digital optimization strategy or implement a more robust, 360-degree view of the customer experience? Contact Relationship One for a product overview of Oracle Maxymiser and sample use cases.