A/B testing your pricing sounds great in theory.
Just dream up two pricing schemes, test each with half of your website visitors, and voila! All you have to do is pick the set of prices that performs better.
If only pricing was so simple. While A/B tests are a phenomenal way to refine your product and marketing, they're ruinous to your pricing. Running haphazard experiments is a recipe for angry customers and inaccurate data collection. Solid pricing strategies aren't built on a foundation of guess-and-check.
Let's take a look at the issues with A/B testing your pricing and walk through a rigorous and systematic approach to figuring out what people will pay for your product.
Why A/B Testing Your Prices Sucks
Statistically, the odds of you having enough traffic to draw conclusive results from an A/B test are low.
Even if you do, the results you get from A/B testing are going to be relative—derived from whatever prices you tested before—rather than objective. You're not going to find the best price for your product, just a slightly better one than you had before.
Lastly, A/B testing is fundamentally improvement through guesswork. When a page as sensitive as your pricing page changes based on guesses, you're going to annoy the hell out of your customers.
The concept behind A/B testing is simple, but the math behind it is not. To achieve actual statistical significance—results you can confidently attribute to the change rather than to chance—you need a huge sample size. The required sample size is modeled by the following rule-of-thumb equation (assuming roughly 95% significance and 80% power):

n = 16σ² / δ²

Where δ is the minimum change in conversion rate you want to detect and σ² is the sample variance of your current conversion rate. Let's say your pricing page is doing decently well already—you're getting 10,000 hits per month and converting 5% of that traffic.
If you wanted to test for a 10% lift in your conversion rate—from 5% to 5.5%—you would need a sample size of 30,244.
The math is even worse if you're converting at closer to 2%. If you wanted to test for a 10% lift on your 2% conversion rate, you'd need a sample size of 78,039.
You could run an eight-month A/B test to get those 78,000 hits, but you'd only be aiming for an increase from 2% to 2.2% on your conversion rate. You could spend those eight months much better.
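The arithmetic above can be sketched in a few lines. This is a rough sample-size check using the common rule of thumb n ≈ 16σ²/δ² (for roughly 95% significance and 80% power); the exact figures you get will vary slightly depending on the significance and power levels you assume.

```python
def required_sample_size(baseline_rate, relative_lift):
    """Visitors needed per variant to detect the given relative lift,
    using the rule of thumb n = 16 * variance / delta^2."""
    delta = baseline_rate * relative_lift           # minimum detectable change
    variance = baseline_rate * (1 - baseline_rate)  # Bernoulli variance
    return 16 * variance / delta ** 2

# 5% conversion rate, testing for a 10% lift (5% -> 5.5%)
print(round(required_sample_size(0.05, 0.10)))  # about 30,000 visitors

# 2% conversion rate, same 10% lift (2% -> 2.2%)
print(round(required_sample_size(0.02, 0.10)))  # about 78,000 visitors
```

Note how the required sample size explodes as the baseline conversion rate falls: detecting the same relative lift at 2% takes more than twice the traffic it takes at 5%.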
The math behind A/B testing your pricing page gets even more dire when you start to think about your different buyer personas. Separate tests will be needed for different kinds of customers, so you'll need to be able to segment your traffic by persona—a serious task.
For the vast majority of companies, A/B testing is simply not a viable tactic for drawing statistically significant conclusions about your pricing page. For companies where hundreds of thousands of people do visit their pricing page, it's still not the best way to get better.
Relative vs. Objective
Even if your company's pricing page gets thousands of visitors a day, you're still going to sell yourself short with A/B testing. You don't arrive at the perfect price for your product by making small, incremental improvements to your current one. The hill climbing algorithm, and the local maximum problem it runs into, provides an elegant illustration.
The hill climbing algorithm is an iterative approach to problem solving used when you're not sure what to do. It works like this:
Make an arbitrary change
Check whether the result is an improvement
If it isn't, undo it and go back to 1.
If it is, use it as your new starting point for 1.
Imagine you're a blind hiker trying to climb the hill below, starting from the bottom-left corner:
You'd start walking arbitrarily, constantly assessing the result of each step you take—am I higher now than I was before? Keep going. Am I going back down? Turn back and try again.
This method would be very effective for climbing one hill here—the local maximum on the left—but it would be highly unlikely to lead you to the global maximum on the right.
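The blind hiker can be simulated in a few lines. This is a minimal greedy hill-climbing sketch on a made-up landscape with two peaks (the function and starting point are illustrative, not anything from the article): the climber accepts a small step only when it improves the height, and so it settles on whichever peak is nearest, not the tallest one.

```python
import math

def height(x):
    # A made-up landscape with two hills: a local maximum near x = 2
    # (height ~1) and the global maximum near x = 7 (height ~2).
    return math.exp(-(x - 2) ** 2) + 2 * math.exp(-(x - 7) ** 2)

def hill_climb(x, step=0.1):
    """Greedy hill climbing: keep a change only if it improves the result."""
    while True:
        # 1. Make a small change (a step left or right).
        best = max(x + step, x - step, key=height)
        # 2./3. Check whether the result is an improvement; if not, stop.
        if height(best) <= height(x):
            return x
        # 4. Otherwise, use it as the new starting point.
        x = best

peak = hill_climb(0.0)  # start at the bottom-left of the landscape
print(round(peak, 1))   # settles near x = 2, the local maximum
print(height(7.0) > height(peak))  # the global maximum was never reached
```

Starting from the left, the climber walks up the first slope it finds and stops at the smaller peak; the taller hill on the right is never explored.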
A/B testing can similarly help you find local maxima. But you find global maxima in your pricing strategy by surveying customers, talking to them, and putting in the effort you need to understand willingness to pay. You have to learn about customer valuations, and you can't do that by running a rote algorithm.
Two-Timing Your Customers
Even if we set aside the ethical issues involved in selling to different customers at different price points at the same time, the prevalence of social media means that A/B testing meaningfully different prices is a huge risk.
If you're testing whether visitors will be more receptive to a price of $49.99 or $49.98, it's likely that no one will notice—but any gains will be too small to matter. If you're testing between $5 and $50, the potential reward is big—but the odds of detection (over the course of a statistically significant A/B test with average traffic levels) jump to virtually 100%.
Even if you're testing a more modest difference in pricing, you're at risk of running into the price anchoring problem. Price anchoring is the psychological tendency to value a product according to the first price encountered. If you anchor people at a low price and raise it later, no one will see the increase as reflecting new value. They'll see it as gouging.
You're going to lose sales by turning your pricing page into a chaotic palimpsest of all your various A/B tests. You will lose the customers you win when you try to raise prices because they will have been acquired on faulty premises. Even if you manage to A/B test your prices “under the radar,” you'll still feel the pain eventually. That's because it simply won't produce results.
Improvement By Learning
To implement value-based pricing, test your customers for their willingness to pay using surveys.
Make sure your questions are framed to create as little cognitive load as possible. For instance, ask which features are most important to your users and which are least important, rather than asking for a full list ranked in descending order of usefulness.
When it comes to understanding the actual price points your customers are willing to go to, ask the question you want answered and ask it directly—at what price is our product too expensive? At what price is it getting expensive, but still worth considering?
To improve your response rate, you might be tempted to offer incentives for completing your survey. But these will more often than not be ineffective on the kinds of people whose input you want. Instead, appeal to a collective spirit and include the recipient in the survey itself. Make them feel like they're part of a larger mission to make things better and give them a call-to-action that makes them feel good.
Once you get your responses, start looking through your data for patterns. Take the features regarded as “most useful,” “least useful,” and people's willingness to pay—and start implementing your value-based pricing strategy.
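The pattern-hunting step can start as simply as tallying answers. This is a minimal sketch of aggregating survey responses; the feature names, response structure, and price figures are hypothetical, invented purely for illustration.

```python
from collections import Counter
from statistics import median

# Hypothetical survey responses: each respondent names their most and
# least useful feature and the monthly price at which the product
# starts to feel too expensive.
responses = [
    {"most": "reporting", "least": "integrations", "too_expensive": 60},
    {"most": "reporting", "least": "mobile app",   "too_expensive": 75},
    {"most": "alerts",    "least": "integrations", "too_expensive": 50},
    {"most": "reporting", "least": "mobile app",   "too_expensive": 80},
]

most_useful = Counter(r["most"] for r in responses)
least_useful = Counter(r["least"] for r in responses)
price_ceiling = median(r["too_expensive"] for r in responses)

print(most_useful.most_common(1))  # the feature cited most often as useful
print(price_ceiling)               # median "too expensive" price point
```

Even this crude tally surfaces the shape of a value-based strategy: which features anchor the perceived value, and roughly where the price ceiling sits for this segment.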
What Is A/B Testing Good For?
We're not trying to say that A/B testing is worthless.
A/B testing is a highly effective way to optimize landing pages, design choices, and other site elements when you know the problem you're solving for. For instance, you can try to optimize the checkout button on your e-commerce store if you find that people are abandoning their carts in the middle of the flow.
But pricing is a complex problem. When someone arrives on your pricing page, they're thinking about your product's value, its features, their goals, their budget for the quarter—there are a ton of factors on their mind when they're making that purchasing decision.
True pricing leverage is found when you learn about those factors through rigorous research and determine the value that customers are getting from your product. Your goal here is understanding your customer's willingness to pay, not manipulating them into paying more with a penny-and-decimal-point growth hack. In our book, A/B tests for your pricing get a solid "F."