You don’t have to wait until you have a fully engineered product in the market to get feedback from customers for the first time. Split testing, or “A/B testing,” is a hugely popular marketing tactic for optimizing collateral and improving conversion funnels early in a campaign, before making major investments in any particular direction. Similarly, product teams can run tests using alternative prototypes or mockups to save time and money while also unearthing potential for innovation – all before writing a single line of code!
Follow this 5-step split testing process and accompanying example to generate qualitative and quantitative data with prototypes:
- Formulate a hypothesis: Begin with a decision that needs to be made, and work backward toward the assumptions or questions that need to be tested or answered in order to be as informed as possible. You have limited resources and stakeholders simultaneously clamoring for a number of features. You can’t do everything. You have to choose among alternatives, each of which has its own merits. Frame the assumptions and success criteria that will ultimately drive the decision you need to make.
- Create alternative prototypes or mockups: Once you have a clearly defined hypothesis, you have to create stimuli that will generate the user feedback you need to validate or invalidate your hypothesis. If you are prioritizing features on a roadmap, create a prototype that highlights the feature along with a prototype that doesn’t include the feature. If you are determining which possible implementation of a feature would perform best, create a prototype of each.
- Source your target audience: Once your prototypes are ready, you’ll need access to your target audience to generate feedback. You’ll need a small sample for qualitative testing and a larger sample for quantitative testing. Depending on what you’re looking to learn, you may want to stagger and reorder qualitative and quantitative testing to maximize insights.
- Generate authentic feedback: Alpha’s rapid consumer feedback platform has best practices built into testing, but the rule of thumb is to do your best to simulate what would be an authentic shopping or evaluation experience. In the real world, customers usually have multiple options when making a purchasing decision. They’re usually in a specific mindset when shopping for certain types of offerings. Your testing methodologies should simulate these experiences and mental models, and include hard-hitting questions that generate authentic behaviors and responses.
- Objectively evaluate results: Evaluating test results is often just as difficult as, if not more difficult than, generating the data in the first place. I recommend two techniques to mitigate bias. First, never rely on a single data point to make a decision. Iteration, robustness, and replicability are the most powerful weapons in your arsenal. Second, try to separate the ideation process from the evaluation process. Don’t let individuals who are emotionally invested in ideas also be the ones to evaluate the test results.
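One common way to keep that last step objective is a significance test on the conversion counts each variant produced. The sketch below uses a standard two-proportion z-test; the counts are hypothetical and the function name is my own, not part of Alpha's platform:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is variant B's conversion rate
    significantly different from variant A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)          # combined rate
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical results: 120/1000 signups for A vs. 150/1000 for B
z, p = two_proportion_z(120, 1000, 150, 1000)
print(f"z = {z:.2f}, p = {p:.3f}")
```

A p-value below your pre-agreed threshold (often 0.05) suggests the difference between variants is unlikely to be noise; deciding that threshold before looking at the data is part of separating ideation from evaluation.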
Sometimes techniques are easier to follow when you see them in practice! So here’s an example of testing we independently conducted at Alpha for a well-known product: the Chase Sapphire Reserve card. It’s the most popular credit card ever launched, and reignited debates around ‘churners’ – people who sign up for a credit card to collect bonus perks, and then cancel the card. Churning is one of the most notorious challenges in the industry, and costs credit card issuers millions every year.
Our hypothesis was that credit card companies could leverage behavioral economics to increase the perceived value of their offerings while simultaneously mitigating the risk of churn. They could do this by spreading out rewards over a longer period of time, so that the total amount earned per customer could be higher – but only if customers didn’t churn. We recognized that this change could pose other business challenges, especially around predictive financial modeling, but that testing could still generate meaningful and actionable insights.
Using Alpha’s built-in designer network, we requested alternative landing page mockups and had them ready within 12 hours. Variant A was a typical card with a 50,000-point signup bonus; Variant B offered annual bonuses of 20,000 points; and Variant C offered 3,000 points every three months. From a purely mathematical standpoint, and ignoring other factors, the payouts equalize as follows: 1 year of Variant A = 2.5 years of Variant B ≈ 4.17 years of Variant C
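The break-even arithmetic is easy to check. This snippet simply encodes the three reward schedules above and computes how long each recurring variant takes to match Variant A's one-time bonus:

```python
# Point values mirroring the three variants:
# A: 50,000 points up front; B: 20,000 points per year;
# C: 3,000 points per quarter (12,000 per year).
bonus_a = 50_000
annual_b = 20_000
annual_c = 3_000 * 4

years_b = bonus_a / annual_b   # years of B to match one A bonus
years_c = bonus_a / annual_c   # years of C to match one A bonus
print(f"1 year of A = {years_b} years of B = {years_c:.2f} years of C")
```

Variants B and C only pay out more than A in total if the customer stays past those break-even points – which is exactly the retention behavior the hypothesis targets.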
Using Alpha’s programmatic access to audiences, we sourced people who had recently signed up for a credit card. We randomly split them into three segments, each of which would see just one of the offers before being asked the same series of questions.
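Alpha's platform handles the split internally, but as an illustrative sketch (the respondent IDs, variant names, and function are hypothetical), a random three-way assignment looks like this:

```python
import random

def assign_variants(respondent_ids, variants=("A", "B", "C"), seed=42):
    """Randomly split a respondent pool into equal-sized segments,
    one per variant, so each person sees exactly one offer."""
    ids = list(respondent_ids)
    rng = random.Random(seed)  # fixed seed for a reproducible split
    rng.shuffle(ids)
    # Deal shuffled IDs round-robin into one segment per variant
    return {v: ids[i::len(variants)] for i, v in enumerate(variants)}

segments = assign_variants(range(300))
print({v: len(members) for v, members in segments.items()})
```

Randomizing the assignment (rather than, say, splitting by signup date) keeps demographic and behavioral differences evenly distributed across segments, so differences in responses can be attributed to the offer itself.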
By asking the same series of questions to each segment, we were able to generate actionable and comparative insights for the offerings. There are hundreds of available credit cards out there, so testing can’t be done in a vacuum. Your data is much more informative when it compares alternatives rather than measuring absolute values.
In our example, Alpha’s data analysis capabilities evaluated the results and validated our hypothesis: perks spread out over time did indeed increase the perception of value while simultaneously reducing planned churn.
Keep in mind that this is just one of many possible applications for split testing. Clients have run thousands of tests on our platform, including:
- Testing brand value: How would user reactions change to the exact same app with different branding? Does a recognized brand add to or subtract from the perceived value compared to establishing a new identity?
- Prioritizing features on a roadmap: Which feature would add the most monetizable value to a product?
- Focusing on a niche audience: Instead of split testing prototypes, you can split test audiences to determine which demographic and behavioral factors contribute most to buying preferences.
- Entering a new market: How do language, culture, and geography impact user preferences and behaviors?
Split testing prototypes can deliver actionable data without having to put your brand at risk, invest in engineering, or navigate internal politics. It’s a cost-efficient methodology that leading product teams are using to build better products faster.