By Owen Fay. Posted on August 8, 2022

Early A/B Testing Answers Design Questions Fast

When developing new products or enhancing current products with new or improved features, it’s important to understand your users’ experience while the product or feature is in development. That way, user experience can be built into the product before it goes to market.

Usability testing provides the means to unlock a great user experience. With usability testing, you learn what your target users like and don’t like, what pleases or frustrates them, and what supports or hinders their goal with your product.

What if you have a single design question: Which is better?

When considering alternatives for a single design screen or feature, you can get feedback on users’ preferences using a tool that supports A/B testing. A/B testing, also called “split testing,” is a comparative testing strategy that presents one of two versions of a page, screen, or marketing message to each user: one group sees A, the baseline version, and the other group sees B, the variant.
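The split itself is usually deterministic, so a returning user always sees the same version. As a minimal sketch of one common approach (the experiment name, user IDs, and 50/50 split here are illustrative assumptions, not any specific tool's API), a user ID can be hashed into a stable bucket:

```python
import hashlib

def assign_variant(user_id: str, experiment: str = "homepage-test") -> str:
    """Deterministically assign a user to variant A or B.

    Hashing (experiment name + user ID) gives a stable, roughly
    uniform bucket, so the same user always sees the same version.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100          # 0..99
    return "A" if bucket < 50 else "B"      # 50/50 split

print(assign_variant("user-42"))  # same input always yields the same variant
```

Because assignment depends only on the hash, no per-user state needs to be stored to keep the experience consistent across visits.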

Throughout the test, analytics determine which version leads to more clicks, longer interaction times, or higher conversion rates. Google Optimize is one example of an A/B testing tool; it ties directly to Google Analytics. Companies use the results of their A/B tests to determine whether incremental changes in their designs improve the user experience, resulting in increased user satisfaction and measures of success such as greater conversions.
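To judge whether one version "wins," these tools typically run a significance test on the conversion counts. As an illustrative sketch (the counts below are invented, not from any real test), a two-proportion z-test can be computed with the Python standard library alone:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test: is B's conversion rate different from A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

z, p = two_proportion_z(120, 1000, 150, 1000)  # A: 12.0%, B: 15.0%
print(f"z = {z:.2f}, p = {p:.3f}")
```

A p-value below 0.05 is the conventional threshold for concluding the difference is unlikely to be chance alone.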

A/B testing is a popular tool for interfaces that are already live. But what if you have a single question about a design that is still in development, before it goes live? How do you get a fast answer to a preference question that works like A/B testing? A tool like Poll the People provides a platform for answering product preference questions during development.

The advantage of using A/B testing while your product is in development is that you can get fast answers to user preferences about comparative designs and build the results into your product as it is being developed. This is an increasingly important need for developers because of their use of prototyping tools like Figma or Sketch to create designs that look and feel fully functional before they write any code. With the popularity of agile development cycles, you can get the answer to a design question in minutes.

How can you answer this single design question?

This is where unmoderated testing tools come into play. It’s increasingly helpful to view product or prototype development in a start-to-finish framework, which is why testing at the early, rudimentary stages pays off in later usability testing. Rather than using usability testing as a final vetting process before launch, you can incorporate it at every stage and adopt an iterative design process.

Testing at these early stages of the design process is best characterized as pre-live A/B testing. Live A/B testing on platforms like Google or Facebook is expensive and can take days or weeks to complete. Instead, optimize your live A/B tests by first testing your designs (Figma or otherwise) during development.

Using a platform like Poll the People, you’ll stretch your budget by optimizing the return on each dollar spent. Whether you’re doing a brand refresh, a new logo release, or making any high-stakes decision for your business, you can test your prototypes on an unbiased user panel.

Unmoderated usability testing platforms like UserTesting or UserZoom are quicker than setting up a moderated study, but their speeds pale in comparison to platforms like Poll the People.

How to Use Poll the People

Poll the People is meant to be 1-2-3 and done. No hassle or extended campaigns. The first step is to browse the template gallery. Try to find a use case that matches yours; this way, when you’re creating the test, you can tweak a few words, add your own “Option A” and “Option B”, and it will be ready to go.

Choose your Audience

Next, you select your audience. As with the first step, you can tweak it or leave the default to fit your situation. The default audience size is 100 respondents (but you can choose any number), a middle ground between a large sample size that nearly guarantees statistical significance and a small, inexpensive audience that might not produce reliable results.
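Why 100 is a reasonable middle ground can be checked with a standard sample-size formula for comparing two proportions. As a sketch (the effect sizes are illustrative, and the z-values assume the conventional 5% significance level and 80% power), a large preference gap needs far fewer respondents than a subtle one:

```python
import math

def sample_size_per_group(p1, p2):
    """Respondents needed per variant to detect a preference rate of
    p1 vs p2 with a two-sided test at alpha = 0.05 and 80% power."""
    z_alpha = 1.96   # two-sided, alpha = 0.05
    z_beta = 0.84    # power = 0.80
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2
    return math.ceil(n)

print(sample_size_per_group(0.50, 0.65))  # large gap: modest sample suffices
print(sample_size_per_group(0.50, 0.55))  # subtle gap: many more needed
```

If your two designs differ substantially, a sample around 100 can surface the winner; if the difference is subtle, plan to re-run the test on more respondents.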

The good news is no matter how many respondents you choose, you can always re-run your test with ease on more respondents. You also have the option to target a population that matches your own market. Trying to test between men’s cologne ads? Set your respondents to be men only. The software is built to match your needs, not the other way around.

Launch and Analyze Results

This next part is where the magic happens. The best part: you don’t even have to wait to see it. You’ll start to see your first responses within minutes. These test results will include both qualitative and quantitative data.

The quantitative results will point to a clear winner and tell you whether the difference is due to chance or one of your test options is genuinely performing better among respondents (statistical significance). If we stopped here, this would be similar to a live A/B test: you’d learn which option, A or B, performed better among your target audience. But we’re not stopping there.
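For a preference poll like this, significance can be checked with an exact binomial test against a 50/50 split. A minimal sketch (the vote counts are invented for illustration):

```python
from math import comb

def binomial_two_sided_p(k, n):
    """Exact two-sided p-value for k 'votes for A' out of n respondents,
    under the null hypothesis that A and B are equally preferred."""
    extreme = max(k, n - k)  # fold the symmetric tail
    tail = sum(comb(n, i) for i in range(extreme, n + 1)) / 2 ** n
    return min(1.0, 2 * tail)

p = binomial_two_sided_p(62, 100)  # 62 of 100 respondents preferred A
print(f"p = {p:.3f}")  # below 0.05: unlikely to be a chance result
```

A 62-to-38 split on 100 respondents clears the conventional 0.05 bar, while a 55-to-45 split would not; that is the practical meaning of "reliable results" from a larger panel.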

Qualitative Data

At the end of the day, your brand experience is a human experience, and that is why it is imperative that you hear from real people: how they feel, what stood out, their first impressions, etc. The qualitative analysis comes from asking each respondent, “Why did you choose this answer?” It might seem simple and mundane, but the insights are incredibly valuable.

The qualitative analysis will help you improve your current prototypes and incorporate the elements that stood out to the panelists. Furthermore, the responses give you insight into both of your designs, not just the one that is statistically performing better. Your final design might incorporate the positive elements of both versions A and B, while also giving you material for pitch decks and talking points.

At this point, you can improve upon both designs before launching your live A/B test, which will be expensive but also insightful and beneficial.

Price Benefits

This raises an important question, however: how can my budget accommodate both a live and a pre-live A/B test? Simple. Poll the People is designed to be budget-friendly, for both your fiscal budget and your time budget. You can launch as many or as few tests as you need to answer your design and prototyping questions by isolating one element, like a CTA or color scheme, and running a small-scale, cost-effective test.

The biggest mistake developers can make is not taking action on otherwise actionable insights. The qualitative feedback from a pre-live A/B test is only effective if it is incorporated into the design process. Once it is, you can conduct a moderated usability test to advance development.

Moderated Testing is Conducted in Real-Time

Choosing in-person, face-to-face moderated usability testing as your research method provides the advantage of real-time engagement with your users, but it takes time to set it up. If you use a recruiting company to schedule your participants, you generally need to give them two weeks to screen, recruit, and schedule your participants. If you are recruiting and scheduling participants yourself from your company’s user panel, you can speed up this process a bit, but you still probably need a week to get underway. In addition to the benefit of seeing what users do as they perform tasks with your product, in face-to-face moderated usability testing sessions, you get to read your user’s body language as you engage in the flow of natural conversation.

Remote moderated usability testing also takes time, since you must schedule participants in sessions using collaborative software platforms such as Zoom or Teams, but the timing can be accelerated substantially if you have access to a user panel and can use a calendar program like Calendly to let participants choose their own session times. Unlike face-to-face moderated usability testing, remote usability testing has the advantage of engaging with users anywhere, not just in your location, so your geographic distribution can cover your whole user base. In addition, you get to see the context in which your users use your product, most typically their computer or smartphone.

The advantage of moderated usability testing, whether face-to-face or remote, is that you get to engage with your users while they are engaging with your product. During the session, you can ask questions, probe for insights, and adapt your plan to suit the needs of the situation.

A Case Study Shows How A/B Testing in Development Yields Results

Consider this case study by UX Firm, a leader in UX consulting and user research. A multi-national fast-food chain wanted to test different designs to understand their customers’ preferences for applying rewards to purchases in the mobile app. There were several key concepts they wanted to test to learn which designs gave users a better understanding of how to use rewards when making purchases. They knew that many users of their app were building up rewards points but not redeeming them, and they wanted to understand why. Was it a confusing design? A miscommunication of how to use points to make a purchase? A lack of awareness that redemption rewards were available?

So, they decided to conduct remote moderated usability testing. Using one of the popular platforms for this kind of testing, UX Firm set up a screener to recruit participants who had made purchases using the app, so they knew the participants were current customers. They also set up scenarios in which everyone started with the same first task, but for the second task they alternated between two mid-fidelity Figma design screens: half the participants saw the A design first, then the B design, and the other half started with the B design and then went to the A design. In total, they tested the two versions with 16 participants.
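The alternation described above is a standard counterbalancing scheme, used to keep order effects from favoring whichever design is seen first. A tiny sketch of how the 16 participants could be assigned (the participant labels are illustrative):

```python
# Counterbalance task order: half see design A first, half see design B first.
participants = [f"P{i:02d}" for i in range(1, 17)]

orders = {
    p: ("A", "B") if i % 2 == 0 else ("B", "A")
    for i, p in enumerate(participants)
}

first_a = sum(1 for order in orders.values() if order[0] == "A")
print(f"{first_a} participants start with design A, "
      f"{len(participants) - first_a} start with design B")
```

With an even split, any learning or fatigue effect from the first task applies equally to both designs.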

The result? Inconclusive. There was no noticeable difference in user experience between the two prototypes. So, what were the designers to do? They had to make a choice when user testing indicated no clear winner.

What if They Had Used Poll the People?

What if they had conducted a quick pre-live A/B test before writing any code? Using Poll the People to get feedback about designs in development, UX Firm could have done a quick study to get a lot more than 16 responses. For not much investment of time or money, they could have requested 100 responses or more, with results in minutes, at most hours. Plus, if these results still didn’t indicate a conclusive result, they could have tested on another 20 or 30 participants or whatever number it took to pick a winner.

Poll the People would have also allowed them to isolate and test specific elements that might have contributed to this drop-off in the usage of the rewards program. Any webpage is like an orchestra: there is a multitude of elements — buttons, titles, animations, etc. — that work in harmony to accomplish a greater goal. By isolating each of the elements of designs A and B, this fast-food company could have understood their users’ pain points and gotten a sense of which element would perform better.

Perhaps it was the element that displayed how many total points the app user had accumulated through this rewards program. While this is the most central component of most rewards programs, it might have been lost in the fray of a dynamic dashboard or other confusing details.

By launching a pre-live A/B test, this fast-food company could have gotten feedback on the rewards program dashboard, including main takeaways, first impressions, whether the problems lie with a single component or many components, and other important feedback. With this feedback in hand, they could have improved their designs and then consulted UX Firm to launch a moderated usability test.


Should one of these techniques take the place of the other? That would be a losing strategy; pre-live A/B testing exists to optimize your live test, whether it be a moderated usability test (as in this case study), a live Facebook test, a live Google test, and so on. The best way to create effective content is to pair the two strategies.

Pre-live usability testing can be a very effective tool for marketers, designers, and organizations of all shapes and sizes. It can help identify a design’s most important features, and surface issues, before launch, eliminating the risk of rework or of launching a flawed concept.

If you want to start optimizing your designs and add pre-live testing to your research toolkit, sign up for free and create your first test in minutes.
