6 Steps for Building a Well-Oiled Content A/B Testing Machine

Bar Zukerman
4 min read · Jan 6, 2021

The Content Department at Natural Intelligence runs a robust A/B testing program. With well over 1,500 tests under our belt, we’re happy to share some of the practices that help us maintain an impressive success rate.

If you’re interested in making a real business impact through content A/B tests, follow these 6 steps:

Step 1: Document everything

The first step is to document everything. Documentation can be as simple as an Excel sheet or any other spreadsheet tool. While some members of your organization may find it dreadful, documentation is a crucial process to put in place.

Maintaining a record of all A/B tests that is accessible to all members of your testing program is beneficial for a number of reasons:

  • Learning from successful tests and attempting to replicate similar successes
  • Learning from failed tests and making sure not to repeat them
  • Tracing patterns that can’t be identified as a result of a single test
  • Promoting alignment, transparency, and collective learning within the organization

So what should you document?

  • Serial number
  • Landing page
  • Test type
  • Current status
  • Start date
  • End date
  • Hypothesis
  • Results per KPI
  • Statistical significance level

Or anything else that matters to you.
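One of the fields above, the statistical significance level, is worth a closer look. As an illustration (not code from this article), here is a back-of-the-envelope significance check for a click-through-style test using a two-proportion z-test; the function name and the sample numbers are mine:

```python
from math import sqrt, erf

def p_value_two_sided(clicks_a, visits_a, clicks_b, visits_b):
    """Two-proportion z-test: p-value for the difference between two CTRs."""
    p_a, p_b = clicks_a / visits_a, clicks_b / visits_b
    # pooled proportion under the null hypothesis that both variants are equal
    p_pool = (clicks_a + clicks_b) / (visits_a + visits_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / visits_a + 1 / visits_b))
    z = (p_b - p_a) / se if se else 0.0
    # two-sided p-value from the standard normal CDF
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Hypothetical example: variant B lifts CTR from 2.0% to 2.6%
p = p_value_two_sided(clicks_a=200, visits_a=10_000, clicks_b=260, visits_b=10_000)
```

You would then log the result as significant when `p` falls below whatever alpha your team has agreed on, commonly 0.05.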

Pro tip: It’s important to turn test documentation into an integral part of the test setup process. When you launch a test, document it right away, and when it concludes, log the results in your database at that very moment. This way, it will only take a few seconds every time. When you leave documentation to the very last day of the month, quarter, or even worse — year, it becomes an extensive, dreadful task.
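To make "document it right away" concrete, here is a minimal sketch of what a test log could look like in code, using the fields listed above. The class name, field names, and CSV filename are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass, asdict
import csv

@dataclass
class ABTestRecord:
    # Fields mirror the documentation checklist above
    serial_number: int
    landing_page: str
    test_type: str
    status: str
    start_date: str
    end_date: str
    hypothesis: str
    results_per_kpi: str
    significance_level: float

def log_test(record: ABTestRecord, path: str = "ab_test_log.csv") -> None:
    """Append one test record the moment it launches or concludes."""
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(asdict(record)))
        if f.tell() == 0:  # write a header row only for a brand-new file
            writer.writeheader()
        writer.writerow(asdict(record))
```

Appending a row at launch and updating it at conclusion keeps the few-seconds-per-test habit, rather than the end-of-quarter slog.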

Step 2: Classify A/B tests by “type”

To make the most of your database, come up with a closed list of test types. At Natural Intelligence, our list of test types is informed by either the nature of the copy tested, for instance, “geo-specific content”, or the content element tested, for example, “H1” or “CTA button”.

The ability to filter our A/B test database by test type and see aggregate results in one place helps us make strategic decisions about our test roadmap. This way, we can identify which test types generate a greater impact and are worth our resources, and which don’t make a significant difference and shouldn’t be a priority.
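A filter-and-aggregate view like the one described above can be sketched in a few lines. The field names and sample data below are invented for the example:

```python
from collections import defaultdict

def win_rate_by_type(tests):
    """Aggregate logged tests into a win rate per test type."""
    totals = defaultdict(lambda: {"wins": 0, "total": 0})
    for t in tests:
        bucket = totals[t["test_type"]]
        bucket["total"] += 1
        bucket["wins"] += 1 if t["result"] == "win" else 0
    return {k: v["wins"] / v["total"] for k, v in totals.items()}

# Hypothetical slice of a test database
tests = [
    {"test_type": "H1", "result": "win"},
    {"test_type": "H1", "result": "loss"},
    {"test_type": "CTA button", "result": "win"},
]
print(win_rate_by_type(tests))  # → {'H1': 0.5, 'CTA button': 1.0}
```

A table like this, computed over hundreds of logged tests, is what lets you spot which test types earn a place on the roadmap.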

Step 3: Formulate strong hypotheses

A strong hypothesis ensures that you learn as much as possible from an experiment. Optimizely said it best: “Running an experiment without a hypothesis is like starting a road trip just for the sake of driving, without thinking about where you’re headed and why.”

At Natural Intelligence, we’ve adopted Optimizely’s hypothesis template: “If [variable], then [result] due to [rationale].” The “if” is the content element being modified, the “result” is the predicted outcome, such as an increase in clicks on that element, and the “rationale” demonstrates that you’ve informed your hypothesis with research about your visitors.
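If your test log is code-backed, the template can even be enforced mechanically. A tiny helper (the function name and example values are mine, not part of the template) might look like this:

```python
def format_hypothesis(variable: str, result: str, rationale: str) -> str:
    """Fill Optimizely's 'If [variable], then [result] due to [rationale].' template."""
    return f"If {variable}, then {result} due to {rationale}."

# Hypothetical example hypothesis
h = format_hypothesis(
    "we shorten the H1 to state the benefit first",
    "clicks on the comparison table will increase",
    "evidence from user interviews that visitors skim for the benefit",
)
```

Storing hypotheses in one fixed shape also makes them easy to search and compare later in your database.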

This leads me to my next point:

Step 4: Research your target audience

Strong hypotheses are informed by research. You’d be surprised at the facts and figures you’ll dig up when conducting a little bit of research about your target audience. The more you understand your users and what they are looking for, the more likely you are to test microcopy that corresponds with their needs.

A cheap and quick way to research your target audience is reading real user testimonials, which can be found on brand websites or consumer review sites like Trustpilot. Quora, Reddit, and other social media platforms are also good resources. The benefit of learning about your user base through testimonials is that you’ll see the exact jargon, slang, or buzzwords real consumers use and can echo them in your copy.

More resources include publicly available reports and surveys, Google Trends, your own website or app data, and user polls. Additionally, qualitative research through user interviews can be very valuable.

Step 5: Prioritize. Prioritize. Prioritize.

Running an A/B testing program is pricey. A lot of time goes into ideating experiments, building a strong pipeline, and launching and monitoring multiple tests at once. It’s therefore paramount to use your time and money wisely by never compromising on prioritization.

There are many frameworks out there for how to prioritize A/B tests and build a strategic roadmap, but here are some practices that we follow at Natural Intelligence:

  • While testing on desktop, focus on the area above the fold
  • Opt for test types that tend to generate positive results and high monetary value
  • Run tests that have proven successful on other similar landing pages
  • Avoid tests that have failed across multiple similar landing pages

Where you choose to run your tests is just as important as which tests you decide to run. By focusing on high-visit, high-revenue experiences, your test program is more likely to make a greater impact on business results.
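The "high-visit, high-revenue" focus above can be turned into a rough ranking rule. The traffic-times-revenue weighting below is a simplifying assumption of mine, not a formula from this article, and the page names and numbers are invented:

```python
def prioritize(pages):
    """Rank candidate pages by estimated monthly revenue at stake."""
    return sorted(
        pages,
        key=lambda p: p["monthly_visits"] * p["revenue_per_visit"],
        reverse=True,
    )

# Hypothetical candidate landing pages
pages = [
    {"page": "/credit-cards", "monthly_visits": 120_000, "revenue_per_visit": 0.9},
    {"page": "/vpn",          "monthly_visits": 40_000,  "revenue_per_visit": 0.4},
    {"page": "/meal-kits",    "monthly_visits": 80_000,  "revenue_per_visit": 1.2},
]
```

Richer frameworks (e.g. adding expected lift or effort) refine the ordering, but even this crude score keeps low-stakes pages off the top of the roadmap.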

Step 6: Regularly sync and share knowledge

At Natural Intelligence, we hold a weekly test sync in which team members report on concluded, running, and planned tests. It is also a forum where participants can bring up interesting research findings, insights, and ideas. In fact, some of our most successful tests and strategic testing decisions were born in this very meeting.

Increasing the communication around audience research, test hypotheses, lessons learned, insights, and patterns has contributed to a stronger optimization team and a much more strategic roadmap of high-quality tests.

Now go ahead and make mistakes

Even if you don’t reach your success rate target, don’t feel too discouraged. Failures teach us profound lessons that pave the way towards future success.

The famous American businessman Thomas J. Watson once said: “Would you like me to give you a formula for success? It’s quite simple, really: Double your rate of failure.”
