Process Improvement With A/B Testing

Posted 30/05/2016

There’s only so far you can get in improving a process’s efficiency from previous experience alone. Stepping up a gear and using metrics within a structured approach can make a big difference in certain scenarios.

A really easy-to-use method is A/B testing. It’s simple enough to start without any specialised tools, but there’s a multitude of tools available that can help with more complicated experiments and scenarios.

A/B testing is a method for measuring the effect on a desired goal of changing a single item or element. As the name suggests, during one pass through the process item "A" might be used, whilst another pass uses "B". The number of times the goal is successfully reached is recorded against "A" or "B".
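As a concrete illustration, here’s a minimal Python sketch of that idea (the function and variable names are my own, hypothetical choices): each user is assigned deterministically to a variant, and goal completions are tallied per variant.

```python
import hashlib
from collections import Counter

def assign_variant(user_id, variants=("A", "B")):
    # Stable assignment: hash the user id so the same user always
    # sees the same variant across visits and restarts.
    digest = hashlib.md5(user_id.encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# Record goal completions (e.g. sales) against each variant.
results = Counter()
for user_id, reached_goal in [("u1", True), ("u2", False), ("u3", True)]:
    variant = assign_variant(user_id)
    if reached_goal:
        results[variant] += 1
```

Hashing the user id (rather than flipping a coin per visit) keeps the experience consistent for returning users.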

Example uses of A/B testing:

  • Improving a sales funnel (on a website, or other elements of a multi channel approach). This might involve changing an image or text content on a website, or a phone sale being driven from different adverts (where 2 different adverts are identified by 2 different promotional codes).
  • Improving the user experience of a product. E.g. changing the number of options in a menu, the number of steps / size of each step on a form, the size of buttons, or descriptive text copy.
  • Improving a process that involves people or teams. E.g. two similar teams of software developers, where one uses a new tool, technique or process variant. (Note that strictly speaking this is not an A/B test but a form of multivariate test; see below.)

What makes a good test?

For an A/B Test to be effective and reliable, it needs to have a goal that is reached in a clearly identifiable way. The goal should also be the element that you want to improve. For example, in the sales funnel example the goal would typically be that a sale is made.

The results are also more reliable given a larger set of participants and test iterations.
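A rough way to judge whether an observed difference is large enough for the sample size you have is a two-proportion z-test. A stdlib-only sketch (the function name and the figures in the example are illustrative, not from a real test):

```python
import math

def z_test_two_proportions(conv_a, n_a, conv_b, n_b):
    # Two-proportion z-test: is B's conversion rate significantly
    # different from A's, given n_a and n_b participants?
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# 200/1000 conversions on A vs 250/1000 on B
z = z_test_two_proportions(200, 1000, 250, 1000)
# |z| > 1.96 corresponds to significance at the 5% level (two-sided)
```

With smaller samples the same 5-point difference in conversion rate would fall well inside the noise, which is why more participants and iterations make the result more reliable.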

The user experience example is slightly more complicated, since the experience is arguably subjective and internal to the user. In cases like this it can be useful to measure a primary goal and a number of secondary goals / metrics. E.g. the primary goal is that the user gets to the end of the process without dropping out, and secondary measurements are the time taken and the selection of a satisfaction score (clicking smiley / sad faces etc.).


This brings us to segmentation within the test. Both the demographics of the users (male / female / age ranges etc.) and the groupings of test results can help to bring out more insightful results. Capturing this data during the test is critical for the result analysis; however, care should be taken with regard to privacy and data protection laws.

Segmentation can be the difference between a test appearing to show no difference overall, and identifying that the 18 - 25 age group reaches the desired goal 25% less often with "A", whereas 40 - 55 year olds reach it 15% more often.
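Group-level results like these can be produced by keying conversion counts on (segment, variant) pairs. A hypothetical sketch, assuming each test record carries a segment label such as an age band:

```python
from collections import defaultdict

def conversion_by_segment(records):
    # records: (segment, variant, reached_goal) tuples,
    # e.g. segment = age band captured during the test.
    totals = defaultdict(lambda: {"n": 0, "conversions": 0})
    for segment, variant, reached_goal in records:
        key = (segment, variant)
        totals[key]["n"] += 1
        totals[key]["conversions"] += int(reached_goal)
    return {k: v["conversions"] / v["n"] for k, v in totals.items()}

records = [
    ("18-25", "A", False), ("18-25", "A", True),
    ("40-55", "A", True), ("40-55", "A", True),
]
rates = conversion_by_segment(records)
# e.g. rates[("18-25", "A")] == 0.5, rates[("40-55", "A")] == 1.0
```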

A/B Test strategy considerations

  • Ideally A/B tests should be run for just long enough to obtain the results, and no longer. Amongst other reasons, this will allow you to start the next test sooner and try more hypotheses.
  • Run tests for long enough to cover cycles relevant to users. For example, running for more than a week will include weekends, and the effects they might have on results.
  • Choosing appropriate tools will make it easier to setup tests, collect accurate results and visualise them in reports.
  • In areas where A/B testing is more mature or easily enabled such as websites, or digital products; platforms are available that can support built in A/B testing, as well as pushing the successful result to production.
  • For critical processes or products, it’s sensible to apply the A/B test to a subset of users. Pick a percentage that would still allow for a statistically valid test result.
  • In cases where there are not many users available or the traffic through the process is low, the expected improvement to the goal needs to be much higher, or the test run for much longer. Pushing the result of an underpowered test like this into production can lead to unexpected results.
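The last point can be made concrete with a standard per-variant sample size estimate for comparing two proportions (normal approximation; the function name and the rates used below are illustrative):

```python
import math

def sample_size_per_variant(base_rate, min_lift):
    # Rough per-variant sample size needed to detect an absolute
    # lift of `min_lift` over `base_rate`, at a two-sided 5%
    # significance level with 80% power (normal approximation).
    z_alpha, z_beta = 1.96, 0.84
    p1, p2 = base_rate, base_rate + min_lift
    p_bar = (p1 + p2) / 2
    n = ((z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
          + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
         / min_lift ** 2)
    return math.ceil(n)

# Detecting a 1-point lift on a 5% baseline needs thousands of
# users per variant; halving the detectable lift needs far more.
n = sample_size_per_variant(0.05, 0.01)
```

This is why low-traffic tests must either chase larger improvements or run for much longer.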

Comparison with Multivariate Tests

Multivariate testing is similar to A/B testing, but it involves changing multiple variables within an experiment. Use of tools and platform support are more critical to multivariate testing. It’s possible for some platforms to run an experiment, and identify the best combination of variables within the test that optimises the desired result.
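For example, a full-factorial multivariate test enumerates every combination of each variable’s variants, and each combination becomes one experiment arm. A small sketch (the variable names and values are made up):

```python
from itertools import product

# Three variables, two variants each: every combination becomes
# one arm of the multivariate experiment.
headlines = ["Headline 1", "Headline 2"]
button_colors = ["green", "blue"]
images = ["hero", "product"]

arms = list(product(headlines, button_colors, images))
# 2 x 2 x 2 = 8 combinations to split traffic across
```

The arm count grows multiplicatively with each added variable, which is why tool and platform support matter more here than for a simple A/B test.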