Conversion Rate Optimisation (CRO) can be extremely difficult for low-traffic websites, where gathering enough data to make informed decisions is a challenge. The lower the traffic, the longer it takes to get results back, and the less effective your efforts can be, as you want to avoid running multiple tests on a website at once.
For example, if you needed 1,000 visitors to get enough data for a test and you only get 1,000 visitors a year, it would take a year to run that single test. If you had 1,000 visitors each month, you could run 12 tests in the same time.
When working out how much data you need, it can help to look at your current level of conversions and work backwards from there. We'd say you want a minimum of 20 conversions per test, and we'd recommend running tests for around a month whenever possible. So if your conversion rate is currently 2%, you'd need 1,000 visitors each month, split between the control and the variant (10 conversions each).
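That back-of-the-envelope calculation can be sketched as a small function. Note this only applies the article's rule of thumb (a minimum number of conversions, split evenly between two versions); it is not a formal statistical power calculation, and the function name is our own.

```python
def visitors_needed(conversion_rate, min_conversions=20):
    """Rough visitors needed to reach a minimum number of conversions.

    A rule-of-thumb check, not a power calculation: the 20-conversion
    minimum is the article's suggested floor for a roughly month-long test.
    Returns (total visitors, visitors per variant) for a two-version test.
    """
    total = min_conversions / conversion_rate
    per_variant = total / 2  # split evenly between control and variant
    return total, per_variant

total, per_variant = visitors_needed(0.02)  # current conversion rate of 2%
print(total, per_variant)  # 1000.0 visitors in total, 500.0 per variant
```

At a 2% conversion rate this reproduces the figures above: 1,000 visitors in total, 500 to each version, giving roughly 10 conversions per variant.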
However, the more data you have, the more reliable the test results will be, so you should always try to gather as much data as possible.
In an ideal world, A/B split testing should always be used to find out whether Conversion Rate Optimisation is working. This means running two versions of a page at the same time, with different visitors sent to different versions. The versions are:
A: The control – this is what the page was originally like
B: The experiment – this is the page with the changes you want to test
You can add more variations, but more traffic will be required to determine which version performs best.
Because you collect both sets of data at the same time, you get a direct comparison of which version is performing better. Running them simultaneously also reduces the chance of external factors, such as seasonality or your brand strengthening over time, skewing your test results.
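To judge whether the difference between the two versions is real rather than noise, a standard approach is a two-proportion z-test. The sketch below, with made-up example figures, is one common way to do this check; the function and numbers are ours, not the article's.

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test comparing control (A) against variant (B).

    Returns the z statistic; as a rough guide, |z| > 1.96 corresponds
    to 95% confidence that the difference in conversion rate is real.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled conversion rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical month: 500 visitors per version, 10 vs 22 conversions.
z = two_proportion_z(10, 500, 22, 500)
print(round(z, 2))  # about 2.16, above 1.96, so likely a real improvement
```

With the low visitor numbers discussed above, it often takes a fairly large difference between the versions before the result clears this bar, which is another reason more data makes the test more reliable.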
If traffic is low, there's also the option of on/off testing, where you run one version for a period and then switch to the other. However, these tests can be affected by external factors and may not give the most reliable data.
There are advantages to both approaches, and your current position and your goals will ultimately determine which is best for you.
By making lots of small changes, you'll likely see improvements over time, and with each test you'll know exactly how the changed element affected the result. This approach suits sites that already have a good conversion rate and are happy to see results gradually.
Making big changes each time makes it less clear which elements are making the page perform better or worse, but the test is more likely to make a bigger overall difference. This makes sense if you currently have a poor conversion rate and want results fast. The downside is that if the test comes back with negative results, you won't be clear on why.