We’d recently set up a split test for a client on their homepage that I thought would absolutely kill it. There were huge improvements in the copy, design, and overall strategy.
We expected a huge conversion improvement – and I anxiously checked our split test reports every day, waiting for the conclusion.
The First Stage Of Results
The split test did okay. After some exciting early leaps in conversion, the results stabilised and stopped looking so exciting. After 26 days, the improvement over the original homepage was only 4.73%.
“4.73%!?!” You can imagine what I was thinking. We couldn’t believe it, but the data doesn’t lie… we’d only managed to improve conversion by 4.73%. It does prove that one should always test, but considering our usual split test results, I wasn’t too happy.
So, I decided to redo the test – with some strategic tweaks. I re-evaluated the page from top to bottom and found a couple of key things that were worth testing:
- We’d added buttons to the splash to segment the audience, but they sent visitors to pages that weren’t yet optimised for conversion. There were no navigation paths there before. Could these be distracting visitors from converting?
- The form had moved further down the page. While it was still visible, was it still AS visible as the old form?
- We’d tested a new type of CTA on the page. Instead of “Quick Enquiry”, we’d changed to “Free ____ Proposal”. Could this language be wrong for the visitor, and be reducing conversion?
We redesigned the homepage with these questions in mind. The new homepage had the following changes (see the screenshots for an anonymised comparison):
- Removal of buttons from the splash.
- Form higher on the page.
- Addition of an arrow graphic to increase attention to the form.
- Change of call-to-action to “Enquire Now”.
That was all we changed! Everything else remained the same, including all the copy and the other design elements.
Then, we waited for the data to come in…
The Second Stage Of Results
Conversion had now increased by 21.86% when compared to the control – an improvement of 362.15% from the original test. Hooray!
Original Test: +4.73%
New Test: +21.86%
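For anyone wanting to check the arithmetic, the 362.15% figure is the relative improvement of the new test’s lift over the original test’s lift. A minimal sketch in Python (the function name is ours, not part of any testing tool):

```python
def relative_lift(old_pct: float, new_pct: float) -> float:
    """Relative improvement (in %) of the new test's lift over the old test's lift."""
    return (new_pct - old_pct) / old_pct * 100

# Lift over the control from each test, in percent
original_test = 4.73
new_test = 21.86

print(f"{relative_lift(original_test, new_test):.2f}%")  # ≈ 362.16%, matching the figure above to rounding
```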
This was a really interesting example of why you should always test – and why you shouldn’t give up after one test. If we hadn’t tested at all, we never would have known it was only a slight improvement.
Our initial hypotheses were “sort of” right (it did improve conversion…just not by much…) – but the next round of testing proved to be much more beneficial. With a few strategic changes, we were able to get much better results. A quick win for the client and for us!
(Edit: We’ve continued split testing with this client since this test concluded. We used what we learnt here in the next round – and our next test increased conversion by 210%! A good reminder of why it’s important to KEEP testing.)