Split testing, or A/B testing, is a well-known concept in digital advertising and marketing in general. However, the way I mostly see people do it in practice amounts to tinkering around the edges, rather than delivering the kind of significant improvements you should be after.

Significant Differences versus Minor Differences in Ad Content

Consider these two potential headlines that could be used in a Facebook Ad:

Psoriasis Clinical Study

Help with Medical Research

These two headlines are quite different in tone and content. The first does ‘exactly what it says on the tin’ by promoting a clinical study for psoriasis, whereas the second is more general and softer in tone – encouraging people to be helpful, rather than simply being matter of fact about what the advert is promoting.

Which of these is likely to work best for attracting patients to register to participate in clinical trials? From my experience, I’d say the first is highly likely to perform best. But I only know that through having tested headlines like the first against headlines like the second. And even with the level of experience I have, it’s never certain that the current campaign will perform the same as previous ones – which is why you can only really tell which headline works best by testing them against each other.
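A quick way to judge whether one headline genuinely outperformed another – rather than just getting lucky – is a two-proportion z-test on the click data. This is a minimal sketch, not something from the original workflow; the click and impression figures below are made up purely for illustration.

```python
from math import sqrt

def two_proportion_z(clicks_a, imps_a, clicks_b, imps_b):
    """Z-score for the difference between two ads' click-through rates."""
    p_a = clicks_a / imps_a
    p_b = clicks_b / imps_b
    # Pooled rate under the null hypothesis that both ads perform the same
    p = (clicks_a + clicks_b) / (imps_a + imps_b)
    se = sqrt(p * (1 - p) * (1 / imps_a + 1 / imps_b))
    return (p_a - p_b) / se

# Hypothetical results: headline A at 2.4% CTR vs headline B at 1.6% CTR
z = two_proportion_z(120, 5000, 80, 5000)
print(round(z, 2))  # z ≈ 2.86 – above 1.96, so significant at the 95% level
```

A z-score above roughly 1.96 means the gap is unlikely to be chance at the 95% confidence level; anything smaller and you should keep the test running before declaring a winner.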

Notice that I haven’t suggested testing simple word swaps – e.g. ‘Trial’ for ‘Study’ in the first headline – because differences like these are not significant enough to register with the target audience. (See below for a closer look at this idea.)

Of course, it is possible that people respond better to the word ‘Trial’ than to ‘Study’. But any such preference is highly unlikely to show up in especially large numbers – and large numbers are the kind of significant improvement we’re trying to achieve when we run a split test.

That is, when we first set up our ads and start seeing results, we want to deliver the biggest improvements we can. Only once we’re comfortable that our ‘control’ ad is made up of the best-performing elements should we look at tinkering with smaller changes to squeeze even better results from a winning ad.

Testing out a Minor Difference

Once you’ve determined which type of messaging works best for attracting the right sort of people – i.e. those who will register their interest in participating in your trial (or whatever else you’re looking to achieve with your ads) – you can then move on to testing the sort of minor differences that I often see people concerning themselves with from the outset.

It’s worth reiterating this point – testing minor differences, such as slight word changes or images that are similar but slightly different, will not give you the kind of significant improvement in performance that you’re looking for from your test.

This minor level of testing should only be performed to generate ongoing incremental improvements, once you’re already satisfied that the basics of the imagery and messaging you have in your ads are the right ones for your audience.
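The statistics back this up: the smaller the lift you’re trying to detect, the more impressions you need before a test can separate signal from noise. Here is a rough sketch using the standard sample-size formula for comparing two proportions at 95% confidence and 80% power – the CTR figures are illustrative assumptions, not data from any real campaign.

```python
def required_n(p1, p2, z_alpha=1.96, z_beta=0.84):
    """Approximate impressions needed per ad variant to detect a CTR
    change from p1 to p2 (95% confidence, 80% power)."""
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ((z_alpha + z_beta) ** 2 * variance) / (p1 - p2) ** 2

# Significant difference: CTR moves from 2.0% to 3.0%
print(round(required_n(0.02, 0.03)))   # ~3,800 impressions per ad
# Minor word tweak: CTR moves from 2.0% to 2.1%
print(round(required_n(0.02, 0.021)))  # ~315,000 impressions per ad
```

Roughly eighty times the traffic is needed to confirm the minor tweak – which is exactly why small-change testing only makes sense once a winning ad is already running at volume.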

When I’m talking about a minor difference in imagery, it would be the difference between having a middle-aged man looking to camera versus featuring a different middle-aged man looking to camera. A significant difference, on the other hand, would be a middle-aged man versus an obviously older or younger man, or a middle-aged man versus a middle-aged woman; or featuring someone obviously visiting the doctor versus someone obviously going about their life as normal.

IRB and EC Approvals of Copy

Of course, within the world of clinical trials advertising, when we’re working on a specific trial we have the approval requirements of the IRB/EC (Institutional Review Board / Ethics Committee) to contend with. This means there isn’t a lot of room for adding marketing-led copy to our adverts, so there may not be much in the way of a significant difference to test. In this instance, minor variations may actually prove valuable for attracting a person’s attention. For example, take these two potential Facebook Ad headlines:

Psoriasis Clinical Study

Psoriasis Clinical Trial

– there isn’t enough of a difference between the word ‘study’ and the word ‘trial’ to imagine it would have much of an effect if you were to test one against the other. However, by including the word ‘research’ in the first headline:

Psoriasis Clinical Research Study

– we not only take up more space in the ad with the headline, which may help to attract attention, but we also subliminally reassure the person viewing the ad of the legitimacy of the trial being promoted. (‘Research’ being a word people usually associate with laboratories, academics, clinicians and the like.)

Ongoing Testing

Of course, once you’ve got your ‘control’ ad set up and are testing smaller differences for incremental improvements, it’s always worth trying a major difference from time to time. Audiences and tastes can change, so if your main ‘control’ ad content has been running for a while with only minor testing, I recommend you also bung in an ad that’s totally different in style – maybe a cartoon image compared with the real-people imagery you’re currently using, or a completely different tone of voice in the headline and body copy.

Sometimes I’ve even gone back to a style of ad that was initially well beaten by the current ‘control’ and relaunched it – only to be surprised when it performed better than it did originally, making it worth including in the testing mix again.

Conclusion

When developing split tests for your Facebook Ads and other forms of digital advertising, ask yourself if the differences you’re making are really significant, or if you’re simply tinkering around the edges with such things as replacing one word with another that effectively means the same thing.

It’s only through testing elements that are significantly different that you’ll be able to achieve significant improvements in results. Save the minor changes for a future stage when you’re looking for incremental improvements from an ad that’s already performing well.