What's worse than working with no data?
Working with "bad" data.
As marketers, we love to test headlines, calls-to-action, and keywords (to name a few). One of the ways we do this is by running A/B tests.
As a refresher, A/B testing is the process of splitting an audience into groups, showing each group a different variation of a campaign, and determining which performs better.
But A/B testing isn't foolproof.
In fact, it's a complicated process. You often have to rely on testing software to pull the data, and there's a real risk of false positives. If you're not careful, you could draw the wrong conclusions about what makes people click.
So how can you ensure your A/B test is operating correctly? This is where A/A testing comes in. Think of it as a test of the test.
The idea behind an A/A test is that the experience is the same for each group, therefore the expected KPI (Key Performance Indicator) will also be the same for each group.
For example, if 20% of group A fills out a form on a landing page, the expected result is that 20% of group B (who are interacting with an identical version of the landing page) will do the same.
Differences Between an A/A Test and an A/B Test
Performing an A/A test is similar to performing an A/B test: the audience is divided into two similarly sized groups, but instead of directing each group to a different variation of the content, both groups interact with identical versions of the same piece of content.
Here’s another way to think about it: have you ever heard the idiom, "Comparing apples to oranges"? An A/B test does exactly that — compares two different variants of a piece of content to see which performs better. An A/A test compares an apple to, well, an identical apple.
When running an A/B test, you program a testing tool to change or hide some part of the content. This is not necessary for an A/A test.
An A/A test also requires a larger sample size than an A/B test, because detecting a meaningful difference between two identical experiences takes far more data. And because of that large sample size, these tests take much longer to complete.
How to Do A/A Testing
Exactly how you run an A/A test will vary depending on the testing tool you use. If you're a HubSpot Enterprise customer conducting an A/A or A/B test on an email, for example, HubSpot will automatically split traffic to your variations so that each variation receives a random sampling of visitors.
Let's cover the steps to run an A/A test.
1. Create two identical versions of a piece of content — the control and the variant.
Once your content is created, identify two groups of the same sample size you would like to conduct the test with.
2. Identify your KPI.
A KPI is a measure of performance over a period of time. For example, your KPI could be the number of visitors who click on a call-to-action.
3. Using your testing tool, split your audience equally and randomly, and send one group to the control and the other group to the variant.
Run the test until the control and variation hit a predetermined number of visitors.
4. Track the KPI for both groups.
Because both groups are sent to identical pieces of content, they should behave the same, so the expected result is a statistical tie with no clear winner (see the sketch below).
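To make the split-and-track mechanics concrete, here's a minimal Python sketch of steps 3 and 4. The visitor list, the 50/50 random assignment, and the simulated 8% form-fill rate are all illustrative assumptions; in practice, your testing tool handles the assignment and the measurement for you.

```python
# Minimal sketch of an A/A test: random 50/50 split, then per-group KPI tracking.
# Visitor IDs and the 8% conversion rate are made up for illustration.
import random
from collections import Counter

random.seed(42)  # fixed seed so the example is reproducible

visitor_ids = [f"visitor_{i}" for i in range(10_000)]

# Step 3: split the audience at random, so groups end up roughly equal in size.
assignments = {vid: random.choice(["A", "B"]) for vid in visitor_ids}

# Step 4: track the KPI (here, whether each visitor filled out the form).
# Both groups see identical content, so both are simulated at the same 8% rate.
converted = {vid: random.random() < 0.08 for vid in visitor_ids}

group_sizes = Counter(assignments.values())
conversions = Counter(assignments[vid] for vid, did in converted.items() if did)

for group in ("A", "B"):
    rate = conversions[group] / group_sizes[group]
    print(f"Group {group}: {group_sizes[group]} visitors, {rate:.2%} conversion rate")
```

Run it a few times with different seeds and you'll see both groups hover around 8%, with only small random wobble between them, which is exactly the "tie" an A/A test is supposed to produce.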
A/A Test Uses
A/A testing is primarily used when an organization implements a new A/B testing tool or reconfigures a current one.
You can run an A/A test to accomplish the following:
1. To check the accuracy of your A/B testing software.
The intended result of an A/A test is that the audience reacts similarly to the same piece of content.
But what if they don't?
Here's an example: Company XYZ is running an A/A test on a new landing page. Two groups are sent to two identical versions of the landing page (the control and the variant). Group A has a conversion rate of 8%, while Group B has a rate of 2%.
In theory, the conversion rates should be identical. When there is no difference between the control and the variant, the expected result is inconclusive. Yet sometimes a "winner" is declared between two identical versions.
When this happens, it is essential to evaluate the testing platform. The tool may have been misconfigured, or it may simply be unreliable.
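One way to sanity-check a result like Company XYZ's is a standard two-proportion z-test: how likely is an 8% vs. 2% split between identical pages to be pure chance? Here's a rough Python sketch; the 1,000-visitors-per-group figure is an assumption for illustration, not a number from the example.

```python
# Hedged sketch: a two-proportion z-test on Company XYZ's A/A result.
# The 1,000 visitors per group is an assumed figure; use your real traffic.
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return the z statistic and two-sided p-value for a difference in rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)                 # pooled rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))   # standard error
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Assumed traffic: 1,000 visitors per group, 8% vs. 2% conversion.
z, p = two_proportion_z_test(conv_a=80, n_a=1_000, conv_b=20, n_b=1_000)
print(f"z = {z:.2f}, p = {p:.1e}")
```

A p-value that tiny (on the order of one in a billion) says an eight-versus-two split between identical pages almost certainly isn't luck, which is your cue to audit how the tool assigns visitors and records conversions.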
2. To set a baseline conversion rate for future A/B tests.
Let's imagine that Company XYZ runs another A/A test on the landing page. This time, the results of Group A and Group B are identical — both groups achieve an 8% conversion rate.
Therefore, 8% is the baseline conversion rate. With this in mind, the company can run future A/B tests with the goal of exceeding this rate.
If, for example, the company runs an A/B test on a new version of the landing page and receives a conversion rate of 8.02%, a lift that small is very unlikely to be statistically significant.
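As a back-of-the-envelope illustration, here's a quick sketch assuming (hypothetically) 10,000 visitors per group. The p-value comes out near 1, nowhere near the usual 0.05 cutoff, so an 8% vs. 8.02% result reads as noise.

```python
# Quick check of the claim above: with an assumed 10,000 visitors per group,
# is an 8.02% variant really distinguishable from the 8% baseline?
from math import sqrt
from statistics import NormalDist

n = 10_000                     # assumed visitors per group (hypothetical)
conv_a, conv_b = 800, 802      # 8.00% baseline vs. 8.02% new page

p_a, p_b = conv_a / n, conv_b / n
p_pool = (conv_a + conv_b) / (2 * n)
se = sqrt(p_pool * (1 - p_pool) * (2 / n))
z = (p_b - p_a) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))

print(f"z = {z:.2f}, p = {p_value:.2f}")   # p comes out near 1: not significant
```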
A/A Testing: Do You Really Need to Use It?
To run an A/A test, or not — that is the question. And the answer will depend on who you ask. There is no denying that A/A testing is a hotly debated topic.
Perhaps the most prevalent argument against A/A testing boils down to one factor: time.
A/A testing takes a considerable amount of time to run. In fact, A/A tests typically require a much larger sample size than A/B tests: when testing two identical versions, you need a very large sample to detect even a small bias introduced by the tool. As a result, the test takes longer to complete, which can eat into time spent running other valuable tests.
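For a sense of scale, here's a rough sketch using the textbook sample-size formula for comparing two proportions. The 8% baseline, the half-point bias you'd want to catch, and the 80% power target are illustrative assumptions; plug in your own numbers.

```python
# Rough sketch of why A/A tests need so much traffic: the standard sample-size
# formula for comparing two proportions. Baseline rate, detectable bias, and
# power target below are assumptions for illustration.
from math import ceil
from statistics import NormalDist

def sample_size_per_group(p1, p2, alpha=0.05, power=0.80):
    """Visitors needed in each group to reliably detect the difference p1 vs. p2."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# To catch a half-point bias (8.0% vs. 8.5%) at the usual 5% significance
# level with 80% power, each group needs roughly:
n = sample_size_per_group(0.080, 0.085)
print(f"{n:,} visitors per group ({2 * n:,} total)")
```

Under these assumptions the answer lands around 48,000 visitors per group, which makes it easy to see why teams with modest traffic think twice before committing to an A/A test.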
However, it makes sense to run an A/A test in some cases, especially if you are uncertain about a new A/B testing tool and want additional proof that it's both functional and accurate. A/A tests are a low-risk method to ensure your tests are set up properly.
A/A testing can help you prepare for a successful A/B testing program, provide data benchmarks, and identify any discrepancies in your data.
Although A/A tests have their uses, running one should be a relatively rare occurrence. An A/A test can serve as a "health check" for a new A/B testing tool, but given how long it takes to run, it isn't worth repeating for every minor alteration to your website or marketing campaign.