Manual experiments and testing
Now, if you’re a PPC purist – or you need to use a different source of truth for your experiments – manual options are for you.
Here are a few manual methods that’ll keep you in the driver’s seat:
A/B testing (DIY edition)
You create two variations of an ad or landing page and compare their performance to determine which one works better.
It’s straightforward:
Split your audience.
Run the two versions.
Watch the performance like a hawk.
When the data rolls in, you get to declare the winner.
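Declaring the winner shouldn't be an eyeball judgment. A minimal sketch of the readout step, using a standard two-proportion z-test on conversion rates (the function name and the click/conversion numbers are illustrative, not from any platform):

```python
from statistics import NormalDist

def ab_test_winner(clicks_a, conv_a, clicks_b, conv_b, alpha=0.05):
    """Two-proportion z-test: is one variant's conversion rate
    significantly better, or is the gap just noise?"""
    p_a, p_b = conv_a / clicks_a, conv_b / clicks_b
    # Pooled rate under the null hypothesis that A and B convert equally.
    p_pool = (conv_a + conv_b) / (clicks_a + clicks_b)
    se = (p_pool * (1 - p_pool) * (1 / clicks_a + 1 / clicks_b)) ** 0.5
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-tailed
    if p_value < alpha:
        return ("B" if p_b > p_a else "A"), p_value
    return None, p_value  # no significant winner yet - keep the test running

winner, p = ab_test_winner(clicks_a=1000, conv_a=50, clicks_b=1000, conv_b=90)
```

If the function returns `None`, the honest answer is "keep watching" - calling a winner early is the most common way DIY A/B tests go wrong.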
Sequential testing
This might be your best bet if you have a small audience or niche market.
You run one version of your ad for a while, then switch to a new version.
There are downsides, though:
It takes time.
Seasonality might skew the results.
Still, sequential testing works well for changes like account restructures, where it's difficult to form control and test groups and the whole program needs to change at once.
Geo-split testing
For detail-oriented marketers, you can run two different campaigns targeting different geographic regions.
Want to see if your ad resonates more in Chicago than in San Francisco?
Geo-split testing gives you clear data.
This is one of my favorite test designs as it allows us to use any back-end data (Salesforce, Shopify, sometimes Google Analytics) and helps establish causation – not just correlation.
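The readout from a geo-split is a simple lift comparison between regions, fed by whatever back-end source you trust. A sketch with hypothetical numbers (the cities, session counts, and orders below are made up for illustration):

```python
# Hypothetical geo-split readout from back-end order data:
# one region ran the new creative, the other kept the control.
chicago = {"sessions": 12000, "orders": 540}        # test: new ad
san_francisco = {"sessions": 11500, "orders": 420}  # control: existing ad

rate_test = chicago["orders"] / chicago["sessions"]
rate_ctrl = san_francisco["orders"] / san_francisco["sessions"]
lift = rate_test / rate_ctrl - 1  # relative conversion-rate lift
print(f"Test region:    {rate_test:.2%}")
print(f"Control region: {rate_ctrl:.2%}")
print(f"Relative lift:  {lift:+.1%}")
```

In practice you'd also want a significance check and regions matched on size and seasonality, but the core causal claim rests on this comparison.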
Dig deeper: A/B testing mistakes PPC marketers make and how to fix them
Media mix modeling
One of my favorite analytics initiatives to run with our clients is media mix modeling (MMM).
This model analyzes historical data using data science techniques such as non-linear regression methods, which account for seasonal trends, among other factors.
Because it’s based on historical data, it doesn’t require you to run any new tests or experiments; its goal is to use past data to help advertisers refine their cross-channel budget allocation to get better results.
MMM has lots going for it:
There are robust open-source tools.
It doesn’t rely on cookies, which keeps it privacy-compliant.
It gives you at-scale insights that can lead to transformative growth and performance gains.
It’s broader than any single testing method, but also more high-level: it can answer questions like “Is Facebook incremental?” without running a new experiment.
That said, having an expert at the wheel is important when building and interpreting MMM analyses.
In some cases, dialing up or down specific initiatives might help improve your MMM’s accuracy.
This is especially true if you’ve historically run many evergreen initiatives, which makes it hard to disentangle the impact of multiple initiatives running at once.
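To make the idea concrete, here is a heavily simplified MMM sketch on synthetic data: revenue is modeled as a baseline plus a diminishing-returns (log-saturation) response per channel, fit by regression. Real MMMs add adstock, seasonality, and far more rigor - the channel names and numbers below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
weeks = 104  # two years of weekly history

# Synthetic spend history ($k/week) for two hypothetical channels.
search = rng.uniform(5, 50, weeks)
social = rng.uniform(5, 50, weeks)
# Synthetic revenue: baseline + diminishing returns per channel + noise.
revenue = (100 + 30 * np.log1p(search) + 15 * np.log1p(social)
           + rng.normal(0, 2, weeks))

# Log-transforming spend makes the saturation curve linear in the
# coefficients, so ordinary least squares can fit it.
X = np.column_stack([np.ones(weeks), np.log1p(search), np.log1p(social)])
coef, *_ = np.linalg.lstsq(X, revenue, rcond=None)
base, b_search, b_social = coef

# Marginal revenue of the next dollar in each channel at average spend -
# the number that drives budget reallocation.
mr_search = b_search / (1 + search.mean())
mr_social = b_social / (1 + social.mean())
```

Comparing the marginal returns tells you which channel the next budget dollar should go to - the cross-channel allocation insight the article describes, recovered purely from historical data.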
Dig deeper: How to evolve your PPC measurement strategy for a privacy-first future
Balancing testing, experimentation and performance in PPC
So, should you be testing or experimenting?
The answer is clear: both are essential.
In-platform testing and experiments provide quick results, making them ideal for large-scale campaigns and strategic insights.
Meanwhile, manual experimentation offers greater control, leveraging your first-party data for deeper account optimizations.
As emphasized earlier, the most successful marketing teams allocate resources for both approaches.
If you’re already planning for 2025, consider outlining specific tests and experiments, detailing expected learnings and how you’ll apply the insights to drive success.