Search engine optimization (SEO) is an art as well as a science. As with any scientific discipline, it requires rigor. The results need to be reproducible, and you have to take an experimental approach, so that not too many variables are changed at once. Otherwise, you won't be able to tell which changes were responsible for the results.
You can glean a lot about SEO best practices, latest trends and tactics from SEO blogs, forums and e-books. But it can be hard to separate the wheat from the chaff, to know with any degree of certainty that a claim will hold true. That's where the testing of your SEO comes in. Prove what works and what doesn't.
Unlike multivariate testing for optimizing conversion rates, where many experiments can be run in parallel, SEO testing requires a serial approach. Everything must filter through Google before the impact can be gauged. This is made more difficult by the fact that there's a lag between making changes and having the revised pages get spidered, as well as another lag while the spidered content makes it into the index and onto the search engine results pages (SERPs). On top of that, the results delivered depend on the search history of the user and the Google data center accessed.
An experimental approach to SEO: You have a product page with a particular ranking in Google for a specific search term, and you want to improve the ranking and resultant traffic. Rather than applying a number of different SEO tactics at once, start varying things one at a time.
1. Tweak just the title tag and see what happens.
2. Continue making further revisions to the title tag in multiple iterations until the data shows that the tag truly is optimal.
3. Then move on to the headline, tweaking that and nothing else.
4. Watch what happens, and optimize it in multiple iterations.
5. Then move on to the intro copy, then the breadcrumb navigation, and so on.
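The one-variable-at-a-time discipline in the steps above can be sketched as a simple experiment log. This is a minimal illustration, not a real tool; the `SeoTest` record, the page values, and the ranks shown are all hypothetical.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class SeoTest:
    """One single-variable SEO experiment on one page."""
    element: str             # the ONE thing changed, e.g. "title tag"
    old_value: str
    new_value: str
    started: date
    baseline_rank: int       # ranking before the change
    result_rank: Optional[int] = None  # filled in after re-indexing

def keep_change(test: SeoTest) -> bool:
    """Keep the revision only if the ranking improved (lower is better)."""
    if test.result_rank is None:
        raise ValueError("wait for the page to be re-spidered and re-ranked")
    return test.result_rank < test.baseline_rank

# One iteration: tweak only the title tag, nothing else.
t = SeoTest(element="title tag",
            old_value="Kitchen Electrics | Acme",
            new_value="Kitchen Small Appliances | Acme",
            started=date(2024, 1, 15),
            baseline_rank=18)
t.result_rank = 12           # hypothetical rank observed after re-indexing
print(keep_change(t))        # True: rank improved from 18 to 12
```

Because each record changes exactly one element, any movement in `result_rank` can be attributed to that element alone; if the rank worsens, the `old_value` field tells you exactly what to restore.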
Testing should be iterative, not just a "one-off" in which you give it your best shot and you're done. If you're testing title tags, continue trying different things to see what works best. Shorten it; lengthen it; move words around; substitute words with synonyms. If all else fails, you can always put it back to the way it was.
When doing iterative testing, it's good to do what you can to speed up the spidering and indexation so you don't have to wait as long between iterations to see the impact.
You can do this by flowing more link gain (PageRank) to the pages you want to test. That means linking to them from higher up in the site tree, e.g., from the homepage. But be sure to give it some time before forming your baseline.
Or, you can use the Google Sitemaps protocol to set a priority for each page; assigning a priority of 1.0 to a page increases the frequency with which it will be spidered. (Note: Don't make the mistake of setting all your pages to 1.0; otherwise, none of your pages will be differentiated from each other in priority, and thus none will get preferential treatment from Googlebot.)
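In an XML sitemap, that priority hint might look like the fragment below. The domain and paths are placeholders; only the page under active testing gets the top priority, while other pages stay at the protocol's default of 0.5.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- Page under active testing: highest crawl priority -->
  <url>
    <loc>https://www.example.com/kitchen-small-appliances/</loc>
    <priority>1.0</priority>
  </url>
  <!-- Ordinary page at the default, so the test page stands out -->
  <url>
    <loc>https://www.example.com/about/</loc>
    <priority>0.5</priority>
  </url>
</urlset>
```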
Since geolocation and personalization mean that not everyone sees the same search results, you shouldn't rely on rankings as your only bellwether of what worked or didn't.
Other Useful SEO Metrics
Many other meaningful SEO metrics exist, too, including:
* traffic to the page,
* spider activity,
* search terms driving traffic per page,
* number and percentage of pages yielding search traffic,
* searchers delivered per search term,
* ratio of brand to nonbrand search terms,
* unique pages spidered,
* unique pages indexed,
* ratio of pages spidered to pages indexed,
* and many others.
(Some of these off-the-beaten-path metrics are the result of research conducted by Netconcepts for the study Chasing the Long Tail of Natural Search: How to Capture the Unbranded Keyword, published in 2006.)
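Several of the metrics above are simple ratios over counts you already collect from server logs and analytics. The sketch below is illustrative only; the counts are made-up numbers, not data from the study, and the `ratio` helper is a hypothetical convenience.

```python
def ratio(numerator: int, denominator: int) -> float:
    """Guard against divide-by-zero when a site has no data yet."""
    return numerator / denominator if denominator else 0.0

# Illustrative counts (not real data):
pages_spidered = 48_200      # unique pages Googlebot fetched, from server logs
pages_indexed = 31_500       # unique pages appearing in Google's index
pages_with_traffic = 9_400   # pages receiving at least one search visit
brand_visits = 12_000        # visits from searches containing the brand name
nonbrand_visits = 54_000     # visits from generic, unbranded searches

print(f"spidered-to-indexed ratio: {ratio(pages_indexed, pages_spidered):.2f}")
print(f"pages yielding search traffic: {ratio(pages_with_traffic, pages_indexed):.1%}")
print(f"brand-to-nonbrand ratio: {ratio(brand_visits, nonbrand_visits):.2f}")
```

Tracked over time, per test iteration, these ratios reveal shifts (e.g., more pages getting indexed, or the unbranded long tail growing) that a single ranking check would miss.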
But just having better metrics isn't enough. An effective testing regimen also requires a platform conducive to performing rapid-fire iterative tests, where each test can be associated with reporting based on these new metrics. Such a platform especially comes in handy with experiments that are difficult to conduct under normal circumstances.
Testing a category name revision applied site-wide is harder than, say, testing a title tag revision applied to a single page. Specifically, consider a scenario where you're asked to make a business case for changing the category name "kitchen electrics" to the more search-engine-optimal "kitchen small appliances." Conducting the test to quantify the value would require applying the change to every occurrence of "kitchen electrics" across the Web site. A tall order indeed, unless you can conduct the test as a simple search-and-replace operation, which is exactly what could be done by applying it through a proxy server platform.
By acting as a middleman between the Web server and the spider, proxy servers can facilitate useful tests that normally would be quite invasive on the e-commerce platform and time-intensive for the IT team to implement.
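The core rewrite step such a proxy might apply can be sketched as below. This is a naive illustration, not a production proxy: `rewrite_category` is a hypothetical function that a middleware would call on each HTML response before passing it on, and the sample page is made up.

```python
import re

def rewrite_category(html: str, old: str, new: str) -> str:
    """Replace every visible occurrence of a category name, case-insensitively,
    without touching tags, URLs, or attribute values (a naive tag-aware pass)."""
    parts = re.split(r'(<[^>]*>)', html)   # keep tags as separate chunks
    pattern = re.compile(re.escape(old), re.IGNORECASE)
    return ''.join(p if p.startswith('<') else pattern.sub(new, p)
                   for p in parts)

page = ('<h1>Kitchen Electrics</h1>'
        '<a href="/kitchen-electrics/">kitchen electrics</a>')
print(rewrite_category(page, "kitchen electrics", "kitchen small appliances"))
```

Note that the URL in the `href` attribute is left alone; only the visible text changes, which is what makes the site-wide category rename a single search-and-replace rule at the proxy layer rather than an edit to every template in the e-commerce platform.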
Start With a Hypothesis
A sound experiment always starts with a hypothesis. For example, if a page isn't performing well in the engines and it's an important product category, you might hypothesize that the page isn't performing well because it's not well-linked from within your site. Or you might hypothesize that the page isn't ranking well because it's targeting unpopular keywords. Or, still, that it doesn't have enough copy.
Once you have your hypothesis, you can set up a test to gauge its truth. Try these steps:
1. In the case of the first hypothesis, link to that page from the homepage and measure the impact.
2. Wait at least a few weeks for the impact of the test to be reflected in the rankings.
3. Then if the rankings don't improve, formulate another hypothesis and conduct another test.
Granted, it can be a slow process if you have to wait a month for the impact of each test to be revealed, but in SEO, patience is a virtue. Happy testing!