Friday, April 6, 2012

Top 3 Strategies for SEO After Google Panda Update

Google has changed more in the past year than it did in the 12 years prior. Most of the changes are good for honest marketers who just want the ranking their content deserves. But taken together, they radically change search engine marketing (SEM) best practices. James Mathewson (author of Audience, Relevance, and Search: Targeting Web Audiences with Relevant Content) won’t go into every change, because they number in the dozens, but this article discusses three changes every SEM should care about.



As some of you may know, I am IBM’s representative to the Google Tech Council. For those who don’t know, the council is a place where representatives from the leading B2B tech companies sit around a table each quarter and discuss our search challenges with Google representatives. B2B tech companies might have different ways of doing search marketing, but our challenges are common. We all need to rank well in Google for the words and user intents most relevant to our clients and prospects.

Google does its best to programmatically help us solve our challenges. They can’t always help for legal reasons I won’t get into. But where they can, they do. For example, a couple of years ago, several of us complained about the alarming increase in content farms on the search engine results pages. Whether we had organic or paid listings on those pages, content farms caused serious friction for our target audience, and diluted the results we had legitimately earned or paid for.

After that pivotal meeting (though perhaps not because of the meeting), Google began working on ground-breaking changes to its algorithm that would tend to improve the quality of search engine results incrementally over time. The first set of changes was launched in March 2011. Of course, I’m referring to Panda.

Nine months and several Panda updates later, I can confidently say that Google does a much better job with the quality of its search results. I rarely if ever see content farms anymore, and those I do see don’t last long on page 1. Those who think of SEO the way it was primarily conducted prior to these changes—keyword stuffing, buying links on content farms, and participating in commodity link exchange trading—have been left behind.

Panda is perhaps the most profound change to Google’s search engine since PageRank, which was the technology that gave Google its edge. Ironically, it was overdependence on PageRank that led to the series of algorithm changes known collectively as Panda. The practice of spoofing PageRank by swapping or buying links from low-quality sites had grown to such an extent that the results were polluted by them.

Towards an Algorithm that Rewards Quality Content
The Tech Council was not the only place where Google was hearing that it needed to change. Google’s chief competitor—Bing—had taken some of its share, to the point where Google only owned something like 70 percent of the market, down from 80 percent at its peak. The quality of the results had something to do with this.

The problem Google faced is that it had made regular changes to its algorithm over the years to stay one step ahead of the scammers, spammers, and scrapers. It had even introduced continuous A/B testing that gave pages better results if users actually engaged with them. That approach had reached its limits. The A/B tests were simply not getting rid of the pages fast enough. Scammers, spammers, and especially scrapers could publish pages faster than Google could drop them in the rankings. Google needed to undercut these activities once and for all.

How do you change the algorithm again to reward authentic, high-quality content and punish low-quality, spam-riddled content from scrapers? The answer was a revolutionary way of building an algorithm: UX/editorial crowdsourcing combined with machine learning. According to Rand Fishkin, founder of SEOmoz, Google hired hundreds of quality raters—primarily editors and UX specialists—to rate a massive number of pages on the web. It then fed the ratings into a machine-learning program, which recognized patterns and built the algorithm organically.

Machine learning is a technique from artificial intelligence used to analyze complex systems such as natural language. When a computer system is said to learn in this way, it is taught to recognize complex patterns and make intelligent decisions based on the data. Watson, for example, used machine learning to train for the television game show Jeopardy!. By studying the questions and answers of past games, and by practicing in live sessions with past champions, Watson learned the nuances of the game well enough to play above past championship levels.

There are hundreds of patterns or signals that the Panda machine learning program recognizes in how the quality testers rate pages. Notice I used the present tense because this is an ongoing process. Google releases a new version of Panda every two months or so that reranks the entire web based on a new weighting of patterns and signals the machine learning program learns, all stemming from feedback from the quality testers.
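
To make the crowd-sourced-ratings-plus-machine-learning idea concrete, here is a minimal Python sketch. It is not Google’s code: the page signals, the rating scale, and the choice of a random forest are assumptions made for illustration. The shape of the process is the point: human ratings act as labels, page signals act as features, and the trained model can then score pages no rater has ever seen.

    # A minimal sketch (not Google's code) of the crowd-sourced-ratings idea:
    # human raters score pages, and a learner infers which page signals
    # predict those scores. The signal names here are illustrative assumptions.
    from sklearn.ensemble import RandomForestRegressor

    # Hypothetical per-page signals:
    # [words above the fold, keyword density, duplicate ratio, nav links, answers a question (0/1)]
    pages = [
        [220, 0.02, 0.05, 12, 1],   # clear, original page
        [ 40, 0.14, 0.60,  3, 0],   # thin, keyword-stuffed, mostly duplicated
        [180, 0.03, 0.10, 15, 1],
        [ 60, 0.11, 0.45,  4, 0],
    ]
    rater_scores = [4.5, 1.0, 4.0, 1.5]   # 1-5 quality ratings from human testers

    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(pages, rater_scores)

    # The trained model can now score pages no human rater has seen,
    # which is what lets the approach scale to the whole web.
    print(model.predict([[200, 0.025, 0.08, 10, 1]]))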

Though the algorithm is changing in subtle ways all the time, the general trend is to favor the following three areas of digital excellence, described in the next few sections.

1. Design and UX
It is no longer advisable to build text-heavy experiences, which force users to do a lot of work just to ingest and understand the content. Clear, elegant designs that help users achieve their top tasks will tend to be rewarded by Panda. There are dozens of books and sites on user experience (UX) best practices, so I won’t rehash them here. But these principles of good UX won’t lead you astray:

NOTE

Gerry McGovern, Jared Spool, and Jakob Nielsen are three of the leading thinkers in web UX.

  • Keep it simple. The whole experience needs to be easy to use. One thing Panda does is put new emphasis on the UX of entire sites, not just one page within the site. Do you help users navigate once they come to the page? Or does your experience drive your users in circles? Can they get back home if they click your links?
  • Don’t make users work. A page needs to have most of the content “above the fold” or on the first screen view. Don’t make users scroll too much. Don’t make them click just to see more of the content you want them to see.
  • Clarify. A page needs to clearly communicate what it’s about at a glance. You have six to eight seconds to give quality testers a clear idea of what the page is about, who it’s for, and what users can do on it.
  • Don’t shout. You already have their attention. The temptation is to make it so blatantly clear that you use huge text and flashy graphics. Don’t insult users’ intelligence. Just elegantly clarify what the page is about.
  • Don’t hide stuff. The temptation for some designers is to be too elegant, forcing users to mouse over items to make them appear. If users don’t know it’s there, chances are they won’t mouse over it.
  • Emphasize interaction. Sites and pages are not passively consumed. Give users ways to interact and participate in the conversations at the core of the content.
  • Answer user questions. Ask yourself what questions users might have when they come to your page. Learn these questions by analyzing the grammar of the search queries they used to find your content. More and more, users are phrasing their queries in the form of questions. Answer these questions clearly and concisely. (A short sketch after this list shows one way to pull such questions from a query log.)
These and many other design and UX best practices are some of the strongest signals the Panda machine learning algorithm looks for.
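
As a concrete example of the last bullet in the list above, here is a hypothetical Python sketch that pulls question-style queries out of a search-referral report. The sample queries, the question-word set, and the log format are all assumptions; the point is simply to surface the questions your pages should answer.

    # A hypothetical sketch of mining question-form queries from your own
    # search-referral data so a page can answer them directly.
    from collections import Counter

    QUESTION_WORDS = {"how", "what", "why", "when", "where", "which", "who", "can", "does", "is"}

    def question_queries(queries):
        """Return the queries phrased as questions, most common first."""
        questions = []
        for q in queries:
            words = q.strip().lower().split()
            if words and words[0] in QUESTION_WORDS:
                questions.append(" ".join(words))
        return Counter(questions).most_common()

    # Made-up queries standing in for a real referral report.
    sample = [
        "how to migrate a wordpress site",
        "wordpress migration plugin",
        "How to migrate a WordPress site",
        "why is my site not indexed",
    ]
    for query, count in question_queries(sample):
        print(count, query)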

2. Content Quality
One of the main complaints I hear about SEO comes from my editorial colleagues, who say it is just a way of helping poor-quality content climb the rankings at the expense of good-quality content. Their complaint has some validity. Content quality can’t be boiled down to a simple checklist of where to put keywords. So why are factors like keyword placement so important in search engine results?

The answer is, they’re not as important anymore. Panda does not primarily reward traditional SEO best practices. Panda primarily rewards clear, concise, compelling, and original content. Only if two sites are of equal quality in Panda’s eyes will it tend to reward the one that displays traditional SEO best practices. But it is easy to overdo SEO practices.

For example, keywords are strong indicators of relevance to user queries. As paragon users, quality testers look for the words they typed in their queries when they land on a test page. If those words are not clearly emphasized, the page will not tend to get a good score. So having well-emphasized keywords above the fold is an important positive pattern for Panda.

But having a conspicuous number of the same keyword over and over is the sign of bad quality content. So that’s a negative pattern for Panda. Matt Cutts, Google’s organic quality czar, advises page owners to read the copy aloud. If it sounds natural, it should be fine.
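
For illustration, a crude keyword-density check can approximate that negative pattern. The 3 percent threshold below is an assumption for the example, not a Google rule; reading the copy aloud remains the real test.

    # A rough, illustrative check for the "same keyword over and over" pattern.
    import re

    def keyword_density(text, keyword):
        """Fraction of words in the text that are exactly the keyword."""
        words = re.findall(r"[a-z0-9']+", text.lower())
        if not words:
            return 0.0
        return sum(1 for w in words if w == keyword.lower()) / len(words)

    copy = ("Cheap widgets, cheap widgets for sale. Buy cheap widgets "
            "because our cheap widgets are the cheapest widgets.")
    density = keyword_density(copy, "cheap")
    print(f"density: {density:.1%}")
    if density > 0.03:   # assumed threshold, for the example only
        print("Reads like stuffing; rewrite until it sounds natural aloud.")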

The point is, the rules of good quality content are much more important than any simplistic set of SEO rules for pages. Do traditional SEO practices matter? Yes, because they are patterns Panda cares about. But really good content that does not have keywords in every alt attribute or great backlinks will still tend to rank better than marginal content that has all of the attributes of traditional SEO.

Panda also tends to reward fresh content and punish duplicate content. All things being equal, a piece of content will rank higher if it is more recently published. (So pay attention to the date metatag.) One of the signals it looks for is not at the page level but at the site level. A high quality page that sits in a site full of old duplicate junk will not rank well until you clean the junk out.
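
One practical way to find that site-level junk is to crawl your own pages and group those whose visible text is identical. The sketch below uses exact-match hashing for simplicity; real near-duplicate detection would need something like shingling, and the sample pages are invented.

    # An illustrative way to find duplicate pages on your own site:
    # hash each page's visible text and group identical copies.
    import hashlib
    from collections import defaultdict

    pages = {
        "/press/2009-widget-launch": "Acme launches the Widget 3000 today ...",
        "/archive/widget-launch-copy": "Acme launches the Widget 3000 today ...",
        "/products/widget-3000": "The Widget 3000 cuts build time in half ...",
    }

    groups = defaultdict(list)
    for url, text in pages.items():
        digest = hashlib.sha1(text.strip().lower().encode("utf-8")).hexdigest()
        groups[digest].append(url)

    for urls in groups.values():
        if len(urls) > 1:
            print("Duplicate cluster:", urls)   # candidates to consolidate or remove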

In short, Panda rewards good content strategy. Content strategists such as Colleen Jones don’t trust SEO “snake oil.” And rightly so. Good SEO is good content strategy and vice versa.

Finally, one of the dilemmas that I hope gets permanently retired with this article is the false dichotomy between writing for search engines and writing for users. I argued in the book I co-authored for IBM Press that they were essentially the same. The attributes users care about are much the same as the ones search engines care about. And we can use the intelligence we glean from search engines as a proxy for intelligence about our users. Prior to Panda, this was controversial. Post Panda, it is not controversial.

The algorithm is derived from user preferences. The only reason you need a machine to learn those preferences and do the work is the sheer volume of pages and sites on the web. The machine is not quite human, but it is getting closer to human intelligence all the time. And it has something that no individual human has: the collective intelligence of the whole crowd of quality testers. Like Watson, it is smarter than any individual human because it combines the intelligence of all the people feeding it data.

3. Site Metrics
As I mentioned, Google has long rewarded search engine results with high click-through rates, low bounce rates, and high engagement rates by helping them climb the rankings. Panda goes further, making these engagement metrics strong signals in each update. It also continues to raise the sophistication of these metric signals so that they align with the pages rated highly by the quality testers.

For example, Google’s A/B testing didn’t have different standards for different types of experiences. It used a relative standard based on the bounce rates for the words in question. As a result, certain types of experiences for a given keyword tended to rank better over time (ahem, Wikipedia). Yet it makes perfect sense that portals have different bounce rates than single-offer commerce experiences. The more options users can click, the lower the bounce rate, generally speaking. Because humans understand the nuances of different experiences, Panda tends to contextualize these variable metrics values. And it will tune how it weights them over time as the quality testers provide more data.

Another example of this growing sophistication is the level of engagement. If a user clicks through to a page from the search engine results page and then clicks three more times, it counts for more than if she just clicked once. More generally, a site with a high number of engagements per user will tend to rank better over time than one with a single page that converts well and a bunch of dead pages.
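
To tie the two metrics together, here is a small sketch that computes bounce rate and engagements per session from a hypothetical session log. The log structure is an assumption; most analytics packages report these numbers directly.

    # Bounce rate (single-page sessions) and average engagements per session,
    # computed from an invented session log.
    sessions = [
        {"landing": "/guide", "clicks_after_landing": 3},
        {"landing": "/guide", "clicks_after_landing": 0},   # a bounce
        {"landing": "/guide", "clicks_after_landing": 2},
        {"landing": "/promo", "clicks_after_landing": 0},   # a bounce
    ]

    bounces = sum(1 for s in sessions if s["clicks_after_landing"] == 0)
    bounce_rate = bounces / len(sessions)
    avg_engagement = sum(s["clicks_after_landing"] for s in sessions) / len(sessions)

    print(f"bounce rate: {bounce_rate:.0%}")                  # 50%
    print(f"engagements per session: {avg_engagement:.2f}")   # 1.25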

Perhaps the best news of all in this is that you can improve your search rankings just by making incremental improvements to the pages on your site based on the metrics you gather.

Unfortunately, all the changes Panda makes only happen every two months or so. So once you are pushed down in the rankings by Panda, it will take a while to get back up in the rankings. Hopefully, Google will be able to make more frequent updates to Panda in the future so that those penalized by an overaggressive ad executive or an inadvertent UX faux pas can get back into Panda’s good graces more quickly.

Conclusion
Unlike past Google algorithm changes, Panda itself is not changing in any drastic way. It is just getting smarter at recognizing high-quality digital experiences. It’s also getting smarter at recognizing poor quality experiences that look good from a simplistic point of view. If you want to rank well for Google, you will need to invest in building high quality, authentic digital experiences. Given the growing confidence in the Google algorithm, it is a business imperative.
Written By: James Mathewson Source: informit.com
