The Myth of W3C Compliance? Some common misconceptions…

I just received a pretty cool newsletter article this morning about a very touchy subject, especially among coders. While I am certainly not a web programmer and prefer to stick to making the pretty pictures and being the PR guy, I’ve managed to get myself involved in quite a few heated discussions about this. I figured I’d share the article; it’s a great read, put together by Sasch Mayer, a very experienced technical writer for IceGiant.

The Myth of W3C Compliance?
By Sasch Mayer

The past few years have seen a huge increase in the number of search engine optimisers preaching about the vital importance of W3C Compliance as part of any effective web promotion effort. But is compliant code really the ‘Magic SEO Potion’ so many promoters make it out to be?

For those of you not familiar with the term: a W3C compliant web site is one which adheres to the coding standards laid down by the World Wide Web Consortium, an organisation comprising over 400 members, including all the major search engines and global corporations such as AT&T, HP and Toshiba, amongst many others. Headed by Sir Timothy Berners-Lee, the inventor of the World Wide Web, the W3C has, since its inception in 1994, been working to provide a set of standards designed to keep the web’s continuing evolution on a single, coherent track.

Whilst the W3C has been a fact of life on the web since this time, general industry awareness of the benchmarks set down by the Consortium has taken some time to filter through to all quarters. Indeed, it is only within the past 24 to 36 months that the term W3C Compliance has emerged from general obscurity to become a major buzzword in the web design and SEO industries.

Although I have personally been a staunch supporter of the Consortium’s standards for a long time, I cannot help but feel that their importance has been somewhat overplayed by a certain faction within the SEO sector, which praises code compliance as a ‘cure-all’ for poor search engine performance.

Is standards compliance really the panacea it is commonly claimed to be these days?

Let’s take a quick look at some of the arguments most commonly used by SEOs and web designers:

1. Browsers such as Firefox, Opera and Lynx will not display your pages properly.

Browser compatibility is possibly one of the most frequently cited reasons for standards compliance, with Firefox being the usual target of these claims. Speaking from personal experience, Firefox will usually display all but the most broken code with reasonable success. In fact, this browser’s main issue seems to lie less with any inability to deal with broken code than with its occasional failure to interpret the exact on-screen position of layers (div tags) correctly, even when they are expressed correctly; this often causes text to overlap.
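
To make the point about layers a little more concrete, here is a minimal, purely illustrative sketch (the pixel values and text are invented for this example, not taken from any real site) of two absolutely positioned divs. The markup itself is perfectly valid, yet a browser that gets the first box’s rendered height slightly wrong can still end up drawing the second block of text on top of it.

```html
<!-- Illustrative only: two valid, absolutely positioned "layers" (divs).
     Because the second box is placed by a fixed pixel offset rather than
     by normal document flow, any miscalculation of the first box's height
     makes the two blocks of text overlap on screen. -->
<div style="position: absolute; top: 20px; left: 20px; width: 300px;">
  A long introductory paragraph that may wrap onto several lines,
  depending on the font size the browser actually uses.
</div>
<div style="position: absolute; top: 60px; left: 20px; width: 300px;">
  A second block of text, positioned 40 pixels further down and
  therefore directly on top of the first block if that one grows.
</div>
```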

What about Lynx? Interestingly enough, whilst it is somewhat more fragile than Firefox, most of the problems encountered by this text-only browser seem to stem from improper content semantics (paragraphs out of sequence) rather than from poor code structure.
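
As an illustration of what ‘paragraphs out of sequence’ can look like, consider the following hypothetical snippet (the content and pixel values are invented for the example). The markup validates, but because the visual order is produced by CSS positioning rather than by source order, a text-only browser such as Lynx reads the conclusion before the introduction.

```html
<!-- Illustrative only: valid markup whose source order is the reverse of
     its visual order. A graphical browser shows the introduction at the
     top of the page; Lynx simply reads the file from top to bottom and
     therefore presents the conclusion first. -->
<div style="position: absolute; top: 220px; left: 20px;">
  <p>In conclusion, the product comfortably exceeded our expectations.</p>
</div>
<div style="position: absolute; top: 20px; left: 20px;">
  <p>This review introduces the product and explains how we tested it.</p>
</div>
```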

2. Search engines will have problems indexing your site.

Some SEOs actively claim that search engine spiders have trouble indexing non-compliant web pages. Whilst, again speaking from personal experience, there is an element of truth to these claims, it is not the sheer number of errors which causes a search engine spider to have a ‘nervous breakdown’, but the type of error encountered. So long as the W3C Code Validator is able to parse (*) a page’s source code from top to bottom, a search engine will likely be able to index it and classify its content. On the whole, indexing problems arise from code errors which prevent a page from being parsed altogether, rather than from non-critical errors which allow the process to continue.

* To parse is to process a file in order to extract the desired information. Linguistic parsing may recognise words and phrases or even speech patterns in textual content.
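
By way of illustration, the two hypothetical fragments below (invented for this article, not taken from any real site) show the difference between the two kinds of error. The first contains non-critical validation errors that still leave the page readable from top to bottom; in the second, an unterminated comment swallows everything that follows it, so a validator or spider cannot get past that point.

```html
<!-- Fragment 1: invalid, but still parseable from top to bottom.
     The image has no alt attribute and the <b> element is never closed;
     the validator reports both, yet the text remains readable. -->
<p>Welcome to our <b>widget shop.</p>
<img src="logo.gif">
<p>All of this text can still be extracted, indexed and classified.</p>

<!-- Fragment 2: a critical error.
     The comment below is never closed, so everything after it is treated
     as part of the comment and effectively disappears from the page. -->
<p>Product description follows.</p>
<!-- TODO: tidy this section up
<p>This paragraph, and anything that comes after it, is lost.</p>
```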

3. Disabled internet users will not be able to use your site.

The inevitable, but somewhat weak, counter-argument to this point is that only a vanishingly small percentage of internet users are visually or aurally impaired. However, it is a fact that a text-only browser such as Lynx, or a screen reader such as JAWS (no, not the shark), will view a web page’s code in much the same way as a search engine spider does. From this perspective, we once again return to the difference between critical and non-critical W3C compliance errors. As long as whatever tool, browser or spider is used to extract text content from a page’s code is able to continue its allotted task, the user is likely to be able to view the page in a satisfactory manner.
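
To illustrate the sort of markup that keeps this text-extraction path open, here is a small, hypothetical example (the file names and headings are invented). A screen reader such as JAWS, a text browser such as Lynx, or indeed a search engine spider relies on the alt text, the headings and the plain link text rather than on the images themselves.

```html
<!-- Illustrative only: markup that degrades gracefully when reduced to
     plain text. The image carries alt text and the navigation is an
     ordinary list of text links rather than an image map, so nothing of
     substance is lost when the page is read aloud or rendered in Lynx. -->
<h1>Acme Widgets</h1>
<img src="widget-photo.jpg" alt="The Acme Mk II widget, photographed from the front">
<ul>
  <li><a href="products.html">Products</a></li>
  <li><a href="contact.html">Contact us</a></li>
</ul>
```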

Interestingly, one of my fellow designer/SEOs over in Japan has just run an experiment entitled “W3C Validation; Who cares?” testing the overall importance of W3C compliance to long-term web promotion efforts. Whilst the results of this, the world’s most non-compliant web page, do initially indicate that compliance does not make much of a difference to a search engine’s ability to index and classify a web page, I do rather suspect that further research may be needed in order to establish the long-term effects of this experiment.

At the time of writing, however, the page ranks well with Google for the following two non-specific search terms: “Does Google care about validation” and “Google care validation” – not bad for a page which is supposed to be utterly and completely un-indexable. What, then, is the answer to the W3C compliance conundrum?

In conclusion, I would say that ignoring the World Wide Web Consortium’s standards at this stage may well have negative consequences in the long term, as the internet’s continuing evolution is likely to place greater emphasis on good coding practices. Having said this, the current value of W3C compliance has, in my view, been overplayed by some professionals in the web design and SEO industries.

Further studies into the effects of non-compliance are certainly needed.

About The Author
Sasch Mayer, a writer with well over a decade’s experience in the technology and internet sectors, is currently living in Larnaca on the Cypriot south coast. He writes under contract to IceGiant, a web studio specialising in W3C compliant web design in Cyprus, the UK and the rest of the world.

Hope you enjoyed the article… please feel free to leave a comment 🙂

Dan

4 thoughts on “The Myth of W3C Compliance? Some common misconceptions…”

  1. I’ve always thought that W3C compliance is overrated. It’s good practice to be sure, but a more important thing when building a site should be that it views correctly in all browsers.
