I validate every page I publish against the W3C Validator. Some of you may call me on that; if I’ve overlooked any pages, please do. Many of the developers I’ve worked with hold the position that if the page works, and all devices render it as expected, validation doesn’t matter. These are usually the same developers who (at the time) religiously published with an XHTML doctype without ever extending their pages, not even once. Below are the reasons I consider (X)HTML validation important, and why I continue to do it.
It’s important to note that when validating code, there is a huge difference between validation errors and validation warnings. For example, an unnecessary warning about a role attribute on a <nav> element is not going to bring down the house.* If I have a lot of warnings, it tells me I should look more closely at why I’m generating them; see "habits" below.
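For instance, markup like the following validates, but typically draws a warning because the role duplicates what the element already implies (the links here are just placeholders):

```html
<!-- <nav> already carries an implicit role of "navigation",
     so the explicit attribute is flagged as unnecessary -->
<nav role="navigation">
  <a href="/">Home</a>
  <a href="/validation">Why I Validate</a>
</nav>
```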
- It’s not (just) about quality and ego. Some of my reasons are in line with the W3C’s reasons for validating; some are not. That it’s "a sign of professionalism" and that it "eases maintenance" are not good arguments, IMO. The first is rooted in ego, and as for the second, many poorly formatted (X)HTML pages, or pages output by automated processes, validate perfectly, yet their maintenance can still be a nightmare. A couple of the other reasons I agree with strongly, as described below.
- Debugging. This one is at the top of my list, for two reasons.
- Layout and Cross Browser Issues. If you don’t validate your pages for any other reason, make it this one. It’s not as big an issue these days, now that most browsers are W3C compliant, but a stray or missing end tag, improperly nested elements, missing end quotes, and other source code oversights can make your development cycle a nightmare. It works in this browser or operating system and breaks in that one. You can’t get an element to position no matter how sure you are you’ve assigned the CSS correctly. It only takes a quick run through the validator to show your oversights (or, in my case, to show me I’m not as smart as I thought I was).
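A contrived sketch of the kind of markup that slips past a visual check but a validator flags immediately:

```html
<!-- Improperly nested elements: the <em> closes after its parent <p> -->
<p>Welcome to the <em>site</p></em>

<!-- Missing end quote: everything up to the next quote character is
     treated as part of the class value, silently swallowing markup -->
<div class="sidebar>
  <span class="promo">Sale ends soon</span>
</div>

<!-- Stray end tag with no matching start tag -->
</section>
```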
- JavaScript Anomalies. I use the browser’s developer console to detect JavaScript errors, but I often find I’m looking in the wrong place. It’s not the JavaScript; it’s something astray in the DOM. Consider the common oversight in this simple example:
View the Sample Code
and see it in action here.
If we "expect" the second link to trigger an alert, it doesn’t, the first one does. This is because in traversing the DOM, it finds the first instance of ID my-alert-anchor and attaches the behavior to that instance. It never gets to the second. A quick run through the validator will tell you that two elements with the same ID is invalid HTML. Javascript won’t tell you anything is wrong, and sometimes it might even work. But a developer maintaining your code will likely be confused and may curse your name when they find out this is the problem. My first stop when JS isn’t working correctly is to check the output itself against the validator.
- Browser expectations are (more) predictable. Back in "the day," getting your pages to look even close to what you expected in all browsers was a real challenge. It’s not as bad today, because most browsers adhere to standards, but the rule still holds true: make sure your pages validate and they should render close to the same in all browsers and on all platforms. A company I recently worked with had a terrible struggle with cross-browser compatibility due to a pervasive front-end framework that often required invalid code and invalid nesting of elements. Their answer to support requests? "Use browser XYZ." That is not an acceptable answer, IMO.
- Good for SEO. People put a lot of effort into SEO best practices: submitting to search engines, keyword research and implementation, content management. But all of that goes to waste if large chunks of your code are "fine" in your browser but disappear into the void when a search engine reads them. Some search engines will drop pages from the index if (X)HTML errors are encountered. Search engines are machines. They don’t forgive oversights the way browsers do, and they won’t parse invalid elements the way a browser does. In the old days we also had to be very cautious about Quirks Mode, which brings me to the next reason I validate.
"The power of the Web is in its universality. Access by everyone regardless of disability is an essential aspect." – Tim Berners-Lee
- Helps develop good habits. The W3C calls it "best practices," but practices are really habits. When I find myself fixing the same silly oversight every time I validate, I stop coding that way. Like anything else done in repetition, once I identify the problem and get in the habit of not repeating it, I build good habits, which means I’m working faster and more efficiently.
- Trains me for mobile and WCAG/WAI. You may or may not remember a nationwide retailer getting hit with a large lawsuit a few years ago because their web site was not accessible to impaired users – users who are blind or who, due to disability, cannot use a pointing device. While I do not agree with the litigious society we’re in, the inset quote above reflects my views on published web pages. Your content should be accessible to all users regardless of disability, and we don’t get to tell them they need to upgrade their systems or use a different browser. Getting our clients’ content to everyone regardless of disability – even if you consider that disability to be Internet Explorer or Edge – is our job. See Make Sure Pages are Accessible for more information on how I prepare pages for web accessibility.
The great news is, if you’re in the habit of validating your pages, you’re most of the way there. It only takes some additional media queries to make your site responsive and mobile-friendly, and a few attributes to make it WCAG compliant.
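As a rough illustration of how small those additions can be (the selectors, breakpoint, and filenames here are placeholders, not a recipe for full WCAG conformance):

```html
<!-- A viewport meta tag plus a media query handles most small screens -->
<meta name="viewport" content="width=device-width, initial-scale=1">

<style>
  /* Stack the sidebar below the main content on narrow screens */
  @media (max-width: 600px) {
    .sidebar { float: none; width: 100%; }
  }
</style>

<!-- A few attributes go a long way toward accessibility -->
<img src="logo.png" alt="Acme Widgets logo">
<label for="email">Email address</label>
<input type="email" id="email" name="email" aria-required="true">
```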
These are the main reasons I validate my code. Clients can rest assured the product is future-proofed, stable, and reliable.