"Free website check", the first trap
The problem with single-page analyzers
The problem with SEO scores
The problem with SEO checklists and tools based on them
23 issues to be debated
The problems with tools in general
The most important SEO tool ever
A long time misunderstanding
Conclusions

 

"Free website check", the first trap

It often works like this: you know very little about Search Engine Optimization, but you know you want to optimize your website.

So you google for keywords like [free seo checker], [check seo score free], [free seo check online], or some other variation, and you land on a page asking for your website URL.

You insert your Home Page address, and after a few seconds you get a full list of what is supposedly good or bad. Sometimes there's also a cool "SEO score", whatever that might mean.

The list of what is supposedly wrong is sometimes impressive, and you might be scared by it; more often than not you are then invited to leave your details to get a free quote for a consultation done by a human.

Other times you just stumble upon a popular article titled "The 10/20/.../50 most common SEO mistakes" or something similar.

Sometimes the items listed are not errors at all; other times it really depends on the situation. You need context to judge.

The purpose of this article is to help people with very little SEO knowledge survive that first step, and possibly save money, time and bad experiences. Another goal is to give SEO beginners the knowledge to overcome previously acquired dogmas and grow further in the trade.

The problem with single-page analyzers

Would you ever buy a house after having looked only at its facade?
It might say a lot, but if you want to buy a house you are supposed to visit it, to enquire about yearly expenses and any extraordinary expenses to be expected, to know what works were done, to know about the foundation, the terrain, the neighborhood, the sun exposure, the distance from the center, the available transportation, and so many other things.

Those single-page analyzers just look at the home page, the facade.

Not only that, they look at details that sometimes have very little to do with the real value of the website in terms of SEO (especially when taken alone), performance, design, communication, usability, user experience... but we'll talk about those details later.

The problem with SEO scores

Having a "SEO score" is cool, right? Can you imagine being able to show off a "97/100 SEO score" on social media?
The problem is SEO scores are meaningless.

No one-dimensional number can describe how a website performs in terms of search engine optimization.

First, it's like trying to evaluate the quality of a person with a single numeric value. Say it's a number from 0 to 10. Suppose you say it's a 7.
What on earth does it mean? Is it a man or a woman? Fair-haired or brunette (and which would be better)? Is she a good person? A scientist, an athlete? If an athlete, is she a sprinter, a marathoner, or a weight lifter?

OK, suppose we evaluate several qualities with distinct scores, and then take an average. The final average says nothing about the values used to compute it (unless it's the minimum or the maximum). Taken alone it means nothing.
And SEO is not all there is. We might craft a website with a perfect "SEO score" (whatever that means) that has no usability, an awful user experience, is painfully slow, and doesn't convert at all. Got the point?

The problem with SEO checklists and tools based on them

Poor SEO tools are everywhere, but what normally makes them bad is not the programmer's coding skill, it's his/her lack of technical SEO knowledge.
The programmer reads a 101 SEO guide, usually dating back to the '90s and already full of wrong concepts back then, and tries to build a tool upon it.
Other programmers build clones based on the same wrong concepts, and now there are more tools saying the same wrong things.
Years pass, and the tools get staler and staler, but they are cheap so everyone uses them, and users mutually reinforce their beliefs because the tools said so.

23 issues to be debated

What follows is a partial list of commonly found "SEO issues" we think should be discussed.
Some are real issues whose impact can be negligible, some are not even errors; in all cases we believe SEOers should be able to understand all the whys and hows behind them.

SEO Error? sitemap.xml not found

There are two major problems with this test.

First, nowhere is it written that an XML Sitemap should be named sitemap.xml; it can have any name and any extension, and in some cases a sitemap can even be located elsewhere.
Using /sitemap.xml is just a common convention, nothing carved in stone, and in fact it is often not what happens.
So not having a /sitemap.xml file does not mean anything.

Maybe you want to rephrase the issue as "Not having an XML Sitemap", then?
Well, is that an SEO mistake?

The hard truth is that 99.9% of websites do not gain any SEO benefit from having an XML Sitemap.

If your website has just a few tens or hundreds of pages, and they don't change often (take a blog for example, with only new content added), more often than not an XML Sitemap listing all your pages will not bring any advantage.

The main purpose of XML Sitemaps is discoverability, and search engines can usually find all of a website's content simply by following internal links. That is their primary mechanism of content discovery.
In such cases the sitemap adds no benefit.

A sitemap could help when content is not reachable via internal linking, but you had better fix your navigation then.

XML Sitemaps also have secondary purposes, each of them rarely important:

  • Suggesting to the search engine the canonical URL for a page (i.e. the preferred URL to show in the SERP).
    Consistent internal linking and the canonical tag are a better way to deal with the issue.
  • Specifying a preferred crawl frequency for your pages (the changefreq tag).
    The truth is you don't have a crystal ball to know when and how often your page content will change, and the value is likely ignored by search engines anyway.
  • Specifying a crawl priority for your pages (the priority tag).
    It could have a point, but a good link structure should already ensure it, and you do need a good link structure anyway.
  • Specifying the date your pages were last modified (the lastmod tag).
    Likely the only one worth considering. The Last-Modified HTTP header is often a better way to communicate it than the lastmod XML tag, although a search engine can read the lastmod value upfront, before deciding whether to actually crawl the resource. The problem is that far too many sitemaps are generated with bogus lastmod values; a search engine soon realizes the resource didn't in fact change and loses trust in the site's XML Sitemap. If you use it, make sure it is accurate.

So, when are XML Sitemaps really useful?

Huge websites with tens of thousands, hundreds of thousands, or millions of URLs - many of them with content changing frequently - those are the websites that can make the best use of the XML Sitemap protocol.
E-commerce sites, classified ads, real-estate listings... all the classes of websites with pages going stale and other pages needing updates, where you cannot just temporarily link a changed URL from the home page to get its re-indexing prioritized, simply because there are far too many of them. That is the class of websites we are talking about.

Suppose you have a huge e-commerce site with thousands and thousands of product pages. You decide to change the price of a product line, and need a few dozen product pages to be indexed again so that the search engine gets an updated view of them as soon as possible. You build a dedicated, temporary sitemap with only those product pages, and submit it to the search engine.

This is just one of many possible examples.
Not only are you telling the search engine the exact list of pages you want re-indexed, you also get a better understanding of their indexing status via the Google Search Console Sitemaps page.
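
For illustration, here is a minimal Python sketch of such a dedicated sitemap (the URLs and dates are made up; this is not how Visual SEO Studio generates its sitemaps, just the shape of the output file):

    # Minimal sketch: build a temporary XML Sitemap for a selection of changed URLs.
    # The URLs and dates below are placeholders for the example.
    from xml.sax.saxutils import escape

    changed_pages = [
        ("https://www.example.com/products/widget-a", "2019-06-01"),
        ("https://www.example.com/products/widget-b", "2019-06-01"),
    ]

    entries = "\n".join(
        "  <url>\n"
        f"    <loc>{escape(url)}</loc>\n"
        f"    <lastmod>{lastmod}</lastmod>\n"   # include lastmod only if it is accurate
        "  </url>"
        for url, lastmod in changed_pages
    )

    sitemap = (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
        f"{entries}\n"
        "</urlset>"
    )

    with open("sitemap-price-update.xml", "w", encoding="utf-8") as f:
        f.write(sitemap)

The resulting file can then be submitted via Google Search Console and removed once the pages have been recrawled.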

As the makers of an SEO tool, we added an XML Sitemap generator to Visual SEO Studio with all these concepts in mind: unlike other sitemap generators, it can generate a sitemap containing only a selection of web pages.

Guess what? The feature - to our own surprise - has been one of the most appreciated ever since, mostly by people working on e-commerce sites.

An XML Sitemap generated in real time by the e-commerce platform would be preferable (because it would always be up to date), yet most platforms lack the ability to build a dedicated sitemap with only a selection of pages, or do a poor job of it. Hence, many site owners have found it more convenient to get a license of Visual SEO Studio Professional Edition for a small cost and save a lot of money nurturing their e-commerce websites.

Note: another case where a temporary sitemap helps is a site migration, with an XML Sitemap listing all the old URLs (now answering with HTTP 301 redirects) so that the search engine discovers them more quickly, without treating the new URLs discovered via link crawling as duplicate content.

SEO Error? sitemap.xml not indicated in robots.txt

There might be a point here (if we rephrase it as "XML Sitemap not indicated in robots.txt", since we cannot assume the sitemap name). We ourselves actually do suggest listing your sitemaps (not a single one containing the whole site page list) in robots.txt Sitemap: directives.
The reason is that there is more than one search engine: by listing them there you avoid the need to open a webmaster account with every search engine and submit your XML Sitemaps to each of them.
Yet the advantage is very limited, often negligible.
Suppose you only get organic traffic from Google (in the majority of countries it has a 97% market share or more): you could, and should, have listed your XML Sitemaps in Google Search Console anyway, with no need to list them in robots.txt.
Add the negligible advantage of having an XML Sitemap at all in most cases, as discussed in the previous point, and you can see why calling it a mistake is too strong a word.
If your site is one of the few that really need XML Sitemaps, your sitemap list probably changes often, and if you only care about Google you might find it easier to administer it via Google Search Console rather than editing the robots.txt file each time.
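
The directive itself is just a line such as "Sitemap: https://www.example.com/sitemap-products.xml" anywhere in robots.txt. As a side note, checking what a site already declares there is easy with Python's standard library parser (a minimal sketch; example.com is a placeholder, and site_maps() requires Python 3.8 or later):

    # Minimal sketch: list the Sitemap: directives declared in a site's robots.txt.
    from urllib.robotparser import RobotFileParser

    rp = RobotFileParser("https://www.example.com/robots.txt")   # placeholder domain
    rp.read()                   # fetches and parses the robots.txt file

    sitemaps = rp.site_maps()   # list of declared sitemap URLs, or None if there are none
    if sitemaps:
        for sitemap_url in sitemaps:
            print("Declared sitemap:", sitemap_url)
    else:
        print("No Sitemap: directive found in robots.txt")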

SEO Error? robots.txt not found

A missing robots.txt file is perfectly valid according to the Robots Exclusion Protocol; it just means that user agents are not restricted from crawling the website.
If that's what you want, how can it be considered an error?

Most of the time site owners do not restrict much in their robots.txt file; the majority of WordPress-based websites use a cookie-cutter robots.txt whose content could even safely be omitted.

Again, the websites really needing a robots.txt file are complex websites, largely the same cases that benefit from using XML Sitemaps.
For those sites, being able to block sections of the site to the (mostly well-behaved) search engine bots is a practical and inexpensive way to reduce crawl budget consumption.

We normally prefer using a robots.txt file, but for a large majority of websites not having one is not an issue.

Note: when Visual SEO Studio does not find a robots.txt file it reports it with a "warning", which is not meant as a "light error" but as something to look at to see whether it is accidental or intended: robots.txt files tend to "disappear" by accident, especially in some corporate work environments with large web development teams...

SEO Error? Using a robots.txt

If some checklists report not having a robots.txt file as an error, there are others that report having one as an error!
Here the justification is even weaker: they say that if you use the Disallow: directive (which is the main purpose of robots.txt files) you may give away the folder names of private website areas.
Private areas should always be password protected; that's the first and most important protection. There are cases where adding their folder to a Disallow directive makes sense, when a direct link to the private area could otherwise be found on a public page, but then one could find them anyway.

SEO Error? Uppercase letters in URL

There are plenty of tools and guides saying that page URLs should always be all lower case.
This is just a widespread convention, and not respecting it is not a mistake. The HTTP protocol and URL specifications permit ASCII letters in URLs no matter whether they are upper case or lower case.
So no, that's not a mistake.

Why do some tools report it as an issue, and why is it a convention?
Early webmasters mostly coded HTML pages by hand; CMSs (Content Management Systems) did not exist or were in their infancy. Links were typed manually, and an easy way not to get the URL casing wrong was to keep it all lower case. Any other convention would have been just as valid: all upper case (but that makes URLs look ugly), or any other combination.
Today there still are many occasions where link URLs are typed manually, even on WordPress sites, so the convention can still make sense. What does not make any sense is that some people believe an upper-case character in a URL could get it penalized!
The real rule for URL characters in links is: they have to be correct, matching exactly the URL they are meant to point to.
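
A minimal Python sketch of that last point, assuming RFC 3986 semantics: the scheme and host of a URL are case-insensitive and can safely be lowercased, while the path is not, so a link must reproduce the path casing exactly as the server expects it.

    # Minimal sketch: normalize only the case-insensitive parts of a URL.
    # The path is left untouched: per RFC 3986 it is case-sensitive
    # (whether the web server actually treats it as such is up to the server).
    from urllib.parse import urlsplit, urlunsplit

    def normalize_case(url: str) -> str:
        parts = urlsplit(url)
        return urlunsplit((
            parts.scheme.lower(),   # "HTTPS" and "https" are the same scheme
            parts.netloc.lower(),   # "Example.COM" and "example.com" are the same host
            parts.path,             # "/About-Us" and "/about-us" may be different pages
            parts.query,
            parts.fragment,
        ))

    print(normalize_case("HTTPS://Example.COM/About-Us?ref=Home"))
    # -> https://example.com/About-Us?ref=Home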

SEO Error? Underscores in URL (instead of hyphens)

This one can indeed be considered a minor SEO mistake, at least for Google:
historically, Google has not treated underscore characters as word separators. The reason is that in the early days many searchers were programmers looking for programming constant names, which use underscores. So not treating underscores as word separators was an optimization for a common-case scenario.

To the best of our knowledge, this is valid even today.

So why include it in this list?
Because we feel the seriousness of the issue is too often overstated.

  • Many websites (wikipedia.org being the most prominent) fare pretty well even with underscores in their URLs.
  • Fixing it means not only rewriting the URLs: you also have to set an HTTP 301 redirect from each old URL to the new one, wait for the search engine to index the new URLs, and let it learn that the newly discovered URLs are actually the old ones by recrawling the old ones and hitting the 301s (a temporary sitemap could help here, as already explained in the Sitemaps section).

Often webmasters consider the effort not worth the benefit, and adopt the rule of thumb of not using underscores in newly created URLs while leaving the old ones as they are.

A wannabe SEO should know why the rule exists, that its impact is often negligible, and what it takes to fix the issue if they really want to.
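
For those who do decide to fix it, here is a minimal Python sketch (with made-up URLs) of the kind of one-to-one mapping needed before configuring the 301 redirects; how the rules are then deployed depends on the web server or CMS in use.

    # Minimal sketch: derive a 301 redirect map from underscore URLs to hyphen URLs.
    # Only the path is rewritten; each pair then becomes a permanent redirect rule
    # in the web server configuration (Apache, nginx, a CMS plugin, ...).
    from urllib.parse import urlsplit, urlunsplit

    old_urls = [   # made-up examples
        "https://www.example.com/blog/my_first_post",
        "https://www.example.com/blog/seo_check_lists_reviewed",
    ]

    redirect_map = {}
    for old in old_urls:
        parts = urlsplit(old)
        new = urlunsplit(parts._replace(path=parts.path.replace("_", "-")))
        if new != old:
            redirect_map[old] = new   # old URL -> new URL, to be served as HTTP 301

    for old, new in redirect_map.items():
        print(f"301: {old} -> {new}")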

SEO Error? Percent-encoded characters in URL

Also stated as "Unicode characters in URL", or "Diacritics in URLs", this is widely - and wrongly - supposed to be a mistake.

Let's explain the issue a little better:
since URL specifications only accept ASCII characters, early URLs only used characters from the Western alphabet. A common convention - still largely in use - for languages with diacritics was to use the equivalent letters without the diacritical sign, for example "e" instead of "è".
For languages using non-Western alphabets, webmasters were usually forced to use English for URLs; for some languages, like Russian, transliteration conventions to the Western alphabet were used. Neither solution was satisfactory for users, and both prevented search engines from getting context from the URLs of many languages.

Later some innovations were introduced: to have diacritics or other Unicode characters in the URL path, they are percent-encoded before going over the wire. For domain names (IDN, Internationalized Domain Names), Unicode characters are transformed with another encoding called Punycode.

Today's browsers perform the transformation mostly transparently, so users see an address - for example - in Cyrillic without all those % signs followed by character pairs that would make the URL unreadable.
All major search engines are able to understand Unicode characters in URLs as well.

Unfortunately, far too many wannabe SEOers still think we are in the early '90s.

To recap: there is nothing wrong with using Unicode characters in URLs, and if they help users - especially for non-Western languages - you should use them.

We at aStonish only recommend not using spaces (percent-encoded as the %20 sequence) in URLs, because URLs are often copied into e-mails, e-mail clients try to render them as links, and a space in the address makes them believe the URL ends earlier than it does.
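
A minimal sketch of the two transformations using Python's standard library (the Cyrillic strings are placeholders; note the built-in "idna" codec implements the older IDNA 2003 rules, which is enough to show the idea):

    # Minimal sketch: what browsers do under the hood with non-ASCII URLs.
    from urllib.parse import quote, unquote

    # 1) Path segments: Unicode characters are percent-encoded on the wire...
    path = "/статьи/поисковая-оптимизация"     # a made-up Cyrillic path
    encoded = quote(path)                      # safe characters like "/" are kept
    print(encoded)                             # /%D1%81%D1%82%D0%B0%D1%82%D1%8C%D0%B8/...
    # ...and decoded back for display in the address bar.
    print(unquote(encoded) == path)            # True

    # 2) Domain names: Unicode hosts are converted to Punycode (IDNA encoding).
    host = "пример.испытание"                  # a made-up internationalized host name
    print(host.encode("idna").decode("ascii")) # xn--e1afmkfd.xn--80akhbyknj4f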

SEO Error? Low text-to-HTML ratio

Several programs report a text-to-HTML ratio below a certain threshold as an error.
There is no magic ratio below which one gets penalized. This piece of information should be taken with a grain of salt.

There are well-ranked and badly-ranked web pages across the whole range of ratios.
Search engines care very little about how much HTML there is in your pages compared to the amount of text. In fact, before evaluating a page they try their best to eliminate boilerplate elements, locate the main content, and extract the plain text from it.

Let me explain why the metric exists:
it is an easy way for SEO tool makers to try to locate thin-content pages.
The reasoning is that when a page has little or no content, its text-to-HTML ratio will be low.

It is a little like the canaries used in the trenches in WWI to detect poison gas. If a canary dies, it could be because of a gas attack. Or it could be for some other reason.

An additional indication the measure can give is that the website template has an overly complex HTML structure, making pages on average heavier and slower to render.
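
For the record, the metric itself is trivial to compute. This minimal Python sketch (standard library only) shows both why it is cheap and why it is blunt: everything, boilerplate included, ends up in the count.

    # Minimal sketch: compute the text-to-HTML ratio of a page.
    # It counts *all* visible text, boilerplate included, which is exactly
    # why the metric is such a blunt instrument.
    from html.parser import HTMLParser

    class TextExtractor(HTMLParser):
        def __init__(self):
            super().__init__()
            self.skip_depth = 0    # while inside <script>/<style>, ignore text
            self.chunks = []

        def handle_starttag(self, tag, attrs):
            if tag in ("script", "style"):
                self.skip_depth += 1

        def handle_endtag(self, tag):
            if tag in ("script", "style") and self.skip_depth:
                self.skip_depth -= 1

        def handle_data(self, data):
            if not self.skip_depth:
                self.chunks.append(data)

    def text_to_html_ratio(html: str) -> float:
        parser = TextExtractor()
        parser.feed(html)
        text = " ".join("".join(parser.chunks).split())   # collapse whitespace
        return len(text) / len(html) if html else 0.0

    sample = "<html><head><title>Demo</title></head><body><p>Hello world</p></body></html>"
    print(round(text_to_html_ratio(sample), 2))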

SEO Error? Low word count

I bet you have read in many places several theories about the perfect number of words a page should have to rank better in the SERP.
300 words, at least 300 words, 500 words, 1500 words, 1850 words, 3000 words... many job ads looking for copywriters cite one of those figures as a requirement.

Again, there is no exact number of words that positions a page. It largely depends on the keyword or key phrase. If it is a completely made-up word not used anywhere else, you could write a single-word article with it and rank first (and only) in the SERP. When it's a more competitive search term things change, but there is still no magic number. One should write the "right" number of words, whatever that is, i.e. not bother much about the exact word count and use as many words as needed to express the concepts.

There are several reasons an SEO tool would offer a "word count" report:

  • to help editors enforce internal policies based on word count (whether those policies are based on solid reasons or not does not matter; editors often ask for it)
  • as in the text-to-HTML case, to locate thin-content pages. It's easier to count the words in the whole page and list the pages with the lowest counts than to first prune all the boilerplate (note: Visual SEO Studio's "Readability" feature also has a word count metric, but it first removes all boilerplate content and is thus better suited to spotting thin-content pages)
  • to locate hacked pages with hidden content

SEO Error? Multiple title tag or meta description

Yes, a page with multiple title tags or meta descriptions is indeed an SEO mistake, but what's the impact?
If the two title tags are identical, there are no consequences: the search engine simply picks the first one. Ditto for the meta description.
It's another matter when the first one is empty, or the two have different content.
So: it is an SEO mistake and should be fixed. In some cases it may be harmless.
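
Spotting the condition is easy to automate. A minimal Python sketch (standard library only) that counts title tags and meta descriptions in a page, leaving the judgement on severity to the reader, as argued above:

    # Minimal sketch: count title tags and meta descriptions in an HTML document.
    from html.parser import HTMLParser

    class HeadTagCounter(HTMLParser):
        def __init__(self):
            super().__init__()
            self.titles = 0
            self.meta_descriptions = 0

        def handle_starttag(self, tag, attrs):
            attrs = dict(attrs)
            if tag == "title":
                self.titles += 1
            elif tag == "meta" and (attrs.get("name") or "").lower() == "description":
                self.meta_descriptions += 1

    sample = ('<html><head><title>Page</title><title>Page</title>'
              '<meta name="description" content="..."></head><body></body></html>')
    counter = HeadTagCounter()
    counter.feed(sample)
    print("title tags:", counter.titles)                    # 2 -> duplicated, worth a check
    print("meta descriptions:", counter.meta_descriptions)  # 1 -> fine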

SEO Error? Multiple H1 headings

Let me say it straight: having multiple H1 headers in a single page is not a mistake.
It has never been an HTML validation error, even before HTML5. HTML5 even encourages it.

The "One H1 per page" rule is only a handy SEO convention. Since H1...H6 tags are hierarchical, it makes sense using a single H1 as the document title in the visible page content (the title tag has similar goal, but is not visible in the rendered page content).

What happens, from an SEO perspective, if more than one H1 tag is used?
Nothing dramatic: the weight associated with the H1 tag is split equally among the H1 tags used; the weight of the H2...H6 tags is scaled accordingly.

There are instances where the error is involuntary: for example, some badly crafted site templates wrap boilerplate elements (the logo, text in side elements...) in H1s or other H* headings. That should definitely be fixed, and the user should have a way to know about it.

At aStonish we still suggest using a single H1 per page for practical reasons, without claiming that not doing so is an SEO mistake.
Our SEO tool Visual SEO Studio does list pages with more than one H1 tag in its "HTML Suggestions" report; it does not call it an error, it reports it as a Warning (i.e. something the user might want to know and check), because with the single-H1 rule being so widespread and accepted, the multiple occurrences could well be involuntary.

SEO Error? title and H1 are equal

Some tools and checklists report having identical title and H1 tags as an SEO mistake.
The reasoning is: by using the same text, you are missing the chance to rank for different terms or synonyms. It makes some sense.

We must point out that the page title tag commonly also contains the site brand name at its tail, so a simple strict comparison would not work.
But is the fact they match really a problem?

Here at aStonish we have built two distinct bespoke CMSs in the past (yes, there are occasions when they make sense, mostly to integrate with legacy code). In those cases we decided to automate title generation by concatenating the (mandatory, in our systems) H1 content with the site brand name. This is a solution common to many other CMSs.
Not only that: we generated the meta description from the same content displayed in a prominent page subtitle.
We still adopt the same approach on the visual-seo.com website.

Why did we choose that?
Title and meta description have a drawback: they are not normally visible to the site's human editors (or at least not prominent to the eye; it depends on the tool they use).
Over time pages are maintained: titles change, text changes, content may also change semantically. Those titles and meta descriptions can become stale and no longer reflect what the web page is all about. Binding them to a visible portion of text prevents that.

People in charge of editing content rarely have an SEO mindset; simply put, to them title and meta description are something the CMS asks them to fill in for no apparent reason, since the page looks the same either way. So they leave them blank. Maybe you explained what they mean to some of them, but they forgot it in no time, maybe never really understood the importance, and the word wasn't passed on to new team members.

We believe composing the first part of the title tag from the H1 content is a solution that outweighs the minor advantage of having two differently crafted texts, since it guarantees the tag is always populated and aligned with the page's purpose.
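
The generation logic itself is trivial; something along these lines (a minimal sketch with placeholder values, not the actual code of our CMSs):

    # Minimal sketch of the approach described above: the title tag is derived
    # from the mandatory H1 plus the brand name, and the meta description from
    # a visible subtitle, so both always stay aligned with the visible content.
    BRAND = "Example Brand"        # placeholder brand name
    SEPARATOR = " | "

    def build_title(h1_text: str) -> str:
        return f"{h1_text.strip()}{SEPARATOR}{BRAND}"

    def build_meta_description(subtitle_text: str) -> str:
        return " ".join(subtitle_text.split())   # just normalize whitespace

    print(build_title("23 SEO issues to be debated"))
    # -> 23 SEO issues to be debated | Example Brand
    print(build_meta_description("  A critical look at common SEO checklists.  "))
    # -> A critical look at common SEO checklists.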

SEO Error? Images missing ALT attribute

Not populating the ALT attribute with descriptive text normally is an SEO mistake.

However, few know that, according to the HTML specifications, the ALT attribute should be left empty for irrelevant images.
"Irrelevant" images are images added as decorative elements, without adding informational value.

Normally, only the author can decide whether an image is decorative or informative. A report listing the images missing the ALT attribute is a precious ally.
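
Building such a report is straightforward; a minimal Python sketch with the standard library parser follows (deciding which of the listed images deserve a description, and which should get an explicit empty alt="", remains the author's call):

    # Minimal sketch: list the images of a page that have no ALT attribute at all.
    # Images with alt="" are reported separately: they are declared decorative.
    from html.parser import HTMLParser

    class ImgAltAudit(HTMLParser):
        def __init__(self):
            super().__init__()
            self.missing_alt = []   # <img> with no alt attribute: to be reviewed
            self.empty_alt = []     # <img alt="">: explicitly decorative

        def handle_starttag(self, tag, attrs):
            if tag != "img":
                return
            attrs = dict(attrs)
            src = attrs.get("src", "")
            if "alt" not in attrs:
                self.missing_alt.append(src)
            elif not (attrs["alt"] or "").strip():
                self.empty_alt.append(src)

    audit = ImgAltAudit()
    audit.feed('<img src="/logo.png" alt=""><img src="/chart.png">'
               '<img src="/team.jpg" alt="Our team">')
    print("Missing alt (review these):", audit.missing_alt)   # ['/chart.png']
    print("Empty alt (decorative):", audit.empty_alt)         # ['/logo.png']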

Note: keep in mind that the ALT attribute's main purpose is to assist visually impaired users who access your content with a text-to-speech device.
SEO should always be seen as a secondary goal.

SEO Error? Redirect chains

It's true: multiple redirects increase the number of HTTP requests needed to reach the destination URL, thus increasing the overall time and worsening the user experience.
Ideally, redirect chains should be removed and a single redirect to the final destination used instead.

Redirections have the further SEO disadvantage of delaying exploration by Googlebot: the search crawler does not follow the redirection directly, it sends the information to Google, which will schedule a visit to the newly discovered URL (if it is a URL it has never met before).

But is it all that bad?

Googlebot can follow up to five redirects in a chain. Browsers have an even higher tolerance.
Plus, permanent redirections are cacheable: once a browser is served an HTTP 301 redirect it remembers the destination URL, and the next time the user clicks on the same link the browser skips the intermediate step.
Browsers remember (permanent) redirections, and so do search engines.

A typical scenario is when you want to migrate a site, say from HTTP to HTTPS, and you also decide to change from the www. version to a naked domain.
You could set up a single redirection rule to do it in one shot (something we do recommend), but you find it easier to copy rules you found on the Internet that you are sure work, and paste them into your .htaccess file.
Are you shooting yourself in the foot? At worst the migration will take a little longer, but if you can live with that it is not a tragedy. After some time your migration will be completed anyway.
There are also individual URLs redirected for normal page maintenance. Even if you set up a rule that handles the HTTP->HTTPS and www.->naked domain change in a single hop, the page-specific redirection will still be chained to it when a user clicks an old external link she found on some other site. You cannot always prevent the existence of redirect chains, as it would not be practical in many cases.
Yet the chains still work.

To recap: do minimize redirect chains, and don't worry too much if you cannot avoid a limited amount of chaining.
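
A minimal Python sketch (standard library only; example.com is a placeholder) of how a tool can measure chain length: request each URL without following redirects and walk the Location headers hop by hop.

    # Minimal sketch: walk a redirect chain one hop at a time and report its length.
    import urllib.error
    import urllib.parse
    import urllib.request

    class _NoRedirect(urllib.request.HTTPRedirectHandler):
        def redirect_request(self, req, fp, code, msg, headers, newurl):
            return None   # returning None makes urllib raise the 3xx instead of following it

    _opener = urllib.request.build_opener(_NoRedirect)

    def redirect_chain(url, max_hops=5):
        """Return the list of (url, status) hops and the final URL reached."""
        hops = []
        for _ in range(max_hops):
            try:
                response = _opener.open(url, timeout=10)
                status, headers = response.status, response.headers
            except urllib.error.HTTPError as err:   # 3xx (and 4xx/5xx) end up here
                status, headers = err.code, err.headers
            location = headers.get("Location")
            if status in (301, 302, 303, 307, 308) and location:
                hops.append((url, status))
                url = urllib.parse.urljoin(url, location)   # Location may be relative
            else:
                break
        return hops, url

    hops, final_url = redirect_chain("http://www.example.com/old-page")   # placeholder URL
    print(f"{len(hops)} redirect(s) before reaching {final_url}")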

SEO Error? No W3C validity

Non-validated HTML is not an SEO error per se.
Search Engines - simply put - do not care about it. Google will never penalize your site just because its HTML is not fully validated.

So why do you see this item mentioned in so many checklists?
It could be ignorance, or it could be that the checklist authors simply repeated it after reading it in some other list.

Is there some real motivation to justify it?

Partially: search engines need to parse HTML in order to extract the content text, title, meta tags, etc.
A very badly formed HTML document could prevent the search engine's HTML parser from accomplishing this basic task.

If you have a perfectly validated HTML document you can be sure nothing will break Google's parser, but it's like building a 300-meter-tall wall to guard your borders when a much smaller one would do.

Probably the vast majority of web pages do not pass a complete W3C validation test. Yet search engines parse them just fine.
It really takes bad HTML coding in a few critical spots to prevent Google and the other search engines from understanding your content.

So, having fully W3C-validated web pages is mostly a waste of time, and often largely impractical to achieve (it depends on the CMS, on the theme, on the plugins used...).
Ensure your HTML is not too bad - and most CMSs accomplish this minimal task - and you rarely have to worry.

Note: aStonish's product Visual SEO Studio comes with an integrated HTML validator.
It does not perform a full W3C compliance validation, yet we believe it to be a much better choice:

  • it only reports the most serious HTML validation errors, those that could impair your SEO (e.g. a badly formed title tag)
  • it is blazing fast!

SEO Error? title length less than XXX characters

You will find recommendations all over the Internet to keep title tags shorter than 71 characters, 70 characters, 65 characters, 55 characters, and so on.

Let me put it straight:
character-based title length in the Google SERP has been one of the biggest and most widespread myths in the SEO industry for years, unknowingly perpetuated by several SEO software vendors and SEO influencers mutually endorsing and reinforcing each other's belief. In good faith, mind you.

The real measure to look at is the title width measured in pixels.

I'm not smarter than others; I just come from a programming background, where it is well understood that if you need to fit text into a box using a non-monospaced font, you have to measure its size using the low-level text-measurement APIs offered by the operating system (GDI, on Windows). Browsers, in the end, are GUI applications like any other; they are bound by the same rules.

The letters "m" and "i" do not use the same real estate space (when using a non-monospaced font, like the one used in Google SERP). You can fit many more "i"s then "m"s in the same horizontal space.

Even Google, in its old (now dismissed) "HTML improvements" report in Google Search Console, never mentioned a character-based threshold when reporting titles that were too long (i.e. titles which appeared truncated in the SERP with a trailing ... ellipsis).
Of course there was no character-based threshold: the only thing that mattered was whether the width of the title text, rendered with the font used, exceeded the pixel width of the containing DIV element box.

It has always worked this way, at least for Google. Some so-called influencers claimed Google had changed its approach; so-called influencers are always ready to spread myths, and much less ready to admit their ignorance.

Now, not everyone in the SEO industry was oblivious to the fact that pixel width was what had to be taken into account, yet many SEO specialists adopted policies based on character length for lack of better tools; SEO tool builders kept providing character-based measures out of ignorance, or for lack of the technical knowledge needed to compute pixel-based width.
Many recommendations like "less than 55 characters" are based on statistical data: a character count below which one could safely assume the text would not be truncated when shown in the SERP.

It is a bit draconian, since it flags as errors many well-crafted titles that would not actually be truncated.

There is even more to be added:
title tags are normally composed - this is a widespread convention that we recommend too - of the actual page title, followed by a separator, followed by the site brand name.
Now, do you really care if the brand name part is truncated, when the SERP user can read the meaningful part of the title without problems?

It has also been demonstrated that even the parts of the title truncated in the SERP are considered by Google part of the title tag (which receives a weight when evaluated), up to 12 words. So even truncated titles can actually be a good means to give prominence to secondary keywords. What matters most is that the most important part of the title is readable.
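
Measuring the rendered width is not hard either. A minimal Python sketch using the Pillow library (a third-party package; the font file, the font size and the 600 px limit are placeholder assumptions, not Google's actual SERP values):

    # Minimal sketch: compare two title candidates by rendered pixel width,
    # the way a pixel-aware SEO tool would, instead of counting characters.
    # Requires Pillow (pip install Pillow) and an "arial.ttf" font file locally.
    from PIL import ImageFont

    font = ImageFont.truetype("arial.ttf", 20)   # placeholder font and size
    MAX_WIDTH_PX = 600                           # placeholder width limit

    for title in ("m" * 40, "i" * 40):           # same character count, very different widths
        width = font.getlength(title)            # rendered width in pixels
        status = "fits" if width <= MAX_WIDTH_PX else "would be truncated"
        print(f"{len(title)} chars, {width:.0f} px -> {status}")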

Note:
Visual SEO Studio - the tool produced by aStonish - was the first SEO tool ever to favor pixel-based width for titles and meta descriptions. We later also introduced a character-based metric at the insistence of agency clients still anchored to the old way, but we do not recommend it and do not give it prominence.
It is also probably the only tool to provide a report based on the number of words composing the title, in accordance with the 12-word rule.

SEO Error? meta description length less than YYY characters

As already explained for the title tag, the measure that really matters is the size in pixels of the meta description snippet.

In the meta description case, estimating whether the text will be truncated is more error-prone, because search engines often dedicate some of the available space to pictures, stars, or other elements that might be triggered by the page's structured data. The search query terms may also be highlighted in bold, taking a bit more space (though when that happens, it is also not uncommon for Google to choose another text fragment to show in the SERP snippet in place of the meta description).

Most of the time a character-based metric makes very little sense, and you should use a pixel-based metric, which provides a more precise result.
Again, Visual SEO Studio was the first SEO tool to provide - since its early public beta versions - metrics based on size in pixels, years before other vendors.

SEO Error? Use of hyphens in the domain name

There is no SEO penalty for using hyphens in a domain name.
And those who use them know perfectly well that they do, so it makes little sense to point it out.

Why would one pick a domain name with a hyphen?
Take the visual-seo.com case, for example. The version with no hyphen was not available. Not used, but held by domain squatters keen to make money by offering it at an outrageous price. No, thank you.
We wanted a domain matching our product brand name, and went for the hyphenated version because it does no harm.

There are other sensible reasons why one would prefer a hyphenated domain name.
Think about the quantity of unintentionally inappropriate domain names out there (e.g. whorepresents.com, expertsexchange.com, speedofart.com and so many others).
Sometimes a peer review and a simple hyphen can really spare you a lot of ridicule and embarrassment!

A better report would analyze non-hyphenated domain names for ambiguous readings.

SEO Error? Not using Dublin Core

Some analyzers you can find on the web report this error message: "This page does not take advantage of Dublin Core."

This makes me laugh even more than an unintentionally embarrassing domain name:
Dublin Core has nothing to do with SEO; it is not used by the major search engines and, to the best of my knowledge, it never was.
(Check for example Google's page "Meta tags that Google understands": Dublin Core is nowhere to be found.)

What is worse is that there are so-called influencers still spreading, in 2019, wrong SEO tactics inviting you to add Dublin Core markup to your page code!

But what is Dublin Core in the first place?

Dublin Core was a markup proposal to decorate HTML with metadata in order to describe entities for the semantic web. As far as I know it has never been used by search engines, and its scope has been vastly superseded by schema.org and microdata.

And why can you find various resources presenting it as an SEO technique?

That's what I wanted to find out.
I discovered that a Lithuanian developer put a PHP script on a marketplace which analyzes a single web page.
It goes through a checklist of disputable SEO recommendations (my bet is the author is not an SEO; he probably just read some checklist on the Internet) and also integrates with the Google PageSpeed API.

It does very few things, and does them poorly, inspecting a single page. But it presents the results in a nice way and scares users with the amount of supposed mistakes it finds.

The script has gained some popularity because it has a very low price, is multilingual, and is easy to customize and integrate into a website.
Many (wannabe?) web and SEO agencies have integrated it as a free tool for their users, as bait to offer them a paid consultancy afterwards.

To recap:

  • Dublin Core is not used by search engines.
  • Adding its markup to your pages is a complete waste of time.
  • If you have the energy to add metadata, a much, much better use of your time and resources would be adding the schema.org structured data recognized by Google!

SEO Error? [to have / not to have] meta keywords tag

Sadly, there are tools reporting one or the other.

Whoever wants to do SEO should have one thing clear in mind: the keywords meta tag is completely ignored by search engines.
Google never used it; others may have done so in the past. Bing once said they might look at it when hunting for keyword-stuffing signals, but they do not use it for ranking.

So using it does NOT help SEO.
And using it does NOT hurt SEO.

Is there any justifiable reason for one of the two contrasting recommendations or the other?

At aStonish we generally suggest not populating the keywords meta tag, both because it is useless and because - if you do your keyword research properly - it would mean giving your hard work away to your competitors for free. It's not rocket science to inspect a text for prominent search terms, but listing them in clear text in an easily accessible field makes no sense!

When could it make sense to use it?
Keeping in mind that the field is ignored by search engines, human editors might want a field in their CMS where they can save their target keywords for reference, and maybe use internal tools to assess how well the copy relates to those reference keywords. In such cases they might find it convenient to use the meta keywords field provided by their CMS simply because it's there, maybe not even knowing whether it would end up hard-coded in the HTML pages, or not caring.
When, years ago, we at aStonish built a bespoke CMS for a big customer (they needed to integrate a lot of legacy code and using a commercial CMS wasn't feasible), we added a Keywords field to the visual editor exactly for that reason.
The only practical difference was that it did not emit a meta keywords tag in the HTML, on purpose! We also provided tools to analyze the text and see whether it responded to the search intents listed in the Keywords field (rough, but still much, much better than the absurd so-called "keyword density"!), but I digress...

SEO Error? Keyword density above xx.x%

It is a very sad thing that in 2019 we still have to remind wannabe SEOers that the so-called "keyword density" (KD) is completely meaningless.
It is not used by search engines as a parameter, and it never, ever, ever has been.

To make it clearer: search engines do not work like this. Even classic textbook Information Retrieval theory never mentioned KD, because it does not model anything meaningful. KD fails to take far too many things into account (stemming, synonyms, word frequency in the language), and it is a very poor mathematical model on its own.

We thought that by now, after so many years spent trying to educate SEO professionals, this would be a well-established point. Yet there are still SEO tools around - some even popular - that provide a "keyword density" report for their users, thus reinforcing the belief that KD is a thing.

Is there any reason why a tool would provide a KD metric today?

Let me play devil's advocate for a moment. Some may consider KD a tool to detect spammed content, and it is very easy to compute.
Too bad all its shortcomings as a potential ranking factor show up again when it is used as a spam detector: for English, for example, you would find that the word "the" has a very high KD (some try to work around it by skipping "stop words", but KD remains a very, very poor tool anyway).

Are there other metrics that I could trust?

Information Retrieval provides other formulas that stand on much more solid theoretical and mathematical ground. A fairly popular one is TF-IDF, which is itself far from ideal since it again ignores things like stemming, synonyms, and language statistics. Like its best-known variants, it is a formula from the '70s, almost half a century old.
There are other formulas attempting to improve on it, but there is no perfect solution for an SEO tool, because most of them are hard for the average user to understand, or require heavy computation that would make the tool unusable.
Search engines like Google are far ahead of what is taught in universities when it comes to Information Retrieval. Their inner workings are mostly a trade secret; we know they leverage huge entity graphs to answer searches, something a relatively inexpensive SEO tool cannot afford to offer.
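
For reference, here is the classic formula in a minimal Python sketch (a toy corpus, no stemming, no synonyms, no language statistics - exactly the limitations mentioned above):

    # Minimal sketch: plain TF-IDF over a toy corpus, to show what the classic
    # formula looks like. Real search engines go far beyond this.
    import math
    from collections import Counter

    documents = [   # made-up mini-corpus
        "free seo checker online",
        "seo checklist and seo tools",
        "buy cheap running shoes online",
    ]
    tokenized = [doc.split() for doc in documents]
    n_docs = len(tokenized)

    def tf_idf(term, doc_tokens):
        tf = Counter(doc_tokens)[term] / len(doc_tokens)        # term frequency
        df = sum(1 for tokens in tokenized if term in tokens)   # document frequency
        idf = math.log(n_docs / df) if df else 0.0              # inverse document frequency
        return tf * idf

    for i, tokens in enumerate(tokenized):
        print(f"doc {i}: tf-idf('seo') = {tf_idf('seo', tokens):.3f}")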

We at aStonish have always flatly refused to add KD computation to Visual SEO Studio, and we will continue to refuse.
However, we do not rule out equipping Visual SEO Studio (or another future product) with a more reliable way to analyze how well a text matches a given search term.

SEO Error? Keyword not in meta description

The meta description is not used for ranking (at least by Google).
So adding your keyword to the meta description does not give you a ranking boost.

One benefit is that Google tends to highlight the search term in bold when it is found in the search snippet, so adding it to the meta description tag could help raise the CTR (click-through rate, i.e. the probability that a user will click on your SERP result).

Google is also able to recognize entities and synonyms, and sometimes highlights them in the search snippet; whether they are highlighted or not, it can still be worth adding synonyms to tell users the resource is really what they are looking for.
We thus recommend ignoring this rule and crafting a meta description that maximizes CTR.

SEO Error? Not having FB/Twitter/.. scripts/pixels

More often than not, social media scripts slow down web pages, so if anything they are detrimental to your page's SEO.
Performance aside, their presence does not influence the SEO of a page.

Our recommendation:
Use them if you need them, and try your best to minimize the impact they have on page speed.

The problems with tools in general

There are things that traditional tools simply cannot do. Take SEO tools, for example. You might feed them the address of a website, and they may find there are no broken links, no duplicated titles, and so on... Then you have a look at the website and it looks like a site straight from the '90s, with Comic Sans font, flashing text and animated GIFs. And unusable nested dropdown menus. And the text is poorly worded, or maybe slightly offensive to the reader.
Hint: before examining a website with a tool, always do a quick manual inspection first. Always!

The most important SEO tool ever

The most important SEO tool is the human brain. Full stop.

You might argue that with the proper amount of Artificial Intelligence a tool could indeed be able to evaluate that stuff too.
Well, it could be. But let me guess: weren't you looking for a free or low-budget tool? And you want regular updates and technical assistance, right? I bet you don't want to spend tens of thousands of dollars a year just for that!

Besides, even if such an AI-powered tool were possible today, and for little or no cost... how do you think a client would ever buy your SEO consultancy if she could use such a tool herself?
Luckily for SEO consultants, web agencies, and the whole band - and unluckily for website owners - such a tool does not exist yet and will not exist for quite some time.

When you do an SEO consultancy, you are not selling an automatically generated report. You are selling your know-how in interpreting the data exposed by tools. You are selling your experience in how things should be improved. And that experience does not come from a simple checklist, and often is not a pre-packaged, one-size-fits-all answer: each client has its own technical and budget constraints.
Maybe they tightly depend on a CMS/e-commerce platform that does not let them set redirects, edit structured data, or fix a bad faceted navigation. So you have to know how to work around the limitations with canonical URLs, robots.txt rules, etc.; you cannot stop at the checklist.

As an SEO tool maker, with Visual SEO Studio we are continually striving to strike a better balance between the amount of detail to provide, the ease of use of the product, and so on.

A long time misunderstanding

While writing this article I realized there is a long-standing misunderstanding between SEO tool makers and SEO tool users.

Creating software that emulates the human brain is terribly hard. So programmers build something less sophisticated, but users assume otherwise.

Take the problem of thin content, for example. Suppose the desired feature is a report listing all thin-content pages.

The proper way to do it would be to parse the HTML of each page, remove all the boilerplate parts (building a tool that always recognizes them reliably is not trivial), then extract the plain text of the page's main content, count the words, and decide whether they are too few (not always possible without context)...
This is similar to what the "Readability" feature of Visual SEO Studio does in good part (among many other things), yet it is not so simple to do.
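
To give an idea of that "proper way", here is a heavily simplified Python sketch: it strips a few tags that are usually boilerplate and counts the words left. A production feature has to do much more (recognize template blocks, handle real-world markup, decide thresholds), but the shape of the problem is this.

    # Heavily simplified sketch of the "proper way": drop the usual boilerplate
    # containers, keep the rest of the text, and count the remaining words.
    from html.parser import HTMLParser

    BOILERPLATE_TAGS = {"script", "style", "nav", "header", "footer", "aside"}

    class MainTextExtractor(HTMLParser):
        def __init__(self):
            super().__init__()
            self.skip_depth = 0
            self.words = []

        def handle_starttag(self, tag, attrs):
            if tag in BOILERPLATE_TAGS:
                self.skip_depth += 1

        def handle_endtag(self, tag):
            if tag in BOILERPLATE_TAGS and self.skip_depth:
                self.skip_depth -= 1

        def handle_data(self, data):
            if not self.skip_depth:
                self.words.extend(data.split())

    def main_content_word_count(html: str) -> int:
        parser = MainTextExtractor()
        parser.feed(html)
        return len(parser.words)

    sample = ('<html><body><nav>Home Products About Contact</nav>'
              '<main><p>Just a couple of words.</p></main>'
              '<footer>Copyright notice and links</footer></body></html>')
    print(main_content_word_count(sample))   # 5 - the nav and footer words are not counted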

So old-school tool makers resorted to hacks:

  • instead of extracting only the plain text of the main content, they extract it all - menus, footers and all the rest included - and sort by word count in ascending order.
    The software cannot know whether the first results actually are thin content: the count could be low because the template is light, or high because the template is heavy; it does not know which words belong to the main content.
    It's up to the users to check.
    But tool users assume that word count is a ranking factor.
  • or they compute the page-wide text-to-HTML ratio, assuming a thin-content page will have a low ratio, and show the list in ascending order.
    The software still does not know whether the first results actually are thin content.
    It's up to the users to check.
    But tool users assume that text-to-HTML ratio is a ranking factor.

Conclusions

Doing SEO professionally is not just following a "one size fits all" checklist, or forwarding a report generated by an SEO tool without human intervention.
Professionals should dig deeper and know the reason behind each common SEO recommendation, understand whether it is still valid, when it makes sense for their case, when it makes sense at all, and be able to adapt to the limitations imposed by technology, budget and time.

I hope this article has provided the intellectual tools to improve beginners' understanding and professionalism.

This is by no means an exhaustive list; I wanted to concentrate only on the more debatable items, occasionally also covering some die-hard SEO myths.

Do you think I omitted something worth discussing and would like it added? Speak up!