Visual SEO Studio Free Beta

Contents:
  • Accept-Language HTTP header
  • Load and Save Custom Filters
  • Custom Filters even more powerful
  • TTFB (Time To First Byte)
  • H1..H6 Tool window
  • Tons of improvements
  • Conclusion, and What's Next

With the new shortened development cycle - a new update every two to three weeks with small incremental steps - several enhancements appeared without many users noticing them, so I decided to spend a few lines and screenshots describing what's going on with the latest improvements in Visual SEO Studio.

Crawl options: added optional Accept-Language HTTP header

I've added an optional Accept-Language HTTP header to the crawl options to better emulate the new Googlebot behaviour, after BigG recently announced its crawler will occasionally send the Accept-Language HTTP header.

Crawl options: the new Accept-Language optional header

Not many details have been given so far, so I decided to allow only one language at a time (it can be a generic language like "en" or a country-specific one like "en-gb"). If evidence is found of Googlebot using multiple language codes in the header, I will modify the program accordingly.
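
To make it concrete, here is a minimal sketch (in Python, using the requests library; the URL, user agent and language code are just placeholders) of what a crawler request with the header set boils down to:

```python
import requests

# Hypothetical example: fetch a page the way a crawler would,
# announcing a single preferred language via the Accept-Language header.
url = "https://www.example.com/"          # placeholder URL
headers = {
    "User-Agent": "MyCrawler/1.0",        # placeholder user agent
    "Accept-Language": "en-gb",           # one generic ("en") or country-specific ("en-gb") code
}

response = requests.get(url, headers=headers, timeout=10)
print(response.status_code, response.headers.get("Content-Language"))
```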

This is just the first of a whole new set of features dedicated to International SEO Auditing you will see in the future, so stay tuned!

Crawl options: full support for HTTP Authentication

SEOers often need to test a site "nearly ready" for prime time. To avoid it being indexed by search engines or seen by the general public before it's ready, the site is normally protected behind HTTP Authentication.

The software now supports all major HTTP Authentication schemes: Basic, Digest, NTLM and Kerberos; you can optionally even use the network credentials of the currently logged-in user (useful e.g. for testing within a local network).

Crawl options: support to all authentication schemes

I wanted to make it usable: credentials are asked of the user upfront, before crawling, to avoid the scenario where you launch the spider, leave, and return after a few hours only to discover it is stuck waiting for credentials because it found a password-protected area (I expect the common scenario to be the whole site being password protected, but it could also be just a part of it).

You can choose multiple authentication schemes; the safest will be attempted first.
Crawling a password-protected site might take a little longer, because for each protected URL the spider first attempts an anonymous call, then an authenticated one upon a "401 Unauthorized" HTTP response. This prevents exposing credentials or credential tokens when not strictly necessary; in my tests it's much less of a burden than I expected, probably thanks to the HTTP connection being already open.
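
For the curious, here is a rough sketch of the idea in Python with the requests library; it is not the program's actual code, and the URL and credentials are placeholders:

```python
import requests
from requests.auth import HTTPBasicAuth, HTTPDigestAuth

# Hypothetical sketch of the "anonymous first, authenticate on 401" flow.
# Digest is listed before Basic because it does not send the password in clear text.
url = "https://staging.example.com/private/page.html"   # placeholder
auth_attempts = [HTTPDigestAuth("user", "secret"), HTTPBasicAuth("user", "secret")]

with requests.Session() as session:            # reuses the already open HTTP connection
    response = session.get(url, timeout=10)    # anonymous attempt: no credentials exposed
    if response.status_code == 401:            # "401 Unauthorized": the URL is protected
        for auth in auth_attempts:             # safest scheme first
            response = session.get(url, auth=auth, timeout=10)
            if response.status_code != 401:
                break

print(response.status_code)
```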

Please note that while the authentication options are saved with all the other crawl options, so that they can be used to pre-select options when using the "Crawl again..." helper, for security reasons no actual user/password credentials are saved.

Yes, the dialog looks more complex than a simple user/password dialog. I could have simplified it by making the software try all the possible authentication schemes, Basic included, but that would pose a security risk I don't want users to be exposed to without prior warning, so I decided to propose safe defaults and try to better communicate in the UI what the various options are.
This is meant to be an advanced configuration setting, and the user is expected to be aware of all the ins and outs of HTTP authentication.

Load and Save Custom Filters

Since 0.8.25 it is also possible to persist your custom filters. No more editing the same filters every time: they are saved in a dedicated database so that you'll be able to load them with a couple of clicks, saving precious time.

Custom Filters can now be persisted

Custom Filters: even more powerful

New operators were added to Custom Filters, the SEO-oriented query engine powering Visual SEO Studio.
You can now mine information as you never could before with any other tool.

Custom Filters: new operands and operators

You can now query crawled pages by TTFB, Fetch time, page size in bytes, page depth (i.e. link depth from the Home Page), and H1..H6 headings content.

As an example, you can search for all "pages having TTFB greater than 1200 ms, fetch time less than 2000 ms, with more than one H1, with an H2 containing 'widget', and reachable in fewer than 3 clicks from the Home Page".
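
To illustrate the logic behind such a query (this is only a hypothetical sketch over made-up page records, not the tool's own filter syntax):

```python
# Hypothetical crawled-page records; field names are made up for illustration.
pages = [
    {"url": "/a", "ttfb_ms": 1500, "fetch_ms": 1800, "h1_count": 2,
     "h2_texts": ["Blue widgets", "Pricing"], "depth": 2},
    {"url": "/b", "ttfb_ms": 900, "fetch_ms": 1200, "h1_count": 1,
     "h2_texts": ["About us"], "depth": 1},
]

# The example query expressed as plain filter conditions:
matches = [
    p for p in pages
    if p["ttfb_ms"] > 1200                                    # TTFB greater than 1200 ms
    and p["fetch_ms"] < 2000                                  # fetch time less than 2000 ms
    and p["h1_count"] > 1                                     # more than one H1
    and any("widget" in h2.lower() for h2 in p["h2_texts"])   # an H2 containing 'widget'
    and p["depth"] < 3                                        # fewer than 3 clicks from Home
]

print([p["url"] for p in matches])  # -> ['/a']
```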

Custom Filters are a great strength of the product; you'll see more here in the near future, both in terms of expressive power and usability.

TTFB (Time to First Byte)

Yes, in case you didn't notice, since ver. 0.8.25 Visual SEO Studio keeps track of TTFB (Time to First Byte); I believe it's the first SEO tool to actually do it. Now with 0.8.26 users can also create custom queries to filter the crawled data set on it.

Visual SEO Studio at present tracks two important timings: TTFB and Fetch Time.

  • TTFB is the time span between the start of the HTTP request and the moment the spider receives the first content byte.
  • Fetch time is the time span between the start of the HTTP request and the moment the spider receives the last content byte.

The two timings are reported in milliseconds, and both can be queried using the Custom Filters query engine.
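
For illustration only, the two measurements can be approximated like this in Python with the requests library (a simplified sketch, not the program's actual code; the URL is a placeholder):

```python
import time
import requests

# Simplified sketch: approximate TTFB and Fetch time for a single URL.
url = "https://www.example.com/"   # placeholder
start = time.perf_counter()

with requests.get(url, stream=True, timeout=10) as response:
    chunks = response.iter_content(chunk_size=1024)
    next(chunks, b"")                                  # first content bytes received
    ttfb_ms = (time.perf_counter() - start) * 1000.0   # ~ Time To First Byte
    for _ in chunks:                                   # keep reading until the last byte
        pass
    fetch_ms = (time.perf_counter() - start) * 1000.0  # ~ Fetch time

print(f"TTFB: {ttfb_ms:.0f} ms, Fetch time: {fetch_ms:.0f} ms")
```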

H1..H6 Tool window

Spotting all the H1..H6 headings used in a page takes no time: since version 0.8.25 there is a cool new tool window that tells you all about the page headings:

H1..H6 headings window

and since version 0.8.26 you can also filter pages based on their headings using the operators of the powerful Custom Filters query engine!

Tons of improvements

"Crawl Again..." option

A real time saver: you can use it from either the context menu or the toolbar in the Manage Session window.
A "New Crawl" options dialog will appear with all the crawl options (except the session name, to permit differentiating) already populated exactly as the ones used for the selected crawl session.

"Crawl Again..." command

The command also works for Sitemap Audit crawl sessions.

Ignore path when crawling

The crawl option "spider trap path", the path to be ignored when crawling a site, is now available for all sites.
Until now the option was available only for administered sites, in case you decided to ignore robots.txt Disallow: directives, as it is a common practice to add a trap path to detect bots not respecting the Robots Exclusion Protocol.

Nevertheless, it was a common request to be able to segment a crawl by excluding some directories, and ignoring a path cannot harm a site. The option might be renamed and extended in the near future to accept more than one path.
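
Conceptually the option boils down to a simple prefix check on each discovered URL; here is a hypothetical sketch (the path and URLs are made up):

```python
from urllib.parse import urlparse

# Hypothetical example: skip any URL whose path falls under the ignored path.
IGNORED_PATH = "/bot-trap/"   # e.g. a spider-trap directory, or a section to exclude

def should_crawl(url: str) -> bool:
    """Return False for URLs falling under the ignored path."""
    return not urlparse(url).path.startswith(IGNORED_PATH)

print(should_crawl("https://www.example.com/bot-trap/page.html"))    # False
print(should_crawl("https://www.example.com/products/widget.html"))  # True
```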

Rename and delete projects

A long-requested feature: since 0.8.24 you can finally administer your project files by renaming or deleting them.


Load / Rename / Delete projects

A lot of effort has been put into making it straightforward without losing the user experience you have already learnt.

Rename and inspect crawl sessions

In the same manner, since 0.8.24 you can now rename your crawl sessions within a single project, to better help you organize your work.

Renaming a Crawl Session

A handy right-hand pane lets you inspect each crawl session without having to open it, and also reports all the crawl options used to produce it.

Page Links and Headers: line and position

Since ver. 0.8.24 the Page Links window also exhibits the line and the character position within the line of each link in the HTML code. Similar information is also provided in the H1..H6 window.
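
To give an idea of where that information comes from: an HTML parser can report the line and column of every start tag it encounters. A minimal sketch with Python's standard library (the sample HTML is made up):

```python
from html.parser import HTMLParser

# Minimal sketch: report line and column of each link and heading start tag.
class PositionParser(HTMLParser):
    def handle_starttag(self, tag, attrs):
        if tag in ("a", "h1", "h2", "h3", "h4", "h5", "h6"):
            line, col = self.getpos()   # position of the tag in the source HTML
            print(f"<{tag}> at line {line}, column {col + 1}")

sample_html = "<html><body>\n<h1>Title</h1>\n<p>Text <a href='/page'>link</a></p>\n</body></html>"
PositionParser().feed(sample_html)
```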

Page Links new attributes: title, target, line and position

This opens the door to nice future goodies which are already under preparation.

Better description of HTTP Status codes

Since version 0.8.24, the English descriptions of HTTP Status Codes have been extended to cover all known statuses in use, and a better graphical representation is available for some special cases.
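
As a rough illustration of the kind of mapping involved (a hypothetical sketch, not the program's own table; the descriptions are paraphrased from the HTTP specifications):

```python
# Hypothetical sketch: a few less common statuses and their English descriptions.
STATUS_DESCRIPTIONS = {
    307: "Temporary Redirect: the resource is temporarily elsewhere, the method must not change",
    410: "Gone: the resource was intentionally removed and has no forwarding address",
    429: "Too Many Requests: the client sent too many requests in a given amount of time",
    503: "Service Unavailable: the server is temporarily overloaded or down for maintenance",
}

def describe(status_code: int) -> str:
    return STATUS_DESCRIPTIONS.get(status_code, "Unknown or unregistered status code")

print(describe(410))  # Gone: the resource was intentionally removed ...
```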

English description of HTTP Status extended to all known cases

Performance

Not only does the program now start faster: since 0.8.24 the software also has a greatly reduced memory footprint in case of large crawls. There is still a lot of room for improvement here, which is currently being addressed in development, so stay tuned.

...and yet another thing

Those highlighted here are only the most prominent changes of the last three releases; many other minor changes, fixes, and UI improvements were added.
For a full and boring list, please consult the official Release Notes.

Conclusions

As I said, I've shortened the release cycle, and development is working at full steam toward a complete 1.0 version. 
There are many areas where the product needs to improve, and I'm confident you will appreciate the progress.
A special thanks goes to all the private beta testers whose feedback has been invaluable, and to the users who reported issues or gave precious suggestions; much appreciated!

The Free Beta version is stable and has no limitations, and no registration is required, so don't waste time:
click on the download button, install it - it takes no time - and start auditing!