Release Notes for version 2.0.1.7

Product release notes detail every modification made in the release. Find out what changed in Visual SEO Studio version 2.0.1.7

Visual SEO Studio 2.0.1.7

Published: Saturday, January 4, 2020

This is a maintenance release. Happy 2020!

New Features:

  1. Tabular View, Crawl View and Folder View: for a redirected page, right-clicking on the page row/node now offers two new context menu options: "Go to Redirection URL" (which selects the row/node of the URL pointed to by the Location HTTP header, if available) and "Browse Redirection URL".

Usability / UX:

  1. Unified the notation for the "Start URL" text (all languages).
  2. Changed translation for "Start page" (IT only).
  3. [Win] Options: adjusted all options forms to give them a consistent layout.
  4. [Mac] Options: Google API credentials, the delete popup message is now centered on screen.

Performance:

  1. Crawling: reduced pressure on the crawl queue during multi-threaded crawls, avoiding cases where some URLs were queued more than once only to be discarded afterwards.
  2. [Win] SQLite updated to the latest release (more stable and performant).
  3. [Mac] Avoided unnecessary repaints of the tab control.
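The crawl-queue improvement above (and the related concurrency fixes further down) boils down to making the "have we already seen this URL?" check and the enqueue a single atomic step. The release notes do not show the actual implementation; the following is a minimal Python sketch of that check-before-enqueue pattern, where the `CrawlQueue` class and its method names are hypothetical:

```python
import threading
from queue import Queue


class CrawlQueue:
    """Minimal sketch of a crawl queue that refuses to enqueue a URL
    twice, even when concurrent threads discover it at nearly the same
    time. (Hypothetical names; not the product's actual code.)"""

    def __init__(self):
        self._queue = Queue()
        self._seen = set()               # URLs already queued or crawled
        self._lock = threading.Lock()

    def try_enqueue(self, url):
        # The membership test and the add must happen under one lock,
        # otherwise two threads can both pass the test and then both
        # queue the same URL.
        with self._lock:
            if url in self._seen:
                return False
            self._seen.add(url)
        self._queue.put(url)
        return True
```

With the check and the `add` performed under one lock, two threads that discover the same URL at nearly the same time can no longer both enqueue it, which is the class of duplicate-crawl cases the fixes describe.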

Various:

  1. [Mac] Crawl session panel: in case of a URL list, the Start URL field was shown empty, while the Win version showed the "(URL list)" text.
    The behaviour has now been aligned with the Windows version.

Fixes:

  1. When used to verify web sites, GSC authentication did not complete due to changes in Google's implementation.
  2. The "Find in page" function skipped all non-crawled pages. While this made sense in most cases (searching for title, description, in-text and in-HTML content), one might also want to search non-crawled URLs by a part of the URL. This is now possible.
  3. Crawling: some URLs could be crawled more than once when discovered via HTTP redirection by concurrent threads at nearly the same time.
  4. Crawling: due to some corner-case concurrency issues, some non-crawled items were occasionally stored more than once (and showed up in Tabular View).
  5. Crawling: some URLs could be crawled more than once when discovered by concurrent threads at nearly the same time.
  6. Crawling, workaround: some silly ASP.NET sites add a 302 redirection to /robots.txt?AspxAutoDetectCookieSupport=1 in order to test for cookie support and generate session-id query string URL parameters instead, thus preventing exploration (unless the site had been verified and robots.txt could be ignored).
    A redirection to the very same robots.txt with any query string parameter is now treated as the option "For robots.txt file, treat a redirection to [other domain]/robots.txt as 'full allow'" does (the option is selected by default).
    Note that robots.txt is being ignored here, but with a rationale: the server is already doing something outside the robots exclusion standard.
  7. Verified Sites: GSC verification did not add properties verified with domain verification. It now adds all four possible properties (with/without www., and http/https, combined).
  8. URLs linked from A tags that looked like normal pages, but were redirected to other resources (e.g. images), were not treated as such.
    This rare corner case led to unpleasant side effects: e.g. in Performance Suggestions they were treated as HTML pages with no scripts, breaking for example histogram data, etc.
  9. [Win] Page Links bottom pane had a hidden-by-default, unused "Symbol" column; it has now been removed.
  10. [Mac] Fixed minor UI glitches: when the tab control headers are hidden (as in the Command Pad), the tab scrollers are now hidden too.
  11. [Mac] Avoided potential crashes (which never occurred to real users) when painting the UI.
  12. [Mac] Setup: minimum macOS version 10.9 is now enforced (it wrongly asked for 10.7 as the minimum).
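The robots.txt workaround in fix 6 hinges on recognising that a redirect target is the very same robots.txt file, differing only by an appended query string. A rough Python sketch of such a check follows; the helper name is hypothetical, and whether the actual product also compares scheme and host this way is an assumption:

```python
from urllib.parse import urlsplit


def is_same_robots_with_query(robots_url, location):
    """Sketch: does `location` point to the very same /robots.txt,
    differing only by a non-empty query string? (Hypothetical helper;
    the product's real logic is not published.)"""
    a, b = urlsplit(robots_url), urlsplit(location)
    return (a.scheme == b.scheme
            and a.netloc == b.netloc
            and a.path == b.path == "/robots.txt"
            and b.query != "")
```

Under this check, the ASP.NET pattern mentioned in the fix (a 302 from /robots.txt to /robots.txt?AspxAutoDetectCookieSupport=1) matches, while a redirection to another host or to a different path does not.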