Release Notes for version 2.2
Product release notes detail every modification made in the release. Find out what changed with Visual SEO Studio version 2.2.
Visual SEO Studio 2.2
Published: Thursday, June 11, 2020
This feature release introduces a state-of-the-art image compressor and makes it easier to audit external broken links.
For more details about release 2.2 "Lena" please read: Visual SEO 2.2 optimize images.
Image optimization tool.
Images are often the heavy payloads that prevent a page from achieving the desired performance. Visual SEO Studio now provides a powerful tool to optimize image size directly within the program.
The "Optimize Image" tool lets you save a lighter version of an image by reducing its pixel dimensions, changing its format and quality level, and lets you visually check the resulting image to see whether it maintains an acceptable perceived quality.
Supported formats for the original image are all those currently supported by the SEO spider:
BMP, TIFF, JPEG, GIF, PNG and WebP.
The "Optimize Image" tool can be invoked from the Images Inspector via the new context menu item "Optimize image...", available in the "IMG tag list" tab main grid, in the "Distinct images" tab grid, and in all tree image views.
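The kind of reduction such a tool performs can be sketched in Python with the Pillow library; the function name and the default values below are illustrative assumptions, not the program's actual implementation:

```python
from io import BytesIO

from PIL import Image  # Pillow


def optimize_image(data: bytes, max_width=1280, quality=80) -> bytes:
    """Save a lighter version of an image: shrink pixel dimensions,
    convert to JPEG, and lower the quality level.
    `max_width` and `quality` are hypothetical defaults."""
    img = Image.open(BytesIO(data))
    if img.width > max_width:  # reduce pixel dimensions, keeping aspect ratio
        img = img.resize((max_width, round(img.height * max_width / img.width)))
    out = BytesIO()
    # Change format and quality level; JPEG is lossy, so the result
    # should be visually checked for acceptable perceived quality.
    img.convert("RGB").save(out, format="JPEG", quality=quality, optimize=True)
    return out.getvalue()
```

The caller would then compare the result against the original, both in byte size and in perceived quality, which is exactly what the tool's visual preview is for.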
Full support for the WebP image format (previously WebP images were crawled and stored, but the viewer was unable to display them).
WebP images crawled and stored with Visual SEO Studio versions 1.9.8 through 2.1.1 can now be displayed and their pixel size recognized, but some reports will not show their pixel size because it should have been inspected at crawl time. New crawl sessions will of course have no such problem, and the pixel size of WebP images will appear in all reports handling them.
Note: in the Mac version the WebP format is only supported on macOS 10.14 or newer; otherwise the program will behave as before: it will crawl WebP images, read their byte size, and store them locally, but it will be unable to read their pixel size or to display them.
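The pixel size lives in the image bytes themselves, which is why it must be inspected at crawl time. As an aside, a WebP file's dimensions can be read from its container header without a full decoder; a minimal Python sketch covering the three WebP variants (an illustration, not the program's code):

```python
def webp_pixel_size(data: bytes):
    """Read (width, height) from a WebP file header.
    Handles the 'VP8 ' (lossy), 'VP8L' (lossless) and 'VP8X'
    (extended) container variants."""
    if data[0:4] != b"RIFF" or data[8:12] != b"WEBP":
        raise ValueError("not a WebP file")
    fourcc = data[12:16]
    if fourcc == b"VP8X":    # extended: 24-bit canvas size minus one
        w = int.from_bytes(data[24:27], "little") + 1
        h = int.from_bytes(data[27:30], "little") + 1
    elif fourcc == b"VP8L":  # lossless: 14-bit sizes packed after 0x2F signature
        b = data[21:25]
        w = (b[0] | ((b[1] & 0x3F) << 8)) + 1
        h = ((b[1] >> 6) | (b[2] << 2) | ((b[3] & 0x0F) << 10)) + 1
    elif fourcc == b"VP8 ":  # lossy: 14-bit sizes after the 0x9D 0x01 0x2A start code
        w = int.from_bytes(data[26:28], "little") & 0x3FFF
        h = int.from_bytes(data[28:30], "little") & 0x3FFF
    else:
        raise ValueError("unknown WebP variant")
    return w, h
```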
New crawl option: "Crawl external links", enabled by default.
The option is available for normal crawl sessions started using the "Crawl a site..." command.
It tells whether the spider should also visit external URLs found in internal links.
Only the linked external pages will be visited; the spider will not go deeper. The main purpose of the option is to let you find broken external links.
Redirected URLs will be followed. External pages will not be taken into account in most analysis reports (Custom Filters being the exception).
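A broken-external-link check of this kind boils down to requesting each external URL once, following redirects, and flagging error responses. A minimal Python sketch using only the standard library (the `fetch` injection point is an assumption for testability, not part of any real API):

```python
from urllib.error import HTTPError, URLError
from urllib.request import Request, urlopen


def check_external_link(url, fetch=urlopen):
    """Visit one external URL (without crawling any deeper) and
    return (status_code, is_broken). urlopen follows redirects
    automatically, matching the option's described behavior."""
    try:
        with fetch(Request(url, method="HEAD")) as resp:
            return resp.status, resp.status >= 400
    except HTTPError as e:
        return e.code, True   # e.g. 404 Not Found: a broken external link
    except URLError:
        return None, True     # DNS failure, connection refused, timeout...
```

A real crawler would of course throttle these requests and fall back to GET for servers that reject HEAD.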
Usability / UX:
- Create Sitemap window: when the user attempted to export an empty Sitemap, the message box shown used to have an Information caption and icon; it now uses a Warning caption and icon.
- Right-side "Content" panel: in the case of an image resource, it now also shows the image type in the bottom fields. This also works for embedded "Data URI" images.
- Crawl URL List: if a URL is an image, the image is now not only visited, its content is also stored.
- Performance Suggestions: robots.txt pages are now excluded from inspection.
- Crawling: increased the number of common image extensions recognized.
- Images Inspector, tab "Distinct images": added "Content Type" column.
- Images Inspector, main grid: added "Content Type" column.
- Images Inspector: new context menu item "Optimize image..." in tab "IMG tag list" main grid.
- Images Inspector: new context menu item "Optimize image..." in the "Distinct images" tab grid.
- Images Inspector: new context menu item "Optimize image..." in the tree image views.
- Uniformed some translations across different parts of the program.
- Added nQuant, Magick.NET and ImageMagick to credits document.
- Crawling: fixed a corner-case issue (detected in only a single real-world instance) where a latency delay in saving the initial crawl session record caused all reports that needed to bulk-inspect the pages' HTML content to discard the first few crawled pages. Thanks to M.M. for reporting the case and sharing the project database with us.
- Links Inspector / Page Links: fixed a false-positive detection of the "Sidebar" link type, caused by a CSS "no-sidebar" class found during DOM traversal.
- [Mac] Fixed a crashing condition on Catalina that occurred when saving a file over an already existing one (this should also fix the reported issues about exporting to Excel/CSV).
- [Mac] Crawl URL List: fixed a crash that occurred when importing a CSV file with empty lines (the OSE format, for example).
- [Mac] Tab controls are now refreshed after being enabled (in most of the site analysis tabs).
- Typos fixed (various languages).