Manual: Manage Crawl Sessions
The "Manage Crawl Sessions" feature of Visual SEO Studio, documented in detail.
Manage Crawl Sessions
A project can contain several Crawl Sessions, i.e. the stored data resulting from the spider visiting websites.
The table lists all the Crawl Sessions present within the active project.
From this window you can administer the sessions of the current project: launch reports and analyses against their data, rename them, and delete them.
Clicking a session row selects it, enabling you to perform operations on it. The selection is mirrored in the Session panel on the right side, which shows all the details of the selected crawl session.
The number of crawl sessions within the project.
Shortcuts to features
At the top of the window is a toolbar with several buttons to perform common tasks.
First of all, a button to start a new crawl session:
Start a new crawl session
Opens the Crawl a Site... options dialog; you can also open it from the Command Pad on the left side, or from the program main menu.
When a crawl session row is selected, you can launch a report or an analysis against its data.
This is quicker than using the left-side Command Pad when the active project contains more than one session, because you won't have to specify the crawl session each time.
Opens the Crawl a Site... options dialog, with all crawl parameters (except the session name) set equal to the ones used for the selected crawl session.
- Crawl View
- Folder View
- Tabular View
- HTML suggestions
- URL suggestions
- Images Inspector
- Links Inspector
- Performance suggestions
- hreflang Analysis
- GA suggestions
- Readability Analysis
- Data Extraction
- Custom Filters
- Create new Sitemap
- View session robots.txt
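The session robots.txt is the copy of the robots.txt file the site served at crawl time. As a reminder of the format, a typical robots.txt looks like this (the paths and sitemap URL here are purely illustrative):

```
User-agent: *
Disallow: /private/
Sitemap: https://example.com/sitemap.xml
```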
You can also perform some purely administrative tasks on the selected crawl session:
- Rename crawl session
- Remove crawl session
All these options are also available via context menu when you right-click on a table row.
Each crawl session is uniquely identified by an auto-assigned, progressive ID number.
You can give your sessions an optional descriptive name. The name can be assigned when choosing the crawl parameters, or at a later time.
The address from where the spider started visiting the website. When starting a new exploration you will typically enter the website Home Page, usually the "root" address. For explorations of lists of URLs the field is not populated.
The type of exploration. At first you will normally perform only explorations of type "Link Search", i.e. use the spider to explore a website starting from the root address, following all the links it finds. With more advanced use you might also audit XML Sitemaps, or explore lists of URLs.
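Visual SEO Studio performs "Link Search" exploration internally, so there is nothing to script; still, the idea behind it can be illustrated with a minimal breadth-first sketch. Everything below is hypothetical illustration (the in-memory `SITE` dictionary stands in for real HTTP fetches), not the program's actual implementation:

```python
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin

# Hypothetical in-memory "site": URL -> HTML body, standing in for HTTP fetches.
SITE = {
    "https://example.com/": '<a href="/about">About</a> <a href="/contact">Contact</a>',
    "https://example.com/about": '<a href="/">Home</a>',
    "https://example.com/contact": '<a href="/about">About</a>',
}

class LinkExtractor(HTMLParser):
    """Collects the absolute URLs of all <a href> links found in a page."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    # Resolve relative links against the page's own URL.
                    self.links.append(urljoin(self.base_url, value))

def link_search(start_url, fetch):
    """Breadth-first crawl: start from the root address, follow every link found."""
    visited = []
    seen = {start_url}
    queue = deque([start_url])
    while queue:
        url = queue.popleft()
        html = fetch(url)
        if html is None:
            continue  # unreachable page; skip it
        visited.append(url)
        parser = LinkExtractor(url)
        parser.feed(html)
        for link in parser.links:
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return visited

pages = link_search("https://example.com/", SITE.get)
print(pages)
```

The session's "pages visited" count corresponds to the length of such a visited list: each page is fetched once, no matter how many links point to it.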
The number of web pages visited during the crawl session. Only web pages are counted; HTTP requests for images and other resources are not.
Date and time when the crawl session was launched.
Date and time when the crawl session completed. When the crawl session is still in progress, this field is not populated.
The time it took to complete the crawl session. When the crawl session is still in progress, this field is not populated.
The reason why the crawl session ended. Normally it completes because all the links found were visited, but it could also have been stopped by the user, or halted for other reasons.
The domain name of the Start URL. For explorations of lists of URLs the field is not populated.