
How to download a website and all its content?


There are situations when it is important to download an entire website, not just the finished result, but all of the HTML of its web pages along with resources such as CSS, scripts and images.

This may be because you want a backup of the code but can no longer get to the original source for some reason. Or perhaps you want a detailed record of how a website has changed over time.

Fortunately, GrabzIt's Web Scraper can achieve this by crawling all the web pages on a website. On each web page the scraper downloads the HTML along with any resources referenced on the page.
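As an illustration, here is a minimal sketch of what such a scraper does on each page: fetch the HTML, find the resources it references, then download each one. It uses only Python's standard library; a real crawler would also follow links between pages, handle errors and respect robots.txt, and the example.com URL is just a placeholder.

```python
# A minimal sketch of the per-page work a scraper performs: fetch the
# HTML, collect referenced resources, download each one.
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class ResourceFinder(HTMLParser):
    """Collect the URLs of images, scripts and stylesheets on a page."""
    def __init__(self):
        super().__init__()
        self.resources = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag in ("img", "script") and attrs.get("src"):
            self.resources.append(attrs["src"])
        elif tag == "link" and attrs.get("href"):
            self.resources.append(attrs["href"])

page_url = "https://example.com/"  # placeholder target URL
html = urlopen(page_url).read().decode("utf-8", errors="replace")

finder = ResourceFinder()
finder.feed(html)

for ref in finder.resources:
    absolute = urljoin(page_url, ref)  # resolve relative references
    data = urlopen(absolute).read()    # download the resource bytes
    print(f"Downloaded {len(data)} bytes from {absolute}")
```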

Create a Scrape to Download an Entire Website

To make downloading your website as easy as possible, GrabzIt provides a scrape template.

To get started, load this template.

Enter your Target URL; the system automatically checks this URL for errors. Keep the Automatically Start Scrape checkbox ticked, and your scrape will start automatically.

Customizing your Scrape

To alter the template, uncheck the Automatically Start Scrape checkbox. One useful alteration is to run the scrape on a regular schedule, for instance to create regular copies of a website.

On the Schedule Scrape tab, simply tick the Repeat Scrape checkbox and then select how frequently you want the scrape to repeat. Then click the Update button to start the scrape.

Using your Downloaded Website

Once the web scrape has finished you can download it as a ZIP file. Unzip the file to find a folder named Files containing all the downloaded web pages and website resources. There will also be a special HTML page called data.html in the root of the directory. Open this file in a web browser and you will find an HTML table with three columns.

This file helps you map the new filenames to their old locations. This is necessary because a URL cannot directly map to a file structure: a URL can exceed the length limit for a file path.

Also, there can be many permutations of a URL, especially when a web page can represent many different pieces of content by varying its query string parameters. So instead we store the website in a flat structure in the Files folder and provide the data.html file to map these files back to the original structure.

Of course, because of this you can't open a downloaded HTML page and expect to see the web page as you saw it on the web. To do that you would need to rewrite the paths of the image, script and CSS resources found in the HTML file, so that the web browser can find them in your local file structure. You need to do this to view a website offline.
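The sketch below shows one possible approach, assuming you have already built a dictionary mapping original URLs to the flat filenames in the Files folder (see the Website.csv example further down). The simple regular expression only handles double-quoted src and href attributes; a robust rewrite would use a proper HTML parser.

```python
# A minimal sketch of rewriting resource paths for offline viewing,
# assuming url_to_file maps original URLs to the flat filenames in
# the Files folder.
import re

def rewrite_paths(html, url_to_file):
    """Replace src/href values that match a known original URL."""
    def replace(match):
        attr, url = match.group(1), match.group(2)
        local = url_to_file.get(url)
        return f'{attr}="{local}"' if local else match.group(0)

    return re.sub(r'\b(src|href)="([^"]+)"', replace, html)

# Hypothetical usage with one mapped stylesheet:
mapping = {"https://example.com/style.css": "Files/0001.css"}
page = '<link rel="stylesheet" href="https://example.com/style.css">'
print(rewrite_paths(page, mapping))
# -> <link rel="stylesheet" href="Files/0001.css">
```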

In the root of the ZIP file there is another file called Website.csv. This contains exactly the same information as the data.html file.

However, you can use this file if you want to read and process the website download programmatically, perhaps using the mapping from the URLs to the files to recreate the downloaded website.
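For example, the following sketch reads Website.csv into a dictionary of original URL to local filename. The header row and column order are assumptions; inspect the actual file in your download before relying on them.

```python
# A minimal sketch of reading Website.csv into a dict of original URL
# to local filename.
import csv

def load_mapping(csv_path="Website.csv"):
    """Return a dict mapping each original URL to its local filename."""
    mapping = {}
    with open(csv_path, newline="", encoding="utf-8") as f:
        reader = csv.reader(f)
        next(reader)  # skip the assumed header row
        for row in reader:
            url, filename = row[0], row[1]  # assumed column positions
            mapping[url] = filename
    return mapping

for url, filename in load_mapping().items():
    print(f"{url} was saved as {filename}")
```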