Ask questions relating to GrabzIt's Web Scraper Tool. Such as how to use the web scraper and API to extract data from web pages, images or PDF documents.
Hello! I have tested the tool and it works exactly the way I want. However, I have exceeded my scrape page limit.
I want to purchase more, but is there any way I can work out how many scrapes I need to be able to download my website?
I am working with a time limit, as the website is scheduled to be taken down on the 14th of October. In other words, I need all the scrape pages to be available now and cannot wait for the scrape limit to be reset.
Hi Richard, I am glad it's what you want. The number of pages you need depends on the size of the website; I would get enough to cover it plus, say, 10% extra. If it looks like it is running out, you can always upgrade again and the scrape's allowance will increase. Remember that if you are also taking screenshots of each web page, you will need a matching number of captures. So if you want 5,000 captures, you will need an Entry package, etc.
Hope this helps.
Thank you =)
I am NOT technical. Is there any way of finding out how large my website is? Are we talking MB or number of pages?
I am using the template "Convert a website into linked PDF documents". Does this count as captures or something else?
I can't really answer that, as it depends on how the website was made, but looking at the sitemap might give you a clue, e.g. https://www.google.com/sitemap.xml
It's the number of pages, and as you are also using the capture API, both the scrape pages and the number of captures should match.
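For anyone comfortable with a little scripting, a rough way to estimate the page count is to count the entries in the site's sitemap. A minimal sketch in Python, assuming the sitemap follows the standard sitemaps.org format (the `count_sitemap_urls` helper and the sample XML below are illustrative, not part of GrabzIt's API):

```python
import xml.etree.ElementTree as ET

def count_sitemap_urls(xml_text: str) -> int:
    """Count the <loc> page entries in a sitemap XML document."""
    root = ET.fromstring(xml_text)
    # Sitemap files declare the sitemaps.org namespace on every element.
    ns = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
    return len(root.findall("sm:url/sm:loc", ns))

# Hypothetical two-page sitemap for illustration:
sample = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://example.com/</loc></url>
  <url><loc>https://example.com/about</loc></url>
</urlset>"""
print(count_sitemap_urls(sample))  # → 2
```

In practice you would fetch the live sitemap (e.g. with `urllib.request`) and pass its text to the function; note that large sites often split their sitemap into several index files, so the true total may be spread across them.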
I have now purchased a package to increase both the scraper and capture limit. 😊👍
I then clicked "activate" on my current scrape task but it doesn't seem to have reactivated. Do I need to remove it and create a new one? I was under the impression that I could get this one to continue somehow but I can't work it out.
Thanks for upgrading. It looks like you are already recreating the scrape.
I'm not sure if it's running. It says that the status is "Idle" and that 0 pages are processed. I re-activated it yesterday evening, so I expect that something should have happened by now.
Sorry, the enable/disable function just stops scheduled scrapes from starting. So if you could open the scrape and then press Save again, it should start a fresh scrape run.
Thank you! Now it's up and running again 😊