Web Scraper API for Python

First of all, download the Web Scraper API for Python and look at the example handler located inside it to get started.

Process Scraped Data

The easiest way to process scraped data is to access it as a JSON or XML object, as this enables the data to be easily manipulated and queried. The JSON is structured in the following general format, with the dataset name as the object attribute, itself containing an array of objects where each column name is another attribute.

{
  "Dataset_Name": [
    {
      "Column_One": "https://grabz.it/",
      "Column_Two": "Found"
    },
    {
      "Column_One": "http://dfadsdsa.com/",
      "Column_Two": "Missing"
    }
  ]
}

First of all, it must be remembered that the handler will be sent all scraped data, which may include data that cannot be converted to JSON or XML objects. Therefore the type of data being received must be checked before it is processed.

scrapeResult = ScrapeResult.ScrapeResult()

if scrapeResult.getExtension() == 'json':
    json = scrapeResult.toJSON()
    for obj in json["Dataset_Name"]:
        if obj["Column_Two"] == "Found":
            #do something
            pass
        else:
            #do something else
            pass
else:
    #probably a binary file etc. so save it
    scrapeResult.save("results/" + scrapeResult.getFilename())

The above example shows how to loop through all the results of the Dataset_Name dataset and perform specific actions depending on the value of the Column_Two attribute. If the file received by the handler is not a JSON file, it is simply saved to the results directory. While the ScrapeResult class does attempt to ensure that all posted files originate from GrabzIt's servers, the extension of each file should also be checked before it is saved.
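
For example, a simple whitelist check before saving guards against unexpected file types; the allowed extensions below are an assumption and should be adjusted to whatever file types your scrape actually produces.

#extensions this scrape is expected to produce (assumed list, adjust as needed)
allowedExtensions = ['json', 'xml', 'csv', 'jpg', 'png']

if scrapeResult.getExtension() in allowedExtensions:
    scrapeResult.save("results/" + scrapeResult.getFilename())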

ScrapeResult Methods

Listed below are all of the methods of the ScrapeResult class that can be used to process scrape results.
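
As a quick orientation before the full reference, the sketch below dispatches on the result's extension. Note that the toXML method is an assumption, inferred by analogy with toJSON from the XML option mentioned earlier, so confirm it against the method list.

#assuming the ScrapeResult import used by the example handler
from GrabzIt import ScrapeResult

scrapeResult = ScrapeResult.ScrapeResult()

if scrapeResult.getExtension() == 'json':
    data = scrapeResult.toJSON()
elif scrapeResult.getExtension() == 'xml':
    #toXML is assumed here by analogy with toJSON
    data = scrapeResult.toXML()
else:
    scrapeResult.save("results/" + scrapeResult.getFilename())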

Debugging

The best way to debug your Python handler is to download the results for a scrape from the web scrapes page, save the file you are having an issue with to an accessible location and then pass the path of that file to the constructor of the ScrapeResult class. This allows you to debug your handler without having to perform a new scrape each time, as shown below.

scrapeResult = ScrapeResult.ScrapeResult("data.json")

#the rest of your handler code remains the same
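
Putting this together, a minimal debugging harness might look like the following; the data.json file name and the Dataset_Name dataset are carried over from the earlier examples.

#assuming the ScrapeResult import used by the example handler
from GrabzIt import ScrapeResult

#load a previously downloaded result instead of waiting for a live scrape
scrapeResult = ScrapeResult.ScrapeResult("data.json")

if scrapeResult.getExtension() == 'json':
    json = scrapeResult.toJSON()
    for obj in json["Dataset_Name"]:
        #print each row so its structure can be inspected
        print(obj["Column_One"], obj["Column_Two"])
else:
    scrapeResult.save("results/" + scrapeResult.getFilename())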

Controlling a Scrape

With GrabzIt's Web Scraper API for Python you can remotely start, stop, enable or disable a scrape as needed. This is shown in the example below, where the ID of the scrape along with the new scrape status is passed to the SetScrapeStatus method.

client = GrabzItScrapeClient.GrabzItScrapeClient("Sign in to view your Application Key", "Sign in to view your Application Secret")
#Get all of our scrapes
myScrapes = client.GetScrapes()
if len(myScrapes) == 0:
    raise Exception('You have not created any scrapes yet! Create one here: https://grabz.it/scraper/scrape/')
#Start the first scrape
client.SetScrapeStatus(myScrapes[0].ID, "Start")
if len(myScrapes[0].Results) > 0:
    #re-send the first scrape result if it exists
    client.SendResult(myScrapes[0].ID, myScrapes[0].Results[0].ID)
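
The same method should accept the other states mentioned above; the status strings besides "Start" are assumptions mirroring that example, so verify them against the method reference below.

#stop a running scrape (status string assumed)
client.SetScrapeStatus(myScrapes[0].ID, "Stop")

#disable a scrape so it no longer runs (status string assumed)
client.SetScrapeStatus(myScrapes[0].ID, "Disable")

#enable it again later (status string assumed)
client.SetScrapeStatus(myScrapes[0].ID, "Enable")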

GrabzItScrapeClient Methods and Properties

Listed below are all of the methods and properties of the GrabzItScrapeClient class that can be used to control the state of scrapes.
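
As a combined illustration, restricted to the members that appear above (GetScrapes, SetScrapeStatus and SendResult, plus the ID and Results properties), a small utility that re-sends the first stored result of every scrape might look like this.

#assuming the GrabzItScrapeClient import used by the example handler
from GrabzIt import GrabzItScrapeClient

client = GrabzItScrapeClient.GrabzItScrapeClient("APP KEY", "APP SECRET")

for scrape in client.GetScrapes():
    if len(scrape.Results) > 0:
        #re-send the first stored result for this scrape
        client.SendResult(scrape.ID, scrape.Results[0].ID)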