Spider Web Scraping



Scrapy is a popular open-source Python framework for writing scalable web scrapers. In this tutorial, we’ll take you step by step through using Scrapy to gather a list of Oscar-winning movies from Wikipedia.

Web scraping often goes hand in hand with web crawling. A crawler (also known as a robot, spider, or bot) is a program that indexes websites by following links. Starting from a set of initial URLs, it requests each one and takes note of every hyperlink contained in the response. It then follows those hyperlinks and repeats the process.

Web scraping is a way to grab data from websites without needing access to APIs or the website’s database. You only need access to the site’s data — as long as your browser can access the data, you will be able to scrape it.

Realistically, most of the time you could just go through a website manually and grab the data ‘by hand’ using copy and paste, but in a lot of cases that would take you many hours of manual work, which could end up costing you a lot more than the data is worth, especially if you’ve hired someone to do the task for you. Why hire someone to work at 1–2 minutes per query when you can get a program to perform a query automatically every few seconds?

For example, let’s say that you wish to compile a list of the Oscar winners for best picture, along with their director, starring actors, release date, and run time. Using Google, you can see there are several sites that will list these movies by name, and maybe some additional information, but generally you’ll have to follow through with links to capture all the information you want.

Obviously, it would be impractical and time-consuming to go through every link from 1927 through to today and manually try to find the information through each page. With web scraping, we just need to find a website with pages that have all this information, and then point our program in the right direction with the right instructions.

In this tutorial, we will use Wikipedia as our website, since it contains all the information we need, and Scrapy, a Python framework, as our tool to scrape the information.

A few caveats before we begin:

Data scraping involves increasing the server load for the site that you’re scraping, which means a higher cost for the companies hosting the site and a lower quality experience for other users of that site. The quality of the server that is running the website, the amount of data you’re trying to obtain, and the rate at which you’re sending requests to the server will moderate the effect you have on the server. Keeping this in mind, we need to make sure that we stick to a few rules.

Most sites also have a file called robots.txt in their main directory. This file sets out rules for which directories the site does not want scrapers to access. A website’s Terms & Conditions page will usually let you know what its policy on data scraping is. For example, IMDB’s conditions page has the following clause:

Robots and Screen Scraping: You may not use data mining, robots, screen scraping, or similar data gathering and extraction tools on this site, except with our express written consent as noted below.

Before we try to obtain a website’s data, we should always check the website’s terms and robots.txt to make sure we are allowed to collect it. When building our scrapers, we also need to make sure that we do not overwhelm a server with requests it can’t handle.

Luckily, many websites recognize the need for users to obtain data, and they make the data available through APIs. If these are available, it’s usually a much easier experience to obtain data through the API than through scraping.

Wikipedia allows data scraping, as long as the bots aren’t going ‘way too fast’, as specified in their robots.txt. They also provide downloadable datasets so people can process the data on their own machines. If we go too fast, the servers will automatically block our IP, so we’ll implement timers in order to keep within their rules.

Getting Started, Installing Relevant Libraries Using Pip

First of all, to start off, let’s install Scrapy.

Windows

Install the latest version of Python from https://www.python.org/downloads/windows/

Note: Windows users will also need Microsoft Visual C++ 14.0, which you can get from the “Microsoft Visual C++ Build Tools” download.

You’ll also want to make sure you have the latest version of pip.

In cmd.exe, type in:
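For example, assuming Python and pip are already on your PATH, something like:

    python -m pip install --upgrade pip
    pip install Scrapy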

This will install Scrapy and all the dependencies automatically.

Linux

First you’ll want to install all the dependencies:

In Terminal, enter:
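On a Debian/Ubuntu-based system the dependency list looks roughly like this (package names may differ on other distributions):

    sudo apt-get install python3 python3-dev python3-pip libxml2-dev libxslt1-dev zlib1g-dev libffi-dev libssl-dev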

Once that’s all installed, just type in:
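A typical command (assuming pip3 is the pip tied to your Python 3 installation):

    pip3 install --upgrade pip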

To make sure pip is updated, and then:
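Something like:

    pip3 install Scrapy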

And it’s all done.

Mac

First you’ll need to make sure you have a c-compiler on your system. In Terminal, enter:
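The usual way to get a compiler toolchain on macOS is to install Apple’s command-line tools:

    xcode-select --install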

After that, install homebrew from https://brew.sh/.

Update your PATH variable so that homebrew packages are used before system packages:
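For example, assuming you use bash (zsh users would edit ~/.zshrc instead):

    echo "export PATH=/usr/local/bin:/usr/local/sbin:$PATH" >> ~/.bashrc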

Install Python:
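With homebrew set up, that is simply:

    brew install python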

And then make sure everything is updated:
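For example:

    brew update
    brew upgrade python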

After that’s done, just install Scrapy using pip:
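Assuming pip now points at the homebrew Python:

    pip install Scrapy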


Overview Of Scrapy, How The Pieces Fit Together, Parsers, Spiders, Etc


You will be writing a script called a ‘Spider’ for Scrapy to run, but don’t worry, Scrapy spiders aren’t scary at all despite their name. The only similarity between Scrapy spiders and real spiders is that they like to crawl on the web.

Inside the spider is a class that you define that tells Scrapy what to do. For example, where to start crawling, the types of requests it makes, how to follow links on pages, and how it parses data. You can even add custom functions to process data as well, before outputting back into a file.

Writing Your First Spider, Write A Simple Spider To Allow For Hands-on Learning

To start our first spider, we need to first create a Scrapy project. To do this, enter this into your command line:
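Since the rest of this tutorial uses a project folder named oscars, a command like this fits:

    scrapy startproject oscars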

This will create a folder with your project.

We’ll start with a basic spider. The following code is to be entered into a Python script. Open a new Python script in /oscars/spiders and name it oscars_spider.py.

We’ll import Scrapy.
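At the top of the file:

    import scrapy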

We then start defining our Spider class. First, we set the name and then the domains that the spider is allowed to scrape. Finally, we tell the spider where to start scraping from.
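A sketch of that class; the spider name, the allowed domain, and the exact start URL (here, the Wikipedia page listing Best Picture winners) are choices you can adjust:

    class OscarsSpider(scrapy.Spider):
        name = "oscars"
        allowed_domains = ["en.wikipedia.org"]
        start_urls = ['https://en.wikipedia.org/wiki/Academy_Award_for_Best_Picture']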

Next, we need a function which will capture the information that we want. For now, we’ll just grab the page title. We use CSS to find the tag which carries the title text, and then we extract it. Finally, we return the information back to Scrapy to be logged or written to a file.
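Inside the same class, the callback could look roughly like this (the title::text selector grabs the text of the page’s <title> tag):

        def parse(self, response):
            data = {}
            # Extract the page title text and hand the result back to Scrapy
            data['title'] = response.css('title::text').extract()
            yield data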

Now save the code in /oscars/spiders/oscars_spider.py

To run this spider, simply go to your command line and type:
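Using the project and spider name from above:

    scrapy crawl oscars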

You should see an output like this:

Congratulations, you’ve built your first basic Scrapy scraper!

Full code:
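Putting the pieces above together, a minimal version might read:

    import scrapy


    class OscarsSpider(scrapy.Spider):
        name = "oscars"
        allowed_domains = ["en.wikipedia.org"]
        start_urls = ['https://en.wikipedia.org/wiki/Academy_Award_for_Best_Picture']

        def parse(self, response):
            data = {}
            data['title'] = response.css('title::text').extract()
            yield data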

Obviously, we want it to do a little bit more, so let’s look into how to use Scrapy to parse data.

First, let’s get familiar with the Scrapy shell. The Scrapy shell can help you test your code to make sure that Scrapy is grabbing the data you want.

To access the shell, enter this into your command line:
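For example, pointing it at the same Wikipedia page used above:

    scrapy shell "https://en.wikipedia.org/wiki/Academy_Award_for_Best_Picture"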

This will basically open the page that you’ve directed it to and it will let you run single lines of code. For example, you can view the raw HTML of the page by typing in:
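Inside the shell:

    print(response.text)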

Or open the page in your default browser by typing in:
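Scrapy’s shell provides a view shortcut for this:

    view(response)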

Our goal here is to find the code that contains the information that we want. For now, let’s try to grab the movie title names only.

The easiest way to find the code we need is by opening the page in our browser and inspecting the code. In this example, I am using Chrome DevTools. Just right-click on any movie title and select ‘inspect’:

As you can see, the Oscar winners have a yellow background while the nominees have a plain background. There’s also a link to the article about the movie title, and the links for movies end in film). Now that we know this, we can use a CSS selector to grab the data. In the Scrapy shell, type in:
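A selector along these lines should return the winners (the background colour value and the film) suffix are assumptions about how the winning rows are currently marked up, so verify them in DevTools first):

    response.css("tr[style='background:#FAEB86'] a[href*='film)']::text").extract()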

As you can see, you now have a list of all the Oscar Best Picture Winners!

Going back to our main goal, we want a list of the Oscar winners for best picture, along with their director, starring actors, release date, and run time. To do this, we need Scrapy to grab data from each of those movie pages.

We’ll have to rewrite a few things and add a new function, but don’t worry, it’s pretty straightforward.

We’ll start by initiating the scraper the same way as before.

But this time, two things will change. First, we’ll import time along with scrapy because we want to create a timer to restrict how fast the bot scrapes. Also, when we parse the pages the first time, we want to only get a list of the links to each title, so we can grab information off those pages instead.

Here we make a loop that looks for every link on the page that ends in film) and has the yellow background, joins those links into full URLs, and sends each one to the function parse_titles to be processed further. We also slip in a timer so that it only requests pages every 5 seconds. Remember, we can use the Scrapy shell to test our response.css fields to make sure we’re getting the correct data!
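A sketch of the revised spider up to this point (the row selector is the same assumption as before, and time.sleep(5) is the simple timer described here; Scrapy’s DOWNLOAD_DELAY setting is the more idiomatic way to throttle requests):

    import scrapy
    import time


    class OscarsSpider(scrapy.Spider):
        name = "oscars"
        allowed_domains = ["en.wikipedia.org"]
        start_urls = ['https://en.wikipedia.org/wiki/Academy_Award_for_Best_Picture']

        def parse(self, response):
            # Follow only the winners: yellow rows whose links end in "film)"
            for href in response.css("tr[style='background:#FAEB86'] a[href*='film)']::attr(href)").extract():
                url = response.urljoin(href)
                req = scrapy.Request(url, callback=self.parse_titles)
                time.sleep(5)  # wait 5 seconds between requests
                yield req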

The real work gets done in our parse_titles function, where we create a dictionary called data and then fill each key with the information we want. Again, all these selectors were found using Chrome DevTools as demonstrated before and then tested with the Scrapy shell.

The final line returns the data dictionary back to Scrapy to store.

Complete code:
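A complete sketch, combining the steps above (all of the infobox selectors are assumptions about Wikipedia’s current markup, exactly the kind of thing to verify in the Scrapy shell):

    import scrapy
    import time


    class OscarsSpider(scrapy.Spider):
        name = "oscars"
        allowed_domains = ["en.wikipedia.org"]
        start_urls = ['https://en.wikipedia.org/wiki/Academy_Award_for_Best_Picture']

        def parse(self, response):
            for href in response.css("tr[style='background:#FAEB86'] a[href*='film)']::attr(href)").extract():
                url = response.urljoin(href)
                req = scrapy.Request(url, callback=self.parse_titles)
                time.sleep(5)
                yield req

        def parse_titles(self, response):
            data = {}
            data['title'] = response.css("title::text").extract()
            data['director'] = response.css("table.infobox tr:contains('Directed by') a::text").extract()
            data['starring'] = response.css("table.infobox tr:contains('Starring') a::text").extract()
            data['releasedate'] = response.css("table.infobox tr:contains('Release date') li::text").extract()
            data['runtime'] = response.css("table.infobox tr:contains('Running time') td::text").extract()
            yield data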

Sometimes we will want to use proxies as websites will try to block our attempts at scraping.

To do this, we only need to change a few things. Using our example, in our def parse(), we need to change it to the following:
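For example, the Request in parse() can be given a meta entry that Scrapy’s built-in HttpProxyMiddleware understands (the proxy address here is only a placeholder):

    req = scrapy.Request(url, callback=self.parse_titles,
                         meta={'proxy': 'http://yourproxy.example.com:80'})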

This will route the requests through your proxy server.

Deployment And Logging, Show How To Actually Manage A Spider In Production

Now it is time to run our spider. To make Scrapy start scraping and then output to a CSV file, enter the following into your command prompt:
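Using the spider name from above and Scrapy’s -o output flag:

    scrapy crawl oscars -o oscars.csv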

You will see a large output, and after a couple of minutes, it will complete and you will have a CSV file sitting in your project folder.

Compiling Results, Show How To Use The Results Compiled In The Previous Steps

When you open the CSV file, you will see all the information we wanted (sorted out by columns with headings). It’s really that simple.

With data scraping, we can obtain almost any custom dataset that we want, as long as the information is publicly available. What you want to do with this data is up to you. This skill is extremely useful for doing market research, keeping information on a website updated, and many other things.

It’s fairly easy to set up your own web scraper to obtain custom datasets on your own, however, always remember that there might be other ways to obtain the data that you need. Businesses invest a lot into providing the data that you want, so it’s only fair that we respect their terms and conditions.

Additional Resources For Learning More About Scrapy And Web Scraping In General

  • “The 10 Best Data Scraping Tools and Web Scraping Tools,” Scraper API
  • “5 Tips For Web Scraping Without Getting Blocked or Blacklisted,” Scraper API
  • Parsel, a Python library for extracting data from HTML using XPath, CSS selectors, and regular expressions.

Introduction


Before reading this, please read the warnings in my earlier post, Learning Python: Web Scraping.

Unlike Beautiful Soup or Scrapy, pyspider is a powerful spider (web crawler) system in Python:

  • Write script in Python
  • Powerful WebUI with script editor, task monitor, project manager and result viewer
  • MySQL, MongoDB, Redis, SQLite, Elasticsearch; PostgreSQL with SQLAlchemy as database backend
  • RabbitMQ, Beanstalk, Redis and Kombu as message queue
  • Task priority, retry, periodic crawling, recrawl by age, etc.
  • Distributed architecture, JavaScript page crawling, Python 2 & 3 support, etc.

Installation and Start

Use pip to install it:
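For example:

    pip install pyspider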

Start it with the command below or run the run.py in the module directory:
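Running the bare command starts all of the components (scheduler, fetcher, processor and the web UI):

    pyspider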

When starting it, you might run into an error caused by a newer version of the wsgidav package used by the web UI:

There are two ways to fix this error (see Error to start webui service).

Change line 209 in the file pyspider/webui/webdav.py (near the end of the file):
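The change that is usually suggested for compatibility with wsgidav 3.x replaces the old domaincontroller entry with the newer http_authenticator form, roughly:

    # before
    'domaincontroller': NeedAuthController(app),

    # after
    'http_authenticator': {
        'HTTPAuthenticator': NeedAuthController(app),
    },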

Or you can pin wsgidav to an older version (I do not recommend this option, since version 3.x has already been released).

After that, you can visit http://localhost:5000/ to use the system. And, in the directory where you start pyspider, a data directory will be auto-generated that stores the databases of projects, tasks and results.

Get more help information with:
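For example:

    pyspider --help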

Starting Scraping

Creating a New Project

The first time, there are no projects on the page. You need to create a new one by clicking the “Create” button. Input the project name and the URL you want to scrape:

Click the “Create” button and enter the script editing page:

The right panel shows an auto-generated sample script:
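It looks roughly like this, with the URL you entered when creating the project in place of the placeholder:

    from pyspider.libs.base_handler import *


    class Handler(BaseHandler):
        crawl_config = {
        }

        @every(minutes=24 * 60)
        def on_start(self):
            self.crawl('__START_URL__', callback=self.index_page)

        @config(age=10 * 24 * 60 * 60)
        def index_page(self, response):
            for each in response.doc('a[href^="http"]').items():
                self.crawl(each.attr.href, callback=self.detail_page)

        @config(priority=2)
        def detail_page(self, response):
            return {
                "url": response.url,
                "title": response.doc('title').text(),
            }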

  • def on_start(self) is the entry point of the script. It will be called when you click the run button on the dashboard.
  • self.crawl(url, callback=self.index_page) is the most important API here. It adds a new task to be crawled. Most of the options are specified via self.crawl arguments.
  • def index_page(self, response) receives a Response object. response.doc is a pyquery object with a jQuery-like API to select the elements to be extracted.
  • def detail_page(self, response) returns a dict object as its result. The result is captured into resultdb by default. You can override the on_result(self, result) method to manage the result yourself.

Other configuration:

  • @every(minutes=24*60, seconds=0) is a helper to tell the scheduler that the on_start method should be called once a day.
  • @config(age=10 * 24 * 60 * 60) specifies the default age parameter of self.crawl for the index_page page type (when callback=self.index_page). The age parameter can also be specified via self.crawl(url, age=10*24*60*60) (highest priority) or crawl_config (lowest priority).
  • age=10 * 24 * 60 * 60 tells the scheduler to discard the request if it has already been crawled within the last 10 days. By default, pyspider will not crawl the same URL twice (it discards it forever), even if you have modified the code. It is very common for beginners to run the project once, modify it, run it again, and find that it will not crawl again (read about itag for a solution).
  • @config(priority=2) marks that the detail pages should be crawled first.


Running Script

If you have modified the script, then click the “save” button.

Click the green “run” button on the left panel. After that, you will find a red 1 above the “follows” button:

Click the “follows” button to switch to the follows panel. It lists an index page.

Click the green play button on the right of the URL (this will invoke the index_page method). It will list all the URLs in the panel:

We could choose any one of the detail pages and click the green play button on the right (this will invoke the detail_page method). It will show the final result. In this example, we get the title and URL in JSON format.


Project Management

Back on the dashboard, you will find the newly created project. Change the status of the project from “TODO” to “RUNNING”:

Click the “run” button. Then the project will start to run:

The output log in the background:

Click the “Results” button and check all the scraping results:


Click one of the results and the new page will show the result in detail.

Example

The UEFA European Cup Coefficients Database lists links for matches, country rankings and club rankings since the 1955/1956 season. The sample program below extracts the match data from the 2004/2005 season to the 2017/2018 season.
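The original sample program is not reproduced here, but a heavily simplified skeleton of that kind of handler might look like this (the start URL and all of the selectors are placeholders, not the real site’s markup):

    from pyspider.libs.base_handler import *


    class Handler(BaseHandler):
        crawl_config = {}

        def on_start(self):
            # Placeholder URL: the page linking to each season's match data
            self.crawl('https://example.com/uefa/coefficients', callback=self.index_page)

        @config(age=10 * 24 * 60 * 60)
        def index_page(self, response):
            # Follow the per-season links (placeholder selector)
            for each in response.doc('a.season-link').items():
                self.crawl(each.attr.href, callback=self.detail_page)

        @config(priority=2)
        def detail_page(self, response):
            # Collect the match rows into one result dict (placeholder selectors)
            matches = [row.text() for row in response.doc('table.matches tr').items()]
            return {'season_url': response.url, 'matches': matches}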

Output looks like:

Further

Some web content is built with technologies like AJAX. The page then looks different in the browser from its raw HTML, and the information you want to extract is not in the HTML of the page. In this case, you will need the browser developer tools (such as the Web Developer Tools in Firefox or Chrome) to find the underlying request and its parameters yourself.


Sometimes the web page is too complex to find the underlying API request. pyspider provides an option to use PhantomJS to render such pages. To use PhantomJS, you should have PhantomJS installed. If you are running pyspider in all mode, PhantomJS is enabled automatically when its executable is on the PATH.
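With PhantomJS available, a page can be fetched with JavaScript rendering by passing fetch_type='js' to crawl, for example:

    self.crawl(url, callback=self.index_page, fetch_type='js')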

More detailed information about pyspider can be found in the pyspider Official Documentation or on its GitHub.
