Easy Web Scraping With Node.js

Web scraping is a technique used to extract data from websites using a computer program that acts as a web browser. The program requests pages from web servers in the same way a web browser does, and it may even simulate a user logging in to obtain access. It downloads the pages containing the desired data and extracts the data out of the HTML code. Once the data is extracted it can be reformatted and presented in a more useful way.

In this article I'm going to show you how to write web scraping scripts in Javascript using Node.js.

Why use web scraping?

Here are a few examples where web scraping can be useful:

  • You have several bank accounts with different institutions and you want to generate a combined report that includes all your accounts.
  • You want to see data presented by a website in a different format. For example, the website shows a table, but you want to see a chart.
  • A web site presents related data across multiple pages. You want to see all or part of this data combined in a single report.
  • You are an app developer with apps in iTunes and several Android app stores, and you want to have a report of monthly sales across all app stores.

Web scraping can also be used in ways that are dishonest and sometimes even illegal. Harvesting of email addresses for spam purposes, or sniping Ebay auctions are examples of such uses. As a matter of principle I only use web scraping to collect and organize information that is either available to everyone (stock prices, movie showtimes, etc.) or only available to me (personal bank accounts, etc.). I avoid using this technique for profit, I just do it to simplify the task of obtaining information.

In this article I'm going to show you a practical example that implements this technique. Ready? Then let's get started!

Tools for web scraping

In its most basic form, a web scraping script just needs a way to download web pages and then search for data in them. All modern languages provide functions to download web pages, or at least someone has written a library or extension that can do it, so this is not a problem. Locating and isolating data in HTML pages, however, is difficult. An HTML page has content, layout and style elements all intermixed, so a non-trivial effort is required to parse and identify the interesting parts of the page.

For example, consider the following HTML page:

<html>
    <head>...</head>
    <body>
        <div id="content">
            <div id="sidebar">
            ...
            </div>
            <div id="main">
                <div class="breadcrumbs">
                ...
                </div>
                <table id="data">
                    <tr><th>Name</th><th>Address</th></tr>
                    <tr><td class="name">John</td><td class="address">Address of John</td></tr>
                    <tr><td class="name">Susan</td><td class="address">Address of Susan</td></tr>
                </table>
            </div>
        </div>
    </body>
</html>

Let's say we want to extract the names of the people that appear in the table with id="data" that is in the page. How do we get to those?

Typically the web page will be downloaded into a string, so it would be simple to just search this string for all the occurrences of <td class="name"> and extract what comes after that and until the following </td>.
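
To illustrate, here is a sketch of that naive string search, using a made-up HTML fragment:

```javascript
// Naive approach: scan the raw HTML string for each occurrence of the
// marker. This is fragile, and shown only for illustration.
var html = '<td class="name">John</td><td class="name">Susan</td>';
var marker = '<td class="name">';
var names = [];
var start = 0;
while ((start = html.indexOf(marker, start)) !== -1) {
    start += marker.length;                  // skip past the opening tag
    var end = html.indexOf('</td>', start);  // find the closing tag
    names.push(html.substring(start, end));  // extract the cell text
    start = end;
}
console.log(names);  // [ 'John', 'Susan' ]
```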

But this could easily make us find incorrect data. The page could have other tables, either before or after the one we want, that use the same CSS classes for some of their cells. Or worse, maybe this simple search algorithm works fine for a while, but one day the layout of the page changes so that the old <td class="name"> becomes <td align="left" class="name">, making our search find nothing.

While there is always a risk that a change to the target web page can break a scraping script, it is a good idea to be smart about how items are located in the HTML so that the script does not need to be revised every time the web site changes.

If you have ever written client-side Javascript for the browser using a library like jQuery then you know how the tricky task of locating DOM elements becomes much easier using CSS selectors.

For example, in the browser we could easily extract the names from the above web page as follows:

$('#data .name').each(function() {
    alert($(this).text());
});

The CSS selector is what goes inside jQuery's $ function, #data .name in this example. This selector says that we want to locate all the elements that have the CSS class name and are children of an element with the id data. Note that we are not saying anything about the data being in a table in this case. CSS selectors have great flexibility in how you specify search terms for elements, and you can be as specific or vague as you want.

The each function will just call the function given as an argument for all the elements that match the selector, with the this context set to the matching element. If we were to run this in the browser we would see an alert box with the name "John", and then another one with the name "Susan".

Wouldn't it be nice if we could do something similar outside of the context of a web browser? Well, this is exactly what we are about to do.

Introducing Node.js

Javascript was born as a language to be embedded in web browsers, but thanks to the open source Node.js project we can now write stand-alone scripts in Javascript that can run on a desktop computer or even on a web server.

Manipulating the DOM inside a web browser is something that Javascript and libraries like jQuery do really well, so to me it makes a lot of sense to write web scraping scripts in Node.js, since we can reuse many of the DOM manipulation techniques we know from client-side code for the web browser.

If you would like to try the examples I will present in the rest of this article then this is the time to download and install Node.js. Installers for Windows, Linux and OS X are available at http://nodejs.org.

Node.js has a large library of packages that simplify different tasks. For web scraping we will use two packages called request and cheerio. The request package is used to download web pages, while cheerio generates a DOM tree and provides a subset of the jQuery function set to manipulate it. To install Node.js packages we use a package manager called npm that is installed with Node.js. This is equivalent to Ruby's gem or Python's easy_install and pip: it simplifies the download and installation of packages.

So let's start by creating a new directory where we will put our web scraping scripts and install these two modules in it:

$ mkdir scraping
$ cd scraping
$ npm install request cheerio

Node.js modules will be installed in the scraping/node_modules subdirectory and will only be accessible to scripts that are in the scraping directory. It is also possible to install Node.js packages globally, but I prefer to keep things organized by installing modules locally.

Now that we have all the tools installed let's see how we can implement the above scraping example using cheerio. Let's call this script example.js:

var cheerio = require('cheerio');
$ = cheerio.load('<html><head></head><body><div id="content"><div id="sidebar"></div><div id="main"><div id="breadcrumbs"></div><table id="data"><tr><th>Name</th><th>Address</th></tr><tr><td class="name">John</td><td class="address">Address of John</td></tr><tr><td class="name">Susan</td><td class="address">Address of Susan</td></tr></table></div></div></body></html>');

$('#data .name').each(function() {
    console.log($(this).text());
});

The first line imports the cheerio package into the script. The require function is similar to #include in C/C++, require in Ruby or import in Python.

In the second line we instantiate a DOM for our example HTML, by sending the HTML string to cheerio.load(). The return value is the constructed DOM, which we store in a variable called $ to match how the DOM is accessed in the browser when using jQuery.

Once we have a DOM created we just go about business as if we were using jQuery on the client side. So we use the proper selector and the each iterator to find all the occurrences of the data we want to extract. In the callback function we use the console.log function to write the extracted data. In Node.js console.log writes to the console, so it is handy to dump data to the screen.

Here is how to run the script and what output it produces:

$ node example.js
John
Susan

Easy, right? In the following section we'll write a more complex scraping script.

Real world scraping

Let's use web scraping to solve a real problem.

The Tualatin Hills Park and Recreation District (THPRD) is a Beaverton, Oregon organization that offers area residents a number of recreational options, among them swimming. There are eight swimming pools in the area, each offering swimming instruction, lap swimming, open swim and a variety of other programs. The problem is that THPRD does not publish a combined schedule for the pools, only individual schedules for each pool. But the pools are all located close to each other, so often the choice of pool is less important than what programs are offered at a given time. If I wanted to find the time slots at which a given program is offered at any of the pools I would need to access eight different web pages and search eight schedules.

For this example we will say that we want to obtain the list of times during the current week when there is an open swim program offered in any of the pools in the district. This requires obtaining the schedule pages for all the pools, locating the open swim entries and listing them.

Before we start, open one of the pool schedules in another browser tab. Feel free to inspect the HTML for the page to familiarize yourself with the structure of the schedule.

The schedule pages for the eight pools have a URL with the following structure:

http://www.thprd.org/schedules/schedule.cfm?cs_id=<id>

The id is what selects which pool to show a schedule for. I took the time to open all the schedules manually and note the name and corresponding id of each pool, since we will need those in the script. We will also use an array with the names of the days of the week. We could scrape these names from the web pages, but since this is information that will never change we can simplify the script by incorporating the data as constants.

Web scraping skeleton

With the above information we can sketch out the structure of our scraping script. Let's call the script thprd.js:

var request = require('request');
var cheerio = require('cheerio');

pools = {
    'Aloha': 3,
    'Beaverton': 15,
    'Conestoga': 12,
    'Harman': 11,
    'Raleigh': 6,
    'Somerset': 22,
    'Sunset': 5,
    'Tualatin Hills': 2
};
days = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday'];

for (pool in pools) {
    var url = 'http://www.thprd.org/schedules/schedule.cfm?cs_id=' + pools[pool];
    request(url, function(err, resp, body) {
        if (err)
            throw err;
        $ = cheerio.load(body);
        console.log(pool);
        // TODO: scraping goes here!
    });
}

We begin the script importing the two packages that we are going to use and defining the constants for the eight pools and the days of the week.

Then we download the schedule web pages of each of the pools in a loop. For this we construct the URL of each pool schedule and send it to the request function. This is an asynchronous function that takes a callback as its second argument. If you are not very familiar with Javascript this may seem odd, but in this language asynchronous functions are very common. The request() function returns immediately, so it is likely that the eight request() calls will be issued almost simultaneously, with their responses processed concurrently as they arrive on the event loop.

When a request completes, its callback function will be invoked with three arguments: an error code, a response object and the body of the response. Inside the callback we make sure there is no error and then we send the body of the response into cheerio to create a DOM from it. When we reach this point we are ready to start scraping.
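
The order of execution here can be surprising at first. The following small sketch, using setTimeout and a made-up fakeRequest function as a stand-in for the real request package, shows the callback running only after the main script has moved on:

```javascript
var order = [];

// Stand-in for request(): invokes the callback on a later event loop
// tick, just like real network I/O would.
function fakeRequest(url, callback) {
    setTimeout(function() {
        callback(null, { statusCode: 200 }, '<html></html>');
    }, 0);
}

fakeRequest('http://example.com/', function(err, resp, body) {
    order.push('callback');
});
order.push('after fakeRequest');

// At this point order is ['after fakeRequest']; 'callback' is appended
// later, once the event loop gets a chance to run the timer.
```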

We will look at how to scrape this content later, for now we just print the name of the pool as a placeholder. If you run this first version of our script you'll get a surprise:

$ node thprd.js
Tualatin Hills
Tualatin Hills
Tualatin Hills
Tualatin Hills
Tualatin Hills
Tualatin Hills
Tualatin Hills
Tualatin Hills

What? Why do we get the same pool name eight times? Shouldn't we see all the pool names here?

Javascript scoping

Remember I said above that the request() function is asynchronous? The for loop will do its eight iterations, spawning a background job in each. The loop then ends, leaving the loop variable set to the pool name that was used in the last iteration. When the callback functions are invoked a few seconds later they will all see this value and print it.

I made this mistake on purpose to demonstrate one of the main sources of confusion among developers that are used to traditional languages and are new to Javascript's asynchronous model.

How can we get the correct pool name to be sent to each callback function then?

The solution is to bind the name of the pool to the callback function at the time the callback is created and sent to the request() function, because that is when the pool variable has the correct value.

As we've seen before, the callback function will execute some time in the future, after the loop in the main script has completed. But the callback function can still access the loop variable even though the callback runs outside of the context of the main script. This is because the scope of Javascript functions is defined at the time the function is created. When we created the callback function the loop variable was in scope, so the variable is accessible to the callback. The url variable is also in the scope, so the callback can make use of it if necessary, though the same problem exists with it: its last value will be seen by all callbacks.

So what I'm basically saying is that the scope of a function is determined at the time the function is created, but the values of the variables in the scope are only retrieved at the time the function is called.

We can take advantage of these seemingly odd scoping rules of Javascript to insert any variable into the scope of a callback function. Let's do this with a simple function:

function main()
{
    var a = 1;
    var f = function() { console.log(a); }
    a = 2;
    f();
}
main();

Can you guess what the output of this script will be? The output will be 2, because that's the value of variable a at the time the function stored in variable f is invoked.

To freeze the value of a at the time f is created we need to insert the current value of a into the scope:

function main()
{
    var a = 1;
    var f = ( function(a) { return function() { console.log(a); } } )(a);
    a = 2;
    f();
}
main();

Let's analyze this alternative way to create f one step at a time:

var f = (...)(a);

We clearly see that the expression enclosed in parentheses presumably returns a function, and we invoke that function and pass the current value of a as an argument. This is not a callback function that will execute later; this is executing right away, so the current value of a that is passed into the function is 1.

var f = ( function(a) { return ... } )(a);

Here we see a bit more of what's inside the parentheses. The expression is, in fact, a function that expects one argument. We called that argument a, but we could have used a different name.

In Javascript a construct like the above is called a self-executing function. You could consider this the reverse of a callback function. While a callback function is a function that is created now but runs later, a self-executing function is created and immediately executed. Whatever this function returns will be the result of the whole expression, and will get assigned to f in our example.

Why would you want to use a self-executing function when you can make any code execute directly without enclosing it inside a function? The difference is subtle. By putting code inside a function we are creating a new scope level, and that gives us the chance to insert variables into that scope simply by passing them as arguments to the self-executing function.

We know f should be a function, since later in the script we want to invoke it. So the return value of our self-executing function must be the function that will get assigned to f:

var f = ( function(a) { return function() { console.log(a); } } )(a);

Does it make a bit more sense now? The function that is assigned to f now has a parent function that received a as an argument. That a is a level closer than the original a in the scope of f, so that is the a that the scope of f sees. When you run the modified script you will get a 1 as output.

Here is how the self-executing trick can be applied to our web scraping script:

for (pool in pools) {
    var url = 'http://www.thprd.org/schedules/schedule.cfm?cs_id=' + pools[pool];
    request(url, ( function(pool) {
        return function(err, resp, body) {
            if (err)
                throw err;
            $ = cheerio.load(body);
            console.log(pool);
            // TODO: scraping goes here!
        }
    } )(pool));
}

This is pretty much identical to the simpler example above using the pool variable instead of a. Running this script again gives us the expected result:

$ node thprd.js
Aloha
Raleigh
Conestoga
Beaverton
Harman
Somerset
Tualatin Hills
Sunset
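
As an aside, newer versions of Javascript (ES6 and later, available in modern Node.js releases) offer a simpler fix: declaring the loop variable with let gives each iteration its own binding, which makes the self-executing wrapper unnecessary. A minimal sketch of the difference, using closures stored in arrays:

```javascript
// With var, all closures share a single binding and see the final value.
var withVar = [];
for (var i = 0; i < 3; i++) {
    withVar.push(function() { return i; });
}
console.log(withVar.map(function(f) { return f(); }));  // [ 3, 3, 3 ]

// With let, each loop iteration gets a fresh binding of j.
var withLet = [];
for (let j = 0; j < 3; j++) {
    withLet.push(function() { return j; });
}
console.log(withLet.map(function(f) { return f(); }));  // [ 0, 1, 2 ]
```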

Scraping the swimming pool schedules

To be able to scrape the contents of the schedule tables we need to discover how these schedules are structured. In rough terms the schedule table is located inside a page that looks like this:

<html>
<head> ... </head>
<body>
    <div id="container">
        <div id="mainContent_lv13">
            <div id="level3body">
                <div>
                    <table id="calendar">
                        <tr class="header"> ... </tr>
                        <tr class="days">
                            <td><!-- schedule for Monday --></td>
                            <td><!-- schedule for Tuesday --></td>
                            <td><!-- schedule for Wednesday --></td>
                            <td><!-- schedule for Thursday --></td>
                            <td><!-- schedule for Friday --></td>
                            <td><!-- schedule for Saturday --></td>
                            <td><!-- schedule for Sunday --></td>
                        </tr>
                        <tr class="footer"> ... </tr>
                    </table>
                </div>
            </div>
        </div>
    </div>
</body>
</html>

Inside each of these <td> elements that hold the daily schedules there is a <div> wrapper around each scheduled event. Here is a simplified structure for a day:

<td>
    <a href="...">link</a>
    <div>
        <a href="...">
            <strong>FROM-TO</strong>
            <br>
            EVENT NAME
            <br>
        </a>
    </div>
    <div>
        <a href="...">
            <strong>FROM-TO</strong>
            <br>
            EVENT NAME
            <br>
        </a>
    </div>
    ...
</td>

Each <td> element contains a link at the top that we are not interested in, then a sequence of <div> elements, each containing the information for an event.

One way we can get to these event <div> elements is with the following selector:

$('#calendar .days td div').each(...);

The problem with the above selector, though, is that we will get all the events of all the days in sequence, so we will not know what events happen on which day.

Instead, we can separate the search in two parts. First we locate the <td> element that defines a day, then we search for <div> elements within it:

$('#calendar .days td').each(function(day) {
    $(this).find('div').each(function() {
        console.log(pool + ',' + days[day] + ',' + $(this).text());
    });
});

The function that we pass to the each() iterator receives the index number of the found element as a first argument. This is handy because for our outer search this is telling us which day we are in. We do not need an index number in the inner search, so there we do not need to use an argument in our function.

Running the script now shows the pool name, then the day of the week and then the text inside the event <div>, which has the information that we want. The text() function applied to any element of the DOM returns its text content with any HTML elements filtered out, so this gets rid of the <strong> and <br> elements that exist there and just returns the plain text.

We are now very close. The only remaining problem is that the text we extracted from the <div> element has a lot of whitespace in it. There is whitespace at the start and end of the text and also in between the event time and event description. We can eliminate the leading and trailing whitespace with trim():

console.log(pool + ',' + days[day] + ',' + $(this).text().trim());

This leaves us with a few lines of whitespace in between the event time and the description. To remove that we can use replace():

console.log(pool + ',' + days[day] + ',' + $(this).text().trim().replace(/\s\s+/g, ','));

Note that the regular expression we use to collapse the spaces requires at least two consecutive whitespace characters. This is because the event description can contain spaces as well; by searching for two or more whitespace characters we only match the large whitespace block in the middle and do not affect the description.
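
To see the effect of this cleanup chain on a made-up event string:

```javascript
// A fabricated sample of the raw text extracted from an event <div>:
var raw = '   7:00a-8:00a\n\n\n        Open Swim   ';

// trim() removes the leading/trailing whitespace; the regex collapses
// any run of two or more whitespace characters into a single comma.
var clean = raw.trim().replace(/\s\s+/g, ',');
console.log(clean);  // 7:00a-8:00a,Open Swim
```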

When we run the script now this is what we get:

$ node thprd.js
Raleigh,Monday,6:00a-10:00p,Pool Closed
Raleigh,Tuesday,6:00a-10:00p,Pool Closed
Raleigh,Wednesday,6:00a-10:00p,Pool Closed
Raleigh,Thursday,6:00a-10:00p,Pool Closed
Raleigh,Friday,6:00a-10:00p,Pool Closed
Raleigh,Saturday,6:00a-10:00p,Pool Closed
Raleigh,Sunday,6:00a-10:00p,Pool Closed
Beaverton,Monday,7:00a-8:00a,Deep Water Aerobics (7-8)
Beaverton,Monday,7:00a-8:50a,Aquajog lane
Beaverton,Monday,7:00a-8:50a,All Age Lap
Beaverton,Monday,8:55a-10:30a,Aquajog lane^
...

And this is just a CSV version of all the pool schedules combined!

We said that for this exercise we were only interested in obtaining the open swim events, so we need to add one more filtering layer to just print the targeted events:

event = $(this).text().trim().replace(/\s\s+/g, ',').split(',');
if (event.length >= 2 && event[1].match(/open swim/i))
    console.log(pool + ',' + days[day] + ',' + event[0] + ',' + event[1]);

And now we have completed our task. Here is the final version of our web scraping script:

var request = require('request');
var cheerio = require('cheerio');

days = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday'];
pools = {
    'Aloha': 3,
    'Beaverton': 15,
    'Conestoga': 12,
    'Harman': 11,
    'Raleigh': 6,
    'Somerset': 22,
    'Sunset': 5,
    'Tualatin Hills': 2
};
for (pool in pools) {
    var url = 'http://www.thprd.org/schedules/schedule.cfm?cs_id=' + pools[pool];

    request(url, (function(pool) { return function(err, resp, body) {
        if (err)
            throw err;
        $ = cheerio.load(body);
        $('#calendar .days td').each(function(day) {
            $(this).find('div').each(function() {
                event = $(this).text().trim().replace(/\s\s+/g, ',').split(',');
                if (event.length >= 2 && (event[1].match(/open swim/i) || event[1].match(/family swim/i)))
                    console.log(pool + ',' + days[day] + ',' + event[0] + ',' + event[1]);
            });
        });
    }})(pool));
}

Running the script gives us this output:

$ node thprd.js
Conestoga,Monday,4:15p-5:15p,Open Swim - M/L
Conestoga,Monday,7:45p-9:00p,Open Swim - M/L
Conestoga,Tuesday,7:30p-9:00p,Open Swim - M/L
Conestoga,Wednesday,4:15p-5:15p,Open Swim - M/L
Conestoga,Wednesday,7:45p-9:00p,Open Swim - M/L
Conestoga,Thursday,7:30p-9:00p,Open Swim - M/L
Conestoga,Friday,6:30p-8:30p,Open Swim - M/L
Conestoga,Saturday,1:00p-4:15p,Open Swim - M/L
Conestoga,Sunday,2:00p-4:15p,Open Swim - M/L
Aloha,Monday,1:05p-2:20p,Open Swim
Aloha,Monday,7:50p-8:25p,Open Swim
Aloha,Tuesday,1:05p-2:20p,Open Swim
Aloha,Tuesday,8:45p-9:30p,Open Swim
Aloha,Wednesday,1:05p-2:20p,Open Swim
Aloha,Wednesday,7:50p-8:25p,Open Swim
Aloha,Thursday,1:05p-2:20p,Open Swim
Aloha,Thursday,8:45p-9:30p,Open Swim
Aloha,Friday,1:05p-2:20p,Open Swim
Aloha,Friday,7:50p-8:25p,Open Swim
Aloha,Saturday,2:00p-3:30p,Open Swim
Aloha,Saturday,4:30p-6:00p,Open Swim
Aloha,Sunday,2:00p-3:30p,Open Swim
Aloha,Sunday,4:30p-6:00p,Open Swim
Harman,Monday,4:25p-5:30p,Open Swim*
Harman,Monday,7:30p-8:55p,Open Swim
Harman,Tuesday,4:25p-5:10p,Open Swim*
Harman,Wednesday,4:25p-5:30p,Open Swim*
Harman,Wednesday,7:30p-8:55p,Open Swim
Harman,Thursday,4:25p-5:10p,Open Swim*
Harman,Friday,2:00p-4:55p,Open Swim*
Harman,Saturday,1:30p-2:25p,Open Swim
Harman,Sunday,2:00p-2:55p,Open Swim
Beaverton,Tuesday,10:45a-12:55p,Open Swim (No Diving Well)
Beaverton,Tuesday,8:35p-9:30p,Open Swim No Diving Well
Beaverton,Thursday,10:45a-12:55p,Open Swim (No Diving Well)
Beaverton,Thursday,8:35p-9:30p,Open Swim No Diving Well
Beaverton,Saturday,2:30p-4:00p,Open Swim
Beaverton,Sunday,4:15p-6:00p,Open Swim
Sunset,Tuesday,1:00p-2:30p,Open Swim/One Lap Lane
Sunset,Thursday,1:00p-2:30p,Open Swim/One Lap Lane
Sunset,Sunday,1:30p-3:00p,Open Swim/One Lap Lane
Tualatin Hills,Monday,7:35p-9:00p,Open Swim-Diving area opens at 8pm
Tualatin Hills,Wednesday,7:35p-9:00p,Open Swim-Diving area opens at 8pm
Tualatin Hills,Sunday,1:30p-3:30p,Open Swim
Tualatin Hills,Sunday,4:00p-6:00p,Open Swim

From this point on it is easy to continue to massage this data to get it into a format that is useful. My next step would be to sort the list by day and time instead of by pool, but I'll leave that as an exercise to interested readers.

Final words

I hope this introduction to web scraping was useful to you and the example script serves you as a starting point for your own projects.

If you have any questions feel free to leave them below in the comments section.

Thanks!

Miguel


68 comments
  • #26 Miguel Grinberg said

    @sotiris: the demo accordion in the jQuery UI page does not remove anything, it just hides the sections that are closed. But even if they are hidden they are in the DOM, you can get them. If you are using a different accordion widget that only has the content for the active section then you may need to use something like Selenium that can fake mouse clicks and execute Javascript.

  • #27 Ian said

    This is great for Node beginners like myself. How does scraping with Node compare to scraping with Python?

  • #28 Miguel Grinberg said

@Ian: The techniques are similar. The asynchronous style of JS can be confusing if you are used to traditional languages. On the other hand, it's nice that you can scrape using jQuery-like selectors instead of having to learn a different API like Python's BeautifulSoup.

  • #29 Maqbool Fida said

    Great tutorial! Are there any client-side/browser technologies/tutorials one can use for screen-scraping?

  • #30 Miguel Grinberg said

    @Maqbool: I don't know of any. The technologies are similar, you could use jQuery to scrape, but to me it seems you have more control doing scraping outside of the browser.

  • #31 Uni Parmad said

    @Miguel Grinberg,

    I would like to scrap http://arsip.siap-ppdb.com/2012/bekasi/seleksi/sma/#!/s/31001004/1, and want to retrieve all of the name (column Nama).

    Could you advise me how to do it?

  • #32 Miguel Grinberg said

    @Uni: It looks like that website loads the data in the table through ajax, so regular scraping techniques will not find anything, since they do not interpret the Javascript code that loads the data. One approach is to use Selenium or PhantomJS, which act like a browser and issue all the ajax calls. Another option is to try to figure out what ajax calls are made to load the data and then just issue those calls directly.

  • #33 Nik said

    Very nice tutorial. Thank you Miguel.

  • #34 Ying Zhang said

    Very good stuff. What if instead of console.log the parsing, I want to collect all of the data and assign them to a global variable?

  • #35 Ying Zhang said

    I ended up using https://github.com/kriszyp/promised-io to solve the issue

  • #36 Bryan said

    learning this in 2014, useful as ever.
    thanks for sharing :D

  • #37 Antonio said

    Great tutorial. I need auto scrapping 24 h webpages about books for searching the best price but its difficult know in same pages take a default URL + id because there are very topics romances, comedy, history. I should study the all websites and extract all URLs???

  • #38 Sven said

    Hey,
    thx for that awesome Tutorial!

    Can you maybe tell me how i can store the every line of the outpout into a new line of an array or buffer or something?

    Thx!

  • #39 André said

    Thanks a lot for this tutorial Miguel! Very helpful and very well structured!

    I'll try to use it wisely! ;D

  • #40 az0000 said

    thnx

  • #41 Ruben Barreiro said

    hi.

    your article is awesome. it helped me greatly... by the way as you say " it gets a bit more complicated to handle logins" I had that issue. example: I needed to get <title> from a given url, it works just fine from too much sites but from others like facebook I needed to add a 'User-Agent' header.

  • #42 yuvi said

    very Nice article, I was unaware if these terms your articled helped me to understand. I was doing using regular java program to find out exchange rates on daily basis, but my problem was to write separate code for each site response. I was doing scrapping actually. I ll try this js.

    but I wanted to know which is more better?

  • #43 Miguel Grinberg said

    @yuvi: there is no better or worse, it's just a matter of preference.

  • #44 ramesh said

    Running this above code having this error..

    TypeError: Cannot read property 'parent' of undefined
    at exports.update (C:\scraping\node_modules\cheerio\lib\parse.js:68:27)
    at module.exports (C:\scraping\node_modules\cheerio\lib\parse.js:28:3)
    at Function.exports.load (C:\scraping\node_modules\cheerio\lib\static.js:20:14)
    at Request._callback (C:\scraping\second.js:19:21)
    at self.callback (C:\scraping\node_modules\request\request.js:121:22)
    at Request.EventEmitter.emit (events.js:95:17)
    at ClientRequest.self.clientErrorHandler (C:\scraping\node_modules\request\request.js:230:10)
    at ClientRequest.EventEmitter.emit (events.js:95:17)
    at Socket.socketErrorListener (http.js:1547:9)
    at Socket.EventEmitter.emit (events.js:95:17)

  • #45 Miguel Grinberg said

    @ramesh: you'll need to run this inside a debugger to figure out exactly where in the script this is happening. It appears the problem is while cheerio is parsing the HTML body, if you confirm that then you should probably print the HTML that you are getting to confirm it is okay.

  • #46 Dan said

    Hi Miguel,

    You mentioned that you might do a follow-up article on how to handle sites that need login information for scraping. I was wondering if you would consider doing that article and posting it? Thanks!

  • #47 Aldo said

    Hi... I was wondering if you could give me a bit of advice.
    I am pulling data for a school project from a gaming charts site.

    I am getting duplicate console log inputs because the selector I am using has the exact same info within a duplicate selector. The site structure is like this (I simplified it):

                        <table id="chart_body">
                            <tr><!-- 1 Info I need --></td>
                            <tr><!-- 2 Info I need --></td>
                                <table>
                                    <tbody>
                                        <tr> Duplicate info as 1  </tr>
                                    </tbody>
                                </table>
                            <tr><!-- 3 Info I need --></td>
                            <tr><!-- 4 Info I need --></td>
                            <tr><!-- 5 Info I need --></td>
                            <tr><!-- 6 Info I need --></td>
                        </table>
    

    I set up the script as follows:
    ...
    var $ = cheerio.load(body);
    $('tr', '#chart_body').each(function(){
    var rank = $(this).text().trim().replace(/\s\s+/g, ';');
    chart.push(rank);
    });

        console.log(chart);
    

    ...

    My console log returns:
    '1;Wolfenstein;330,703;330,703;1',
    'Wolfenstein: The New Order (PS4)Bethesda Softworks, Shooter',
    '2;Wolfenstein;188,200;188,200;1',
    'Wolfenstein: The New Order (XOne)Bethesda Softworks, Shooter',
    '3;Minecraft;126,041;215,109;2',
    'Minecraft (PS3)Sony Computer Entertainment, Adventure',

    All which is fine except the duplicate information.
    And also I can't seem to get rid of the single quote (') between returns. I just want a split after the returns.

    Thanks.

  • #48 Miguel Grinberg said

@Aldo: the example HTML that you showed is invalid: that inner table is not parented to anything, it appears just between table rows. I assume that is your own mistake, correct?

    In any case, you can just eliminate the duplicates after you are done, or else look in the actual code if there is a way to use a more specific selector that can get the top level rows but not those from the inner table.

  • #49 JD said

    for faster much more concise web scraping and data extraction check out
    https://github.com/rc0x03/node-promise-parser

  • #50 Mike said

    Can you explain the part of using CSS selectors? More specifically, why do you jump directly to the calendar tag and don't take into account the
    container, maincontent, level3body tags?

    Thanks :).
