Consistently Infrequent

November 14, 2011

GScholarXScraper: Hacking the GScholarScraper function with XPath

Filed under: R — Tags: , , , , , , , — Tony Breyal @ 12:36 am

Kay Cichini recently wrote an R function called GScholarScraper on his blog which, when given a search string, will scrape the associated search results returned by Google Scholar across pages and then produce a word-cloud visualisation.

This was of interest to me because, around the same time, I had posted an independent Google Scholar scraper function, get_google_scholar_df(), which does a similar job to the scraping part of Kay’s function but uses XPath (whereas he had used regular expressions). My function works as follows: given a Google Scholar URL, it extracts as much information as it can from each search result on that webpage into separate columns of a data frame.
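To give a flavour of the XPath approach, here is a minimal sketch (not the full get_google_scholar_df() function); the 'gs_rt' and 'gs_rs' class names are my assumptions about Google Scholar's markup at the time of writing and may well change:

library(RCurl)
library(XML)

scrape_scholar_page <- function(u) {
  html <- getURL(u, followlocation = TRUE)   # page source as a single string
  doc <- htmlParse(html, asText = TRUE)      # parse it into a node tree we can query
  # XPath picks out the title and description nodes; xmlValue drops the markup
  titles <- xpathSApply(doc, "//h3[contains(@class, 'gs_rt')]", xmlValue)
  descriptions <- xpathSApply(doc, "//div[contains(@class, 'gs_rs')]", xmlValue)
  free(doc)
  # NB: the real function walks each 'gs_r' result block in turn so that rows
  # stay aligned even when a result has no description
  data.frame(title = titles, description = descriptions, stringsAsFactors = FALSE)
}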

In the comments of his blog post I mentioned that it’d be fun to hack his function to provide an XPath alternative, GScholarXScraper. Essentially it’s still the same function he wrote, and therefore full credit should go to Kay on this one as he fully deserves it – I certainly had no previous idea how to make a word cloud, plus I hadn’t used the tm package in ages (to the point where I’d forgotten most of it!). The main changes I made were as follows:

  • Restructure the internal code of GScholarScraper into a series of local functions which each do a separate job (this made it easier for me to hack because I understood what was doing what and why).
  • As far as possible, strip out regular expressions and replace them with XPath alternatives (made possible via the XML package) – hence the change of name to GScholarXScraper. Basically, apart from a little messing about with the generation of the URLs, I just copied over my get_google_scholar_df() function and removed the regular-expression alternatives. I’m not saying one is better than the other, but personally I find XPath shorter and quicker to code; either is a good approach for web scraping like this (note to self: I really need to learn more about regular expressions because they’re awesome!).
  • Vectorise a few of the loops I saw (it surprises me how second nature this has become – I used to find the *apply family of functions rather confusing, but thankfully not so much any more!).
  • Make use of getURL() from the RCurl package (I was getting some multibyte-string problems originally when using readLines(), but this approach automatically fixed them for me).
  • Add an option to make the word cloud from either the “title” or the “description” field of the Google Scholar search results.
  • Add stemming via the Rstem package because I couldn’t get the Snowball package to install with my version of Java. This was important to me because I was getting word clouds with variations of the same word in them, e.g. “game”, “games”, “gaming”.
  • Force the use of URLencode() when generating the URLs to automatically avoid problems with search terms like “Baldur’s Gate”, which would otherwise fail (see the sketch just after this list for these last two points).
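A minimal sketch of those last two points (the URL format here is illustrative rather than copied from the real function):

library(Rstem)

search.str <- "Baldur's Gate"

# URLencode() escapes the apostrophe and the space so the request doesn't fail
url <- paste("http://scholar.google.com/scholar?q=",
             URLencode(search.str, reserved = TRUE), sep = "")

# Rstem maps variations of a word onto a common stem before the word cloud is built,
# so "game", "games" and "gaming" all count towards the same word
wordStem(c("game", "games", "gaming"), language = "english")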

I think that’s pretty much everything I added. Anyway, here’s how it works (link to full code at end of post):

# EXAMPLE 1: produces a word cloud based on the 'title' field of Google Scholar search results and an input search string
GScholarXScraper(search.str = "Baldur's Gate", field = "title", write.table = FALSE, stem = TRUE)

#              word freq
# game         game   71
# comput     comput   22
# video       video   13
# learn       learn   11
# [TRUNC...]
#
#
# Number of titles submitted = 210
#
# Number of results as retrieved from first webpage = 267
#
# Be aware that sometimes titles in Google Scholar outputs are truncated - that is why, i.e., some mandatory intitle-search strings may not be contained in all titles

I think that’s kind of cool (sorry about the image resolution, as I can’t seem to add .svg images on here) and corresponds to what I would expect for a search about the legendary Baldur’s Gate computer role-playing game :)  The following is produced if we look at the ‘description’ field instead of the ‘title’ field:

# EXAMPLE 2: produces a word cloud based on the 'description' field of Google Scholar search results and an input search string
GScholarXScraper(search.str = "Baldur's Gate", field = "description", write.table = FALSE, stem = TRUE)

#                word freq
# page           page  147
# gate           gate  132
# game           game  130
# baldur       baldur  129
# roleplay   roleplay   21
# [TRUNC...]
#
# Number of titles submitted = 210
#
# Number of results as retrieved from first webpage = 267
#
# Be aware that sometimes titles in Google Scholar outputs are truncated - that is why, i.e., some mandatory intitle-search strings may not be contained in all titles

Not bad, and better than the ‘title’ field. I could see myself using the text-mining and word-cloud functionality with other projects I’ve been playing with, such as Facebook, Google+, Yahoo search pages, Google search pages, Bing search pages… could be fun!

One of the drawbacks of the ‘title’ and ‘description’ fields is that they are truncated. It would be nice to crawl to the webpage of each result URL, scrape the text from there, and add it as an ‘abstract’ field for more useful results. If I get time I might add that.
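Something along these lines might do it – a rough sketch only, assuming the results data frame has a hypothetical 'url' column holding each result’s link:

library(RCurl)
library(XML)

get_page_text <- function(u) {
  tryCatch({
    doc <- htmlParse(getURL(u, followlocation = TRUE), asText = TRUE)
    txt <- xpathSApply(doc, "//body//p", xmlValue)   # keep just the paragraph text
    free(doc)
    paste(txt, collapse = " ")
  }, error = function(e) NA_character_)              # a dead link shouldn't kill the whole run
}

# df$abstract <- sapply(df$url, get_page_text)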

Many thanks again to Kay for making his code publicly available so that I could play with it and improve my programming skill set. I had fun doing this and improved my other *XScraper functions in the process!

Code:

Full code for GScholarXScraper can be found here: https://github.com/tonybreyal/Blog-Reference-Functions/blob/master/R/GScholarXScraper/GScholarXScraper

Original GScholarScraper code is here: https://docs.google.com/document/d/1w_7niLqTUT0hmLxMfPEB7pGiA6MXoZBy6qPsKsEe_O0/edit?hl=en_US

Full code for just the XPath scraping function is here: https://github.com/tonybreyal/Blog-Reference-Functions/blob/master/R/googleScholarXScraper/googleScholarXScraper.R



2 Comments »

  1. Hello,

    I am trying to use this script, but am encountering some errors. I am new to R—please excuse my lack of knowledge, and any idiocies that go along with that.

    I have saved the code, verbatim from the source, into a file called GScholarXScraper.R. I then called to it with

    source("GScholarXScraper.R")

    but am receiving this message:

    Error in source("GScholarXScraper.R") :
    GScholarXScraper.R:18:1: unexpected input
    17: GScholarXScraper <- function(search.str, field = "title", write.table = FALSE, stem = TRUE) {
    18: ¬
    ^

    Is there an issue with how am I trying to load or utilize this function? If not, what else could be the issue?

    After I have sourced the file correctly, is the proper way to

    Thank you for your help.

    Comment by Scott Orr — May 16, 2012 @ 1:04 am

  2. Good info. Lucky me I recently found your blog by accident
    (stumbleupon). I’ve saved as a favorite for later!

    Comment by forums spain buying car spain Fuengirola — November 1, 2012 @ 1:07 pm

