Carl Nolan’s ramblings on development
In a previous post I talked about Hadoop Binary Streaming for the processing of Microsoft Office Word documents. However, given their popularity, I thought adding support for Adobe PDF documents would be beneficial. To this end I have updated the source code to support processing of both “.docx” and “.pdf” documents.
To support reading PDFs I have used the open source library provided by iText (http://itextpdf.com/). iText is a library that allows you to read, create, and manipulate PDF documents (http://itextpdf.com/download.php). The original code was written in Java, but a port for .NET is also available (http://sourceforge.net/projects/itextsharp/files/).
Of these libraries I use only the PdfReader class, from the Core library. This class allows one to derive the page count, and the author from the document’s Info property.
To use the library in Hadoop one just has to specify a file property for the iTextSharp core library:
-file "C:\Reference Assemblies\itextsharp.dll"
This assumes the downloaded and extracted DLL has been copied to and referenced from the “Reference Assemblies” folder.
To support PDF documents, only two changes to the code were necessary.
Firstly, a new Mapper was defined that supports the processing of a PdfReader type and returns the author and pages for the document:
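The post’s mapper is written in .NET, but as a rough illustration of the same idea, a comparable Java mapper body using the iText 5 PdfReader might look like the following. The class name and return shape are hypothetical, not the post’s actual types:

```java
import com.itextpdf.text.pdf.PdfReader;
import java.io.IOException;

// Hypothetical sketch: for one PDF document, emit (author, page count).
// PdfAuthorPagesMapper is an illustrative name, not a type from the post.
public class PdfAuthorPagesMapper {
    public String[] map(byte[] pdfBytes) throws IOException {
        PdfReader reader = new PdfReader(pdfBytes);
        try {
            // The PDF Info dictionary carries document metadata such as Author
            String author = reader.getInfo().get("Author");
            int pages = reader.getNumberOfPages();
            return new String[] {
                author == null ? "(unknown)" : author,
                Integer.toString(pages)
            };
        } finally {
            reader.close();
        }
    }
}
```

The `getInfo()` and `getNumberOfPages()` calls mirror the two pieces of information the post says the mapper returns.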
Secondly, one has to call the correct mapper based on the document type; namely, the file extension:
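As a sketch of that dispatch step (again illustrative Java, with hypothetical mapper names rather than the post’s actual types):

```java
// Hypothetical sketch: pick a mapper by file extension, the same
// decision the post describes for routing ".docx" vs ".pdf" inputs.
public class MapperDispatch {
    public static String mapperFor(String filename) {
        String name = filename.toLowerCase();
        if (name.endsWith(".docx")) return "wordMapper";
        if (name.endsWith(".pdf"))  return "pdfMapper";
        throw new IllegalArgumentException("Unsupported document type: " + filename);
    }
}
```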
And that is it.
In Microsoft Word, if one needs to process the actual text/words of a document, this is relatively straightforward:
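The post’s extraction code is not reproduced here, but as a self-contained illustration of why Word extraction is straightforward: a “.docx” file is just a ZIP archive whose body text lives in `<w:t>` elements inside `word/document.xml`, so even the JDK alone can pull the text out. The class name and the regex-based approach are illustrative simplifications, not the post’s code:

```java
import java.io.*;
import java.util.regex.*;
import java.util.zip.*;

// Hypothetical sketch: extract the visible text of a .docx by unzipping it
// and collecting the contents of the <w:t> (text run) elements.
public class DocxText {
    public static String extract(InputStream docx) throws IOException {
        try (ZipInputStream zip = new ZipInputStream(docx)) {
            ZipEntry entry;
            while ((entry = zip.getNextEntry()) != null) {
                if ("word/document.xml".equals(entry.getName())) {
                    ByteArrayOutputStream buf = new ByteArrayOutputStream();
                    byte[] chunk = new byte[4096];
                    int n;
                    while ((n = zip.read(chunk)) > 0) buf.write(chunk, 0, n);
                    String xml = buf.toString("UTF-8");
                    // Collect every text run; a real implementation would use
                    // an XML parser rather than a regular expression.
                    StringBuilder text = new StringBuilder();
                    Matcher m = Pattern.compile("<w:t[^>]*>(.*?)</w:t>").matcher(xml);
                    while (m.find()) text.append(m.group(1));
                    return text.toString();
                }
            }
        }
        return "";
    }
}
```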
Using iText, the text/word extraction code is a little more complex but relatively easy. An example can be found here:
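For reference, a minimal page-by-page extraction using the iText 5 Java API (`com.itextpdf.text.pdf.parser.PdfTextExtractor`) might look like this sketch; the helper class is hypothetical:

```java
import com.itextpdf.text.pdf.PdfReader;
import com.itextpdf.text.pdf.parser.PdfTextExtractor;
import java.io.IOException;

// Hypothetical sketch: concatenate the extracted text of every page.
public class PdfText {
    public static String extract(String path) throws IOException {
        PdfReader reader = new PdfReader(path);
        try {
            StringBuilder text = new StringBuilder();
            // iText page numbers are 1-based
            for (int page = 1; page <= reader.getNumberOfPages(); page++) {
                text.append(PdfTextExtractor.getTextFromPage(reader, page))
                    .append('\n');
            }
            return text.toString();
        } finally {
            reader.close();
        }
    }
}
```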
Sir, your idea is great, as I want the same with some modification.
The theme is: I have thousands of files (PDF, TXT, DOCX) in a folder. I want to extract the top 10 most occurring words from each file using Hadoop, or any software which gives quick results.
I totally don't know C# and .NET; I tried to understand the code, but I can't.
I know a little bit of Java. Can you tell me how to modify it into a Java program?
I will be thankful if you convert it completely into MapReduce form, as many people are using Java for Hadoop programming.
You can mail me also - firstname.lastname@example.org
Hi Sagar, you can use Hadoop for document processing in this fashion, provided you have sufficient volume. If you know Java, you can use the Java binary reader that comes with this code as the reader for submitting an MR job written purely in Java.
Hi Sir -
I want to load PDF and Word format data into Hadoop. What's the best way to load the data into HDFS?