Carl Nolan’s ramblings on development
Over the past month I have been working on a framework to allow composition and submission of MapReduce jobs using .Net. I have written two previous blog posts on this, so rather than write a third covering only the latest change I thought I would create a final composite post. To understand why, let's run through a quick version history of the code:
The latest change takes advantage of the fact that the objects are serialized in binary format. This has allowed the base abstract classes to move away from object-based APIs to ones based on generics, which hopefully greatly simplifies the creation of .Net MapReduce jobs.
As always, to submit MapReduce jobs one can use the following command-line syntax:
MSDN.Hadoop.Submission.Console.exe -input "mobile/data/debug/sampledata.txt" -output "mobile/querytimes/debug" -mapper "MSDN.Hadoop.MapReduceFSharp.MobilePhoneQueryMapper, MSDN.Hadoop.MapReduceFSharp" -reducer "MSDN.Hadoop.MapReduceFSharp.MobilePhoneQueryReducer, MSDN.Hadoop.MapReduceFSharp" -file "%HOMEPATH%\MSDN.Hadoop.MapReduce\Release\MSDN.Hadoop.MapReduceFSharp.dll"
The mapper and reducer parameters are .Net types that derive from the base Map and Reduce abstract classes shown below. The input, output, and file options are analogous to those of standard Hadoop Streaming submissions. The mapper and reducer options (more on a combiner option later) allow one to define a .Net type derived from the appropriate abstract base class. Under the covers, standard Hadoop Streaming is being used, with controlling executables handling the StdIn and StdOut operations and activating the required .Net types. The “file” parameter is required to specify the DLL for the .Net type to be loaded at runtime, in addition to any other required files.
As always the source can be downloaded from:
The following definitions outline the abstract base classes from which one needs to derive. Let's start with the C# definitions:
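Since the original listing is not reproduced here, the following is a minimal sketch of what the C# base classes might look like, inferred from the description below. Apart from the Setup/Map/Reduce/Cleanup members and the generic parameters V2 and V3 mentioned in the text, all names and signatures are assumptions, not the framework's actual definitions.

```csharp
// Sketch only: inferred from the prose, not the actual framework source.
using System;
using System.Collections.Generic;

namespace MSDN.Hadoop.MapReduceBase
{
    public abstract class MapperBaseText<V2>
    {
        public virtual void Setup() { }

        // The input is the raw text line (V1 is implicitly string for Streaming jobs);
        // output is a sequence of key/value tuples.
        public abstract IEnumerable<Tuple<string, V2>> Map(string value);

        // Cleanup may also emit tuples, enabling in-Mapper optimizations.
        public virtual IEnumerable<Tuple<string, V2>> Cleanup() { yield break; }
    }

    public abstract class ReducerBase<V2, V3>
    {
        public virtual void Setup() { }

        // For each key, reduce the sequence of values into key/value output(s).
        public abstract IEnumerable<Tuple<string, V3>> Reduce(string key, IEnumerable<V2> values);

        public virtual IEnumerable<Tuple<string, V3>> Cleanup() { yield break; }
    }
}
```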
The equivalent F# definitions are:
The objective in defining these base classes was not only to support creating .Net Mappers and Reducers, but also to provide a means for Setup and Cleanup operations (to support in-place Mapper/Combiner/Reducer optimizations), to utilize IEnumerable and sequences for publishing data from all classes, and finally to provide a simple submission mechanism analogous to submitting Java-based jobs.
The generic types V2 and V3 equate to the names used in the Java definitions. The current input type into the Mapper is a string (this would normally be V1). This is needed because, in Streaming jobs, the mapper performs the projection from the textual input.
For each class a Setup function is provided, allowing one to perform tasks related to the instantiation of the class. The Mapper's Map and Cleanup functions return an IEnumerable of key/value tuples; it is these tuples that represent the mapper's output. The returned types are written to file using binary serialization.
The Combiner and Reducer take in an IEnumerable of values for each key and reduce this into a key/value enumerable. Once again, the Cleanup function can return values, allowing for in-Reducer optimizations.
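As a small illustration of this per-key reduction, here is a minimal reducer sketched over an assumed base-class shape (the stand-in ReducerBase below mirrors the description; the framework's real names and signatures may differ):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Stand-in for the framework's reducer base class; sketch only.
public abstract class ReducerBase<V2, V3>
{
    public virtual void Setup() { }
    public abstract IEnumerable<Tuple<string, V3>> Reduce(string key, IEnumerable<V2> values);
    public virtual IEnumerable<Tuple<string, V3>> Cleanup() { yield break; }
}

public class SumReducer : ReducerBase<int, int>
{
    // Receives the full sequence of values for one key and reduces it to a single sum.
    public override IEnumerable<Tuple<string, int>> Reduce(string key, IEnumerable<int> values)
    {
        yield return Tuple.Create(key, values.Sum());
    }
}
```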
As one can see from the abstract class definitions, the framework also provides support for submitting jobs with Binary- and XML-based Mappers. To use Mappers derived from these types, a “format” submission parameter is required; the supported values are Text, Binary, and Xml, with “Text” being the default.
To submit a binary streaming job one just has to use a Mapper derived from the MapperBaseBinary abstract class and use the binary format specification:

-format Binary
In this case the input into the Mapper will be a Stream object that represents a complete binary document instance.
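As a rough illustration, a binary Mapper might look like the following; MapperBaseBinary's exact shape is an assumption beyond the Stream input described above, and the mapper shown (keying document sizes) is a made-up example:

```csharp
using System;
using System.Collections.Generic;
using System.IO;

// Stand-in for the framework's MapperBaseBinary; sketch only.
public abstract class MapperBaseBinary<V2>
{
    public virtual void Setup() { }
    // Each call receives a complete binary document instance as a Stream.
    public abstract IEnumerable<Tuple<string, V2>> Map(Stream document);
    public virtual IEnumerable<Tuple<string, V2>> Cleanup() { yield break; }
}

public class DocumentSizeMapper : MapperBaseBinary<long>
{
    // Emits the document size in bytes under a fixed key.
    public override IEnumerable<Tuple<string, long>> Map(Stream document)
    {
        yield return Tuple.Create("bytes", document.Length);
    }
}
```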
To submit an XML streaming job one just has to use a Mapper derived from the MapperBaseXml abstract class and use the XML format specification, along with a node to be processed within the XML documents:
-format Xml -nodename Node
In this case the input into the Mapper will be an XElement node, extracted from the XML document based on the nodename parameter.
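A sketch of such a Mapper follows; MapperBaseXml's exact shape is an assumption beyond the XElement input described above, and the element/attribute names used are illustrative only:

```csharp
using System;
using System.Collections.Generic;
using System.Xml.Linq;

// Stand-in for the framework's MapperBaseXml; sketch only.
public abstract class MapperBaseXml<V2>
{
    public virtual void Setup() { }
    // Each call receives one XElement matching the -nodename parameter.
    public abstract IEnumerable<Tuple<string, V2>> Map(XElement element);
    public virtual IEnumerable<Tuple<string, V2>> Cleanup() { yield break; }
}

public class NodeValueMapper : MapperBaseXml<string>
{
    // Emits the element's text content keyed by an assumed "id" attribute.
    public override IEnumerable<Tuple<string, string>> Map(XElement element)
    {
        yield return Tuple.Create((string)element.Attribute("id") ?? "unknown", element.Value);
    }
}
```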
Using multiple keys from the Mapper is a two-step process. First, the Mapper needs to be modified to output a string-based key in the correct format; this is done by passing the set of string key values into the Utilities.FormatKeys() function, which concatenates the keys using the necessary tab character. Second, the job has to be submitted specifying the expected number of keys:
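The FormatKeys helper presumably just joins the key parts with the tab character that Streaming uses as its key-field separator; the following is a sketch of that behaviour, not the framework's actual implementation:

```csharp
using System;

public static class Utilities
{
    // Sketch: concatenate the string key parts with the tab character,
    // matching Hadoop Streaming's key-field separator.
    public static string FormatKeys(params string[] keys)
    {
        return string.Join("\t", keys);
    }
}
```

For example, `Utilities.FormatKeys("Store1", "WA")` would produce `"Store1\tWA"`, which Streaming then treats as two key fields.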
MSDN.Hadoop.Submission.Console.exe -input "stores/demographics" -output "stores/banking" -mapper "MSDN.Hadoop.MapReduceFSharp.StoreXmlElementMapper, MSDN.Hadoop.MapReduceFSharp" -reducer "MSDN.Hadoop.MapReduceFSharp.StoreXmlElementReducer, MSDN.Hadoop.MapReduceFSharp" -file "%HOMEPATH%\Projects\MSDN.Hadoop.MapReduce\Release\MSDN.Hadoop.MapReduceFSharp.dll" -nodename Store -format Xml -numberKeys 2
The -numberKeys parameter equates to the corresponding Hadoop job configuration parameter for the number of map output key fields.
To demonstrate the submission framework, here are some sample Mappers and Reducers with the corresponding command line submissions:
C# Mobile Phone Range (with In-Mapper optimization)
Calculates the mobile phone query time range for a device with an In-Mapper optimization yielding just the Min and Max values:
MSDN.Hadoop.Submission.Console.exe -input "mobilecsharp/data" -output "mobilecsharp/querytimes" -mapper "MSDN.Hadoop.MapReduceCSharp.MobilePhoneRangeMapper, MSDN.Hadoop.MapReduceCSharp" -reducer "MSDN.Hadoop.MapReduceCSharp.MobilePhoneRangeReducer, MSDN.Hadoop.MapReduceCSharp" -file "%HOMEPATH%\MSDN.Hadoop.MapReduceCSharp\Release\MSDN.Hadoop.MapReduceCSharp.dll"
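The in-Mapper optimization mentioned above can be sketched as follows: accumulate the running Min and Max per device across Map calls, and only emit them from Cleanup. The base class shown is a stand-in for the framework's own, and the input record format is an assumption:

```csharp
using System;
using System.Collections.Generic;
using System.Globalization;
using System.Linq;

// Stand-in for the framework's text mapper base class; sketch only.
public abstract class MapperBaseText<V2>
{
    public virtual void Setup() { }
    public abstract IEnumerable<Tuple<string, V2>> Map(string value);
    public virtual IEnumerable<Tuple<string, V2>> Cleanup() { yield break; }
}

public class MobilePhoneRangeMapperSketch : MapperBaseText<double>
{
    // Per-device running (min, max) accumulated across Map calls.
    private readonly Dictionary<string, Tuple<double, double>> ranges =
        new Dictionary<string, Tuple<double, double>>();

    public override IEnumerable<Tuple<string, double>> Map(string value)
    {
        // Assumed input shape: "deviceId<TAB>queryTime"
        var parts = value.Split('\t');
        var device = parts[0];
        var time = double.Parse(parts[1], CultureInfo.InvariantCulture);

        Tuple<double, double> range;
        if (ranges.TryGetValue(device, out range))
            ranges[device] = Tuple.Create(Math.Min(range.Item1, time), Math.Max(range.Item2, time));
        else
            ranges[device] = Tuple.Create(time, time);

        // Nothing is emitted per record; output happens in Cleanup.
        return Enumerable.Empty<Tuple<string, double>>();
    }

    public override IEnumerable<Tuple<string, double>> Cleanup()
    {
        foreach (var pair in ranges)
        {
            yield return Tuple.Create(pair.Key, pair.Value.Item1); // min
            yield return Tuple.Create(pair.Key, pair.Value.Item2); // max
        }
    }
}
```

This pattern reduces the data shuffled to the reducers from one record per query to two records per device seen by the map task.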
C# Mobile Min (with Mapper, Combiner, Reducer)
Calculates the mobile phone minimum time for a device with a combiner yielding just the Min value:
MSDN.Hadoop.Submission.Console.exe -input "mobilecsharp/data" -output "mobilecsharp/querytimes" -mapper "MSDN.Hadoop.MapReduceCSharp.MobilePhoneMinMapper, MSDN.Hadoop.MapReduceCSharp" -reducer "MSDN.Hadoop.MapReduceCSharp.MobilePhoneMinReducer, MSDN.Hadoop.MapReduceCSharp" -combiner "MSDN.Hadoop.MapReduceCSharp.MobilePhoneMinCombiner, MSDN.Hadoop.MapReduceCSharp" -file "%HOMEPATH%\MSDN.Hadoop.MapReduceCSharp\Release\MSDN.Hadoop.MapReduceCSharp.dll"
F# Mobile Phone Query
Calculates the mobile phone range and average time for a device:
MSDN.Hadoop.Submission.Console.exe -input "mobile/data" -output "mobile/querytimes" -mapper "MSDN.Hadoop.MapReduceFSharp.MobilePhoneQueryMapper, MSDN.Hadoop.MapReduceFSharp" -reducer "MSDN.Hadoop.MapReduceFSharp.MobilePhoneQueryReducer, MSDN.Hadoop.MapReduceFSharp" -file "%HOMEPATH%\MSDN.Hadoop.MapReduceFSharp\Release\MSDN.Hadoop.MapReduceFSharp.dll"
F# Store XML (XML in Samples)
Calculates the total revenue, within the store XML, based on demographic data; also demonstrating multiple keys:
MSDN.Hadoop.Submission.Console.exe -input "stores/demographics" -output "stores/banking" -mapper "MSDN.Hadoop.MapReduceFSharp.StoreXmlElementMapper, MSDN.Hadoop.MapReduceFSharp" -reducer "MSDN.Hadoop.MapReduceFSharp.StoreXmlElementReducer, MSDN.Hadoop.MapReduceFSharp" -file "%HOMEPATH%\MSDN.Hadoop.MapReduceFSharp\bin\Release\MSDN.Hadoop.MapReduceFSharp.dll" -nodename Store -format Xml
F# Binary Document (Word and PDF Documents)
Calculates the pages per author for a combination of Office Word and PDF documents:
MSDN.Hadoop.Submission.Console.exe -input "office/documents" -output "office/authors" -mapper "MSDN.Hadoop.MapReduceFSharp.OfficePageMapper, MSDN.Hadoop.MapReduceFSharp" -reducer "MSDN.Hadoop.MapReduceFSharp.OfficePageReducer, MSDN.Hadoop.MapReduceFSharp" -combiner "MSDN.Hadoop.MapReduceFSharp.OfficePageReducer, MSDN.Hadoop.MapReduceFSharp" -file "%HOMEPATH%\MSDN.Hadoop.MapReduceFSharp\bin\Release\MSDN.Hadoop.MapReduceFSharp.dll" -file "C:\Reference Assemblies\itextsharp.dll" -format Binary
To support some additional Hadoop Streaming options, a few optional parameters are provided.
The -numberReducers option, as expected, specifies the maximum number of reduce tasks to use.
The -debug option turns on verbose mode and specifies a job configuration that keeps failed task outputs.
To view the supported options one can use the help parameter, which displays:
Command Arguments:
-input (Required=true) : Input Directory or Files
-output (Required=true) : Output Directory
-mapper (Required=true) : Mapper Class
-reducer (Required=true) : Reducer Class
-combiner (Required=false) : Combiner Class (Optional)
-format (Required=false) : Input Format |Text(Default)|Binary|Xml|
-numberReducers (Required=false) : Number of Reduce Tasks (Optional)
-numberKeys (Required=false) : Number of MapReduce Keys (Optional)
-file (Required=true) : Processing Files (Must include Map and Reduce Class files)
-nodename (Required=false) : XML Processing Nodename (Optional)
-debug (Required=false) : Turns on Debugging Options
The provided submission framework works from the command line. However, there is nothing to stop one submitting a job from a UI, albeit with a command console being opened. To this end I have put together a simple UI that supports submitting Hadoop jobs.
This simple UI supports all the necessary options for submitting jobs.
As mentioned, the actual executables and source code can be downloaded from:
The source includes not only the .Net submission framework, but also all the necessary Java classes for supporting the Binary and XML job submissions. This relies on a custom Streaming JAR, which should be copied to the Hadoop lib directory. There are two versions of the Streaming JAR: one for running in Azure and one for running locally; the difference is that they have been compiled with different versions of the Java compiler. Just remember to use the appropriate version (dropping the -local and -azure prefixes) when copying it to your Hadoop lib folder.
To use the code one just needs to reference the EXEs in the Release directory. This folder also contains the MSDN.Hadoop.MapReduceBase.dll, which contains the abstract base class definitions.
In a separate post I will cover what is actually happening under the covers.
As always if you find the code useful and/or use this for your MapReduce jobs, or just have some comments, please do let me know.
Why use this, or even write Hadoop jobs in Java, when there is Pig? More than 80% of Hadoop jobs in large corps are run as Pig scripts.
I do like Pig, but not everything that can be done with MapReduce can be achieved with Pig.
I wanted to use the DLL for passing a job to Hadoop. But how will I pass the credentials (login and password) for Hadoop? How will the job run under my account?
For example, before I used to log in to HadoopOnAzure and create a job using a C# mapper and reducer. Now I can pass the mapper and reducer classes, but how will I pass the credentials for my HadoopOnAzure account?
Currently I have not done anything around hooking up a RunAs for the command options, but this would be simple enough.
At the moment the job will run under the account that the submission runs under.
Along with multiple mappers, can we expect support for multiple combiners in the first release?
At the moment one can only specify the number of reducers.
Thanks Carl for the info.