Once you provide the following information to set up your Hadoop cluster in Azure:
The cluster setup process configures your cluster based on your settings, and finally your cluster is ready to accept Hadoop Map/Reduce jobs.
If you want to understand how the head node and worker nodes are set up internally, here is some information for you.
The head node is actually a running Windows Azure web role. You will find the head node details as below:
The worker node is a Windows Azure worker role with an endpoint that communicates directly with the HeadNode web role. Here are some important details:
Isotope WorkerNode – creates X instances, depending on your cluster setup
For example, a small cluster uses 4 nodes; in that case the worker nodes will look as below:
Each WorkerNode gets its own IP address and port, and the following two ports are used for the individual job tracker on each node and for HDFS management:
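Since each worker node exposes these ports on its own IP address, a quick TCP reachability check is a simple way to confirm that a node's service is up. Here is a minimal sketch in Python; the host name and port value are placeholders, not your cluster's actual addresses:

```python
import socket

def is_port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: probe the standard Hadoop NameNode HTTP port on the local machine.
# Replace "localhost" with a worker node's IP address to probe that node.
if is_port_open("localhost", 50070):
    print("NameNode web UI is reachable")
```

Run this from the head node (after remote login) to verify that each worker node's endpoints are accepting connections.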
If you log in to your cluster over Remote Desktop and check the name node summary at http://localhost:50070/dfshealth.jsp, you will see the exact same worker node IP addresses as described here:
16 files and directories, 2 blocks = 18 total. Heap Size is 271.88 MB / 3.56 GB (7%)
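If you want to monitor those counts from a script rather than the browser, the summary line can be parsed with a regular expression. A minimal sketch, assuming the summary text has the exact shape shown above (the function name and regular expression are my own, not part of Hadoop):

```python
import re

def parse_dfs_summary(line):
    """Extract file/block counts from a NameNode dfshealth summary line."""
    m = re.search(r"(\d+) files and directories, (\d+) blocks = (\d+) total", line)
    if not m:
        return None
    files, blocks, total = (int(g) for g in m.groups())
    return {"files_and_dirs": files, "blocks": blocks, "total": total}

summary = "16 files and directories, 2 blocks = 18 total. Heap Size is 271.88 MB / 3.56 GB (7%)"
print(parse_dfs_summary(summary))  # {'files_and_dirs': 16, 'blocks': 2, 'total': 18}
```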
If you look at C:\Resources\<GUID.IsotopeHeadNode_IN_0.xml, you will learn more about these details. This XML file is the same one you find on any web or worker role, and the configuration in the XML will help you a lot in this regard.
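Once you locate that file, its contents can be inspected programmatically with the standard XML tooling. The sketch below uses a made-up fragment just to illustrate the approach; the element and attribute names here are hypothetical, and the real schema in the role configuration on your node will differ:

```python
import xml.etree.ElementTree as ET

# Hypothetical fragment standing in for the role configuration XML;
# the actual file on the node has its own schema.
sample_xml = """<RoleConfig>
  <Instances>
    <Instance id="IsotopeWorkerNode_IN_0" address="10.26.110.1" port="50060" />
    <Instance id="IsotopeWorkerNode_IN_1" address="10.26.110.2" port="50060" />
  </Instances>
</RoleConfig>"""

root = ET.fromstring(sample_xml)
endpoints = [(inst.get("id"), inst.get("address"), inst.get("port"))
             for inst in root.iter("Instance")]
print(endpoints)
```

In practice you would replace `ET.fromstring(sample_xml)` with `ET.parse(path)` pointed at the file under C:\Resources and adjust the element names to match what you actually see there.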
Keywords: Windows Azure, Hadoop, Apache, BigData, Cloud, MapReduce