Configuring the slaves file on the Namenode while adding a new slave node




I am just following the lectures since I don't get a chance to meet the schedule of the class. However, I have a simple question about adding a new slave node while Hadoop data processes are running.

If we add the new node's IP address to the slaves file on the Namenode, will that impact the running Hadoop processes? I understand we have not yet started the datanode and tasktracker services on the new node, but does Hadoop do anything with that IP address once it is listed, even though the new node is not configured yet?

What is the best way to add 5 slave nodes at the same time? Can we update the masters file on each node and then update the slaves file on the Namenode with all the IPs at once, or should we complete one node at a time? Or does it not matter?


1 Answer(s)



Hi Tapan,

Adding one node to the cluster is pretty straightforward. You can check the link below:
http://fibrevillage.com/storage/628-how-to-add-a-new-datanode-to-a-running-hadoop-cluster

You can first create the 5 machines using cloning, which saves the time spent on configuration.

You need to check the IP, or if you are using a centralised DNS server you can use the hostname instead; otherwise, configure the IP of the system and add it to the Namenode's slaves configuration, as sketched below.
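A minimal sketch of the Namenode-side changes, assuming Hadoop 2.x default paths and a hypothetical new host called slave6 (adjust the path, IP, and hostname for your own setup):

# On the Namenode: make the new host resolvable (skip if DNS already covers it)
echo "192.168.1.106   slave6" | sudo tee -a /etc/hosts

# Add the new host to the slaves file so the cluster start/stop scripts know about it
echo "slave6" >> $HADOOP_HOME/etc/hadoop/slaves

In a stock setup the slaves file is read only by those start/stop scripts, so listing the host ahead of time should not disturb daemons that are already running.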

After configuring, you need to run either

start-dfs.sh on the Namenode

or

hadoop-daemon.sh start datanode

on the new node.
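To bring the new node in without restarting the rest of the cluster, a typical sequence looks like the sketch below (commands assume Hadoop 2.x; in Hadoop 3.x the equivalent is hdfs --daemon start datanode):

# On the new slave node: start the datanode (and the nodemanager/tasktracker if you run YARN/MRv1)
hadoop-daemon.sh start datanode

# Back on the Namenode: confirm the new datanode has registered
hdfs dfsadmin -report

# Optionally spread existing blocks onto the new node
hdfs balancer -threshold 10

For 5 new nodes you can repeat the datanode start on each machine; each datanode registers with the Namenode independently, so the order does not matter.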

Hope this helps.

 
