hadoop daemons not running
1 Answer(s)
Abhijit-Dezyre Support
Hi Patrice,
If you are using Cloudera, go to Cloudera Manager and start all the services again, or restart the whole cluster.
If you installed Apache Hadoop on your system manually, you can follow these steps to recover the cluster state:
1. Delete (rm) the hdfs folder
2. Create (mkdir) the hdfs folder
3. Assign ownership (chown) of the hdfs folder
4. Format the namenode (hadoop namenode -format)
5. Start the Hadoop services (start-all.sh)
Note: Before doing this, make sure the Hadoop services are stopped (stop-all.sh), and that the system IP matches the IP in the configuration file.
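The steps above can be sketched as a shell sequence. This is only an illustration against a throwaway directory: the real HDFS path comes from dfs.namenode.name.dir / dfs.datanode.data.dir in your hdfs-site.xml, and "hduser" is an assumed Hadoop user name, so substitute the values from your own setup. The hadoop/start-all/stop-all commands are left as comments because they only work on a configured cluster.

```shell
# Sketch of the recovery steps using a throwaway directory.
# On a real cluster, HDFS_DIR would be the dfs.namenode.name.dir /
# dfs.datanode.data.dir path from hdfs-site.xml (an assumption here).

# 0. Make sure the daemons are stopped first:
#      stop-all.sh
HDFS_DIR=$(mktemp -d)/hdfs

# 1. Delete any old hdfs folder
rm -rf "$HDFS_DIR"
# 2. Recreate it, with namenode and datanode subdirectories
mkdir -p "$HDFS_DIR/namenode" "$HDFS_DIR/datanode"
# 3. Assign ownership to the Hadoop user ("hduser" is an assumed name;
#    on a real cluster: sudo chown -R hduser:hadoop "$HDFS_DIR")
chown -R "$(id -un)" "$HDFS_DIR"
# 4. Format the namenode -- note this erases all HDFS metadata:
#      hadoop namenode -format
# 5. Start the Hadoop services again:
#      start-all.sh
echo "recreated: $HDFS_DIR"
```

Formatting the namenode wipes all filesystem metadata, so treat this as a last-resort reset for a broken test setup, not something to run on a cluster holding data you need.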
Hope this helps.
Thanks
Feb 09 2016 12:06 AM