Namenode formatting fails with "Cannot remove current directory"



Things I did:
1. Installed Cloudera CDH 3.0 with VM Player.
2. Set up the Java path and SSH keys.
3. Out of the box, the web UI shows the namenode as running, but I could not see the process from the command shell. When I tried to format the namenode I got the error below.
4. I also could not run stop-all.sh and start-all.sh; they complain that .ssh is not accessible and about file permission issues (see the permission-check sketch right after this list).
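
For reference, this is roughly how I would check the SSH setup (a minimal sketch; the paths are the standard OpenSSH defaults and the VM user is assumed to be cloudera, so treat the specifics as assumptions):

# Check the key files and the permissions sshd insists on
ls -ld ~/.ssh                                   # should be drwx------ (700)
ls -l ~/.ssh/id_rsa ~/.ssh/authorized_keys      # should be -rw------- (600)
chmod 700 ~/.ssh
chmod 600 ~/.ssh/id_rsa ~/.ssh/authorized_keys

# start-all.sh / stop-all.sh rely on passwordless ssh to localhost
ssh localhost true && echo "passwordless ssh OK"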

I would appreciate your help.
======================

cloudera@cloudera-vm:/usr/lib/hadoop-0.20$ bin/hadoop namenode -format
14/12/01 11:14:03 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = cloudera-vm/127.0.1.1
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 0.20.2-cdh3u0
STARTUP_MSG: build = -r 81256ad0f2e4ab2bd34b04f53d25a6c23686dd14; compiled by 'root' on Sat Mar 26 00:14:04 UTC 2011
************************************************************/
Re-format filesystem in /var/lib/hadoop-0.20/cache/hadoop/dfs/name ? (Y or N) Y
14/12/01 11:14:06 INFO util.GSet: VM type = 32-bit
14/12/01 11:14:06 INFO util.GSet: 2% max memory = 19.33375 MB
14/12/01 11:14:06 INFO util.GSet: capacity = 2^22 = 4194304 entries
14/12/01 11:14:06 INFO util.GSet: recommended=4194304, actual=4194304
14/12/01 11:14:07 INFO namenode.FSNamesystem: fsOwner=cloudera
14/12/01 11:14:07 INFO namenode.FSNamesystem: supergroup=supergroup
14/12/01 11:14:07 INFO namenode.FSNamesystem: isPermissionEnabled=false
14/12/01 11:14:07 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=1000
14/12/01 11:14:07 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
14/12/01 11:14:07 ERROR namenode.NameNode: java.io.IOException: Cannot remove current directory: /var/lib/hadoop-0.20/cache/hadoop/dfs/name/current
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.clearDirectory(Storage.java:303)
at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:1244)
at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:1263)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1092)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1201)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1217)
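
I suspect the directory in the error is owned by a different user. Here is how I plan to check (a minimal sketch; the hdfs/hadoop user and the sudo variant are assumptions about the default CDH3 VM, not something I have verified):

# Who owns the namenode storage directory that format is trying to clear?
ls -ld /var/lib/hadoop-0.20/cache/hadoop/dfs/name
ls -l  /var/lib/hadoop-0.20/cache/hadoop/dfs/name/current

# If it belongs to another user (e.g. hdfs or hadoop), formatting as that
# user is one possible fix -- an assumption on my part:
sudo -u hdfs bin/hadoop namenode -format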

3 Answer(s)



Hi Ramaswamy,

Since you have installed the Cloudera VM, you don't need to perform any other setup steps. From the points you describe, it looks like the installation got skewed by the manual Java path setup and SSH key generation.

We use the Cloudera VM precisely so that students don't have to spend time on installation.

You can directly start executing MapReduce programs.
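
For example, a quick smoke test on the VM could look like this (a minimal sketch; the examples jar name and its location are what CDH3 usually ships, but treat them as assumptions):

cd /usr/lib/hadoop-0.20
# Run the bundled pi estimator: 2 maps, 10 samples per map
bin/hadoop jar hadoop-examples.jar pi 2 10

# Or word-count a small file through HDFS
bin/hadoop fs -mkdir input
bin/hadoop fs -put /etc/hosts input/
bin/hadoop jar hadoop-examples.jar wordcount input output
bin/hadoop fs -cat 'output/part-*'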

0

You can't format the namenode if you're using VM Player to run the Cloudera Hadoop VM. To run the formatting commands you need to set up a local Hadoop environment on an Ubuntu OS.
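
Roughly, such a local setup might look like the following (a sketch only, assuming an Apache Hadoop 0.20.x tarball; the file names, install directory and JDK path are illustrative assumptions):

# Unpack an Apache Hadoop 0.20.x release under your own user
tar xzf hadoop-0.20.2.tar.gz -C ~
cd ~/hadoop-0.20.2

# Point JAVA_HOME at a local JDK (path is an example)
export JAVA_HOME=/usr/lib/jvm/java-6-openjdk

# Everything under ~/hadoop-0.20.2 is owned by you, so formatting
# the locally configured namenode does not hit permission errors
bin/hadoop namenode -format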


Sure, thanks all. However, running the jps command doesn't list the Hadoop daemon processes. How do I know that the namenode and the other daemons are running from the CLI?
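
These are the checks I am planning to try (a sketch based on my understanding of the CDH3 VM; the init script names are assumptions from CDH3 packaging):

# jps only lists JVMs owned by the current user; the daemons usually run
# as a different user on the VM, so try it as root
sudo jps

# Or look for the daemon classes in the process list
ps aux | grep -E 'NameNode|DataNode|SecondaryNameNode|JobTracker|TaskTracker' | grep -v grep

# CDH packages also install init scripts (names assumed from CDH3 packaging)
sudo /etc/init.d/hadoop-0.20-namenode status

# The web UIs answer on the usual ports when the daemons are up
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:50070   # namenode
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:50030   # jobtracker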
