Unable to use ./hdfs dfsadmin -report with HDFS Federation

0 votes
504 views

Our Hadoop cluster is using HDFS Federation, but when we use the following command to report the HDFS status

$ ./hdfs dfsadmin -report
report: FileSystem viewfs://nsX/ is not an HDFS file system
Usage: hdfs dfsadmin [-report] [-live] [-dead] [-decommissioning]

It gives me the message that viewfs is not an HDFS file system. How can I proceed to report the HDFS status?

posted Aug 24, 2015 by Bob Wise

1 Answer

0 votes
    hdfs dfsadmin -report

The above command shows the status of the entire cluster.
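Under federation, though, dfsadmin has to be pointed at an individual NameNode rather than at the viewfs:// mount table, since -report only works against an hdfs:// file system. A minimal sketch using the generic -fs option (the NameNode hostnames and port are hypothetical):

    $ ./hdfs dfsadmin -fs hdfs://nn1.example.com:8020 -report
    $ ./hdfs dfsadmin -fs hdfs://nn2.example.com:8020 -report

Each invocation reports the cluster as seen from that NameNode's namespace, so with federation you run it once per NameNode.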

answer Aug 24, 2015 by Deepankar Dubey
Similar Questions
+1 vote

We tried setting up HDFS NameNode federation with 2 NameNodes, and I am facing a few issues.

Can anyone help me understand the points below?
1) How can we assign different namespaces to different NameNodes? Where exactly do we need to configure this?
2) After formatting each NameNode with one cluster ID, do we need to set this cluster ID in hdfs-site.xml?
3) I am getting an exception saying the data dir is already locked by one of the NameNodes, but when I don't specify the data dir, the exception does not appear.

So what could be the issue?
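A minimal hdfs-site.xml sketch of how two namespaces are usually mapped to two NameNodes under federation; the nameservice IDs (ns1, ns2) and the hosts are hypothetical:

    <property>
      <name>dfs.nameservices</name>
      <value>ns1,ns2</value>
    </property>
    <property>
      <name>dfs.namenode.rpc-address.ns1</name>
      <value>nn1.example.com:8020</value>
    </property>
    <property>
      <name>dfs.namenode.rpc-address.ns2</name>
      <value>nn2.example.com:8020</value>
    </property>

The cluster ID is normally supplied at format time (hdfs namenode -format -clusterId <id>) rather than set in hdfs-site.xml, and an "already locked" error usually means two daemons are sharing the same storage directory, so each NameNode needs its own dfs.namenode.name.dir.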

+2 votes

Does anyone know if the NFS HDFS gateway is currently supported on secure clusters using Kerberos for Hadoop 2.2.0? We are using HDP 2.0 and looking to use the NFS gateway.

+2 votes

I am writing temp files to HDFS with replication=1, so I expect the blocks to be stored on the writing node. Are there any tips, in general, for optimizing write performance to HDFS? I use 128K buffers in the write() calls. Are there any parameters that can be set on the connection or in HDFS configuration to optimize this use pattern?
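A hedged sketch of the client-side knobs that usually matter for this pattern; the property names are standard Hadoop settings, but the values and paths here are only illustrative:

    $ hdfs dfs -D dfs.replication=1 \
               -D io.file.buffer.size=131072 \
               -put /local/tmp/scratch.dat /tmp/scratch.dat

With replication=1 the single replica is normally placed on the writing node itself, provided a DataNode runs there, so there is no pipeline to other nodes; beyond that, a larger write buffer and avoiding frequent hflush()/hsync() calls are the main levers on the client side.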

0 votes

I was trying to implement a Hadoop/Spark audit tool, but I ran into a problem: I can't get the input file location and file name. I can get the username, IP address, time, and user command from hdfs-audit.log, but when I submit a MapReduce job, I can't see the input file location in either the Hadoop logs or the Hadoop ResourceManager.

Does Hadoop have an API or log that contains this info, perhaps through some configuration? If it does, what should I configure?
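One possible approach, assuming the MapReduce JobHistory server's REST API is reachable (the host, default port 19888, and job ID below are hypothetical): fetch the stored job configuration and read the input path property from it.

    $ curl http://jhs.example.com:19888/ws/v1/history/mapreduce/jobs/job_1440000000000_0001/conf

The response contains the full job configuration; for FileInputFormat-based jobs the input paths appear under mapreduce.input.fileinputformat.inputdir (mapred.input.dir in older releases), which can then be joined with the user, IP, and time you already extract from hdfs-audit.log.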

...