
DFS Read Exception: No live nodes contain current block


I met this problem while testing the file-open latency of HDFS. In the stress test, I continuously opened and closed files for about 1000 seconds.

See the resolution below, which I took from http://wiki.apache.org/hadoop/Hbase/Troubleshooting#6

Problem: "No live nodes contain current block"
You see an exception with the above message in the logs (usually Hadoop 0.18.x).
Causes

  • Slow datanodes are marked as down by DFSClient; eventually all replicas are marked as 'bad' (HADOOP-3831).

Resolution

  • Try setting dfs.datanode.socket.write.timeout to zero (in hadoop 0.18.x -- see HADOOP-3831 for details and for why this is not needed in hadoop 0.19.x). See the mailing-list message from jean-adrien for some background. Note that this is an HDFS client configuration, so it needs to be available in $HBASE_HOME/conf; making the change only in $HADOOP_HOME/conf is not sufficient. Copy your amended hadoop-site.xml to the hbase conf directory, or add this configuration to $HBASE_HOME/conf/hbase-site.xml (see the example snippets after this list).
  • Try increasing dfs.datanode.handler.count from its default of 3. This is a server configuration change so must be made in $HADOOP_HOME/conf/hadoop-site.xml. Try increasing it to 10, then by additional increments of 10. It probably does not make sense to use a value larger than the total number of nodes in the cluster.
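As a minimal sketch (assuming the standard Hadoop site-file property syntax), the two changes above would look roughly like this; the handler-count value of 10 is only the starting point suggested above, not a recommended final setting.

In $HBASE_HOME/conf/hbase-site.xml (or the hadoop-site.xml copied into the hbase conf directory):

  <property>
    <name>dfs.datanode.socket.write.timeout</name>
    <!-- 0 disables the client-side write timeout; 0.18.x workaround for HADOOP-3831 -->
    <value>0</value>
  </property>

In $HADOOP_HOME/conf/hadoop-site.xml (server-side change):

  <property>
    <name>dfs.datanode.handler.count</name>
    <!-- default is 3; start at 10 and tune upward in increments of 10 -->
    <value>10</value>
  </property>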