
Handling the Hadoop "too many open files" exception

Today a job on the Hadoop cluster failed. The error message was:
2013-10-26 08:00:03,229 ERROR server.TThreadPoolServer (TThreadPoolServer.java:run(182)) - Error occurred during processing of message.
    at org.apache.hadoop.hive.service.HiveServer$ThriftHiveProcessorFactory.getProcessor(HiveServer.java:553)
    at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:169)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:662)
    at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:277)
    at org.apache.hadoop.hive.service.HiveServer$HiveServerHandler.<init>(HiveServer.java:136)
    at org.apache.hadoop.hive.service.HiveServer$ThriftHiveProcessorFactory.getProcessor(HiveServer.java:550)
    ... 4 more
    at org.apache.hadoop.hive.ql.metadata.HiveUtils.getAuthorizeProviderManager(HiveUtils.java:199)
    at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:272)
    ... 6 more
Caused by: java.lang.RuntimeException: java.io.FileNotFoundException: /home/hadoop/hadoop-0.20.205.0/conf/mapred-site.xml (Too many open files)
    at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:1231)
    at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:1093)
    at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:1037)
    at org.apache.hadoop.conf.Configuration.set(Configuration.java:438)
    at org.apache.hadoop.hive.conf.HiveConf.setVar(HiveConf.java:762)
    at org.apache.hadoop.hive.conf.HiveConf.setVar(HiveConf.java:770)
    at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:169)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:662)
Caused by: java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: java.io.FileNotFoundException: /home/hadoop/hadoop-0.20.205.0/conf/core-site.xml (Too many open files)
    at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:277)
    at org.apache.hadoop.hive.service.HiveServer$HiveServerHandler.<init>(HiveServer.java:136)
    at org.apache.hadoop.hive.service.HiveServer$ThriftHiveProcessorFactory.getProcessor(HiveServer.java:550)
    ... 4 more
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: java.io.FileNotFoundException: /home/hadoop/hadoop-0.20.205.0/conf/core-site.xml (Too many open files)
    at org.apache.hadoop.hive.ql.metadata.HiveUtils.getAuthorizeProviderManager(HiveUtils.java:199)
    at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:272)
    ... 6 more
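The root cause here is not a missing file: the "(Too many open files)" suffix means the HiveServer process has exhausted its per-process file-descriptor limit, so it can no longer open its own configuration files. A quick way to confirm this (a sketch; `$$` inspects the current shell, so substitute the HiveServer PID to check the failing daemon instead):

```shell
#!/bin/sh
# Show the per-process open-file limits in effect for this shell
echo "soft limit: $(ulimit -Sn)"
echo "hard limit: $(ulimit -Hn)"

# Count descriptors currently open by this shell; replace $$ with the
# HiveServer PID to count what the failing daemon is holding open
echo "open fds:   $(ls /proc/$$/fd | wc -l)"
```

If the open-fd count is at or near the soft limit, the limit is too low for the workload (or descriptors are leaking).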
Fix on a Debian system: raise both the hard and soft open-file limits, then restart the affected daemons from that shell:
ulimit -HSn 32768
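Note that `ulimit` only affects the current shell and the processes it subsequently starts. To make the limit survive logins, it can be set in `/etc/security/limits.conf`; a minimal sketch, assuming the daemons run as a `hadoop` user (suggested by the `/home/hadoop` paths in the trace, but adjust to your deployment):

```
# /etc/security/limits.conf
# "hadoop" is the assumed service account; match your own setup
hadoop  soft  nofile  32768
hadoop  hard  nofile  32768
```

These entries are applied by pam_limits at login, so they cover interactive sessions; services started at boot outside a login session may still need the `ulimit` call in their init script.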