Handling HBase snapshot timeout exceptions, and an HBase client multi-threading problem

Recently I have been writing the HBase plugin for wormhole, which needs separate implementations of an hbase reader and an hbase writer. Testing it produced the errors shown further below, and only after reading the source of HTable and HBaseAdmin did I get some clues. The connection handling lives in HConnectionManager.getConnection:

  public static HConnection getConnection(Configuration conf)
  throws ZooKeeperConnectionException {
    HConnectionKey connectionKey = new HConnectionKey(conf);
    synchronized (HBASE_INSTANCES) {
      // look up the cached connection for this conf's HConnectionKey
      HConnectionImplementation connection = HBASE_INSTANCES.get(connectionKey);
      if (connection == null) {
        // first caller with this conf: create and cache a managed connection
        connection = new HConnectionImplementation(conf, true);
        HBASE_INSTANCES.put(connectionKey, connection);
      }
      // every caller bumps the client reference count on the shared connection
      connection.incCount();
      return connection;
    }
  }



Internally, HConnectionManager keeps a static LRU map, HBASE_INSTANCES, as a cache. The key is an HConnectionKey, which wraps the username and a fixed set of properties extracted from the conf passed in; the value is the concrete HConnection implementation, HConnectionImplementation. Because every caller passes an identical conf, they all resolve to the same HConnectionImplementation, and each call ends by invoking connection.incCount() to raise the client reference count by one.

Each HTable instance holds one such HConnection object, which is responsible for the connection to ZooKeeper and from there to the HBase cluster (for example locating regions, caching region locations, and re-resolving a location after a region moves); HConnections are managed by HConnectionManager.
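To make the sharing concrete, here is a minimal sketch of my own (not from the original post) against the 0.94-era API quoted above; the table name 't1' is borrowed from the error log below:

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HConnectionManager;
import org.apache.hadoop.hbase.client.HTable;

public class ConnectionSharingDemo {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    // two tables built from the same conf...
    HTable t1 = new HTable(conf, "t1");
    HTable t2 = new HTable(conf, "t1");
    // ...map to the same HConnectionKey, so the cached connection is reused
    // (note each getConnection call also increments the reference count)
    System.out.println(HConnectionManager.getConnection(conf)
        == HConnectionManager.getConnection(conf)); // prints true
    t1.close(); // with cleanupConnectionOnClose set, this touches the SHARED connection
    t2.close();
  }
}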





While testing, the following errors were reported:

2013-07-08 09:30:02,568 [pool-2-thread-1] org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:1631) WARN  client.HConnectionManager$HConnectionImplementation - Failed all from region=t1,,1373246892580.877bb26da1e4aed541915870fa924224., hostname=test89.hadoop, port=60020
java.util.concurrent.ExecutionException: java.io.IOException: Call to test89.hadoop/10.1.77.89:60020 failed on local exception: java.io.InterruptedIOException: Interruped while waiting for IO on channel java.nio.channels.SocketChannel[connected local=/10.1.77.84:51032 remote=test89.hadoop/10.1.77.89:60020]. 59999 millis timeout left.
 at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:222)
 at java.util.concurrent.FutureTask.get(FutureTask.java:83)
 at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:1601)
 at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatch(HConnectionManager.java:1453)
 at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:936)
 at org.apache.hadoop.hbase.client.HTable.put(HTable.java:783)
 at com.dp.nebula.wormhole.plugins.common.HBaseClient.flush(HBaseClient.java:121)
 at com.dp.nebula.wormhole.plugins.writer.hbasewriter.HBaseWriter.commit(HBaseWriter.java:112)
 at com.dp.nebula.wormhole.engine.core.WriterThread.call(WriterThread.java:52)
 at com.dp.nebula.wormhole.engine.core.WriterThread.call(WriterThread.java:1)
 at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
 at java.util.concurrent.FutureTask.run(FutureTask.java:138)
 at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
 at java.lang.Thread.run(Thread.java:662)
Caused by: java.io.IOException: Call to test89.hadoop/10.1.77.89:60020 failed on local exception: java.io.InterruptedIOException: Interruped while waiting for IO on channel java.nio.channels.SocketChannel[connected local=/10.1.77.84:51032 remote=test89.hadoop/10.1.77.89:60020]. 59999 millis timeout left.
 at org.apache.hadoop.hbase.ipc.HBaseClient.wrapException(HBaseClient.java:1030)
 at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:999)
 at org.apache.hadoop.hbase.ipc.SecureRpcEngine$Invoker.invoke(SecureRpcEngine.java:104)
 at com.sun.proxy.$Proxy5.multi(Unknown Source)
 at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$3$1.call(HConnectionManager.java:1430)
 at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$3$1.call(HConnectionManager.java:1428)
 at org.apache.hadoop.hbase.client.ServerCallable.withoutRetries(ServerCallable.java:215)
 at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$3.call(HConnectionManager.java:1437)
 at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$3.call(HConnectionManager.java:1425)
 ... 5 more
Caused by: java.io.InterruptedIOException: Interruped while waiting for IO on channel java.nio.channels.SocketChannel[connected local=/10.1.77.84:51032 remote=test89.hadoop/10.1.77.89:60020]. 59999 millis timeout left.
2013-07-08 09:30:03,579 [pool-2-thread-6] com.dp.nebula.wormhole.engine.core.WriterThread.call(WriterThread.java:56) ERROR core.WriterThread - Exception occurs in writer thread!
com.dp.nebula.wormhole.common.WormholeException: java.io.IOException: org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@b7c96a9 closed
 at com.dp.nebula.wormhole.plugins.writer.hbasewriter.HBaseWriter.commit(HBaseWriter.java:114)
 at com.dp.nebula.wormhole.engine.core.WriterThread.call(WriterThread.java:52)
 at com.dp.nebula.wormhole.engine.core.WriterThread.call(WriterThread.java:1)
 at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
 at java.util.concurrent.FutureTask.run(FutureTask.java:138)
 at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
 at java.lang.Thread.run(Thread.java:662)
Caused by: java.io.IOException: org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@b7c96a9 closed
 at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:877)
 at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:857)
 at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:1568)
 at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatch(HConnectionManager.java:1453)
 at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:936)
 at org.apache.hadoop.hbase.client.HTable.put(HTable.java:783)
 at com.dp.nebula.wormhole.plugins.common.HBaseClient.flush(HBaseClient.java:121)
 at com.dp.nebula.wormhole.plugins.writer.hbasewriter.HBaseWriter.commit(HBaseWriter.java:112)
 ... 7 more


















Back in the client source, the HTable constructor:

public HTable(Configuration conf, final byte [] tableName)
  throws IOException {
    this.tableName = tableName;
    // closing this table will also clean up its pool and its connection
    this.cleanupPoolOnClose = this.cleanupConnectionOnClose = true;
    if (conf == null) {
      this.connection = null;
      return;
    }
    // shared, reference-counted connection from the cache shown above
    this.connection = HConnectionManager.getConnection(conf);
    this.configuration = conf;
    int maxThreads = conf.getInt("hbase.htable.threads.max", Integer.MAX_VALUE);
    if (maxThreads == 0) {
      maxThreads = 1; // is there a better default?
    }
    long keepAliveTime = conf.getLong("hbase.htable.threads.keepalivetime", 60);
    // (elided in the original excerpt) the constructor also creates this.pool here,
    // roughly a ThreadPoolExecutor(1, maxThreads, keepAliveTime, ...) over a SynchronousQueue
    ((ThreadPoolExecutor)this.pool).allowCoreThreadTimeOut(true);
    this.finishSetup();
  }

And HTable.close():

public void close() throws IOException {
  if (this.closed) {
    return;
  }
  flushCommits();
  if (cleanupPoolOnClose) {
    this.pool.shutdown();
  }
  if (cleanupConnectionOnClose) {
    if (this.connection != null) {
      // this is the shared HConnectionImplementation from the cache
      this.connection.close();
    }
  }
  this.closed = true;
}

Since every HTable built from the same conf shares one HConnectionImplementation, and cleanupConnectionOnClose is true, a thread that closes its HTable also calls close() on the shared connection; once the reference count drops to zero the connection really shuts down, and any thread still flushing fails — which would explain the "HConnectionImplementation@b7c96a9 closed" error in the trace above.

wormhole's reader and writer each start their own ThreadPoolExecutor, and the failure happens on the writer side during the flush phase, i.e. the final batched insert. My reader uses one HTable instance per thread and works fine, while the writer shares a singleton HBaseClient and uses a ThreadLocal to give every thread its own local HTable object; something in that arrangement is probably wrong. The simplest fix is to drop the singleton HBaseClient on the writer side, which should make the problem go away — but I never pinned down the root cause, which is annoying...
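As a sketch of that fix (my own illustration, not wormhole's actual code — the table name 't1' comes from the log above, while the column family 'cf', qualifier 'q', and pool size are made up), each writer thread owns its HTable for the whole job and closes it only after its final flush:

import java.io.IOException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class PerThreadWriter implements Runnable {
  private final Configuration conf;

  public PerThreadWriter(Configuration conf) { this.conf = conf; }

  @Override
  public void run() {
    HTable table = null;
    try {
      table = new HTable(conf, "t1"); // this thread's own instance, no ThreadLocal needed
      table.setAutoFlush(false);      // buffer puts client-side
      for (int i = 0; i < 1000; i++) {
        Put put = new Put(Bytes.toBytes("row-" + i));
        put.add(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes(i));
        table.put(put);
      }
      table.flushCommits();           // the final batched insert
    } catch (IOException e) {
      e.printStackTrace();
    } finally {
      // close only after THIS thread is completely done writing
      try { if (table != null) table.close(); } catch (IOException ignored) { }
    }
  }

  public static void main(String[] args) throws InterruptedException {
    Configuration conf = HBaseConfiguration.create();
    ExecutorService pool = Executors.newFixedThreadPool(4);
    for (int i = 0; i < 4; i++) {
      pool.submit(new PerThreadWriter(conf));
    }
    pool.shutdown();
    pool.awaitTermination(1, TimeUnit.HOURS); // drain all writers before exiting
  }
}

Because each table is closed in its owning thread's finally block, the shared connection's reference count cannot hit zero while another thread is mid-flush.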

Separately, a while ago I hit the following error while creating a snapshot in HBase:

hbase(main):004:0> snapshot 'booking', 'booking-snapshot-20140912'

ERROR: org.apache.hadoop.hbase.snapshot.HBaseSnapshotException: Snapshot { ss=booking-snapshot-20140912 table=booking type=FLUSH } had an error.  Procedure booking-snapshot-20140912 { waiting=[hbase1.data.cn,60020,1407930968832, hbase45.data.cn,60020,1408609189376, hbase23.data.cn,60020,1407930978740, hbase37.data.cn,60020,1408608587411, hbase46.data.cn,60020,1408609190515, hbase6.data.cn,60020,1407930958926, hbase44.data.cn,60020,1408609188252, hbase7.data.cn,60020,1407930960021, hbase49.data.cn,60020,1408609193897, hbase47.data.cn,60020,1408609191647, hbase21.data.cn,60020,1407930976874, hbase39.data.cn,60020,1408608669063, hbase13.data.cn,60020,1407930966976, hbase15.data.cn,60020,1407930969235, hbase19.data.cn,60020,1407930973863, hbase16.data.cn,60020,1407930971152, hbase18.data.cn,60020,1407930972762, hbase43.data.cn,60020,1408609187126, hbase12.data.cn,60020,1407930966365, hbase10.data.cn,60020,1407930963512, hbase3.data.cn,60020,1407930955378, hbase11.data.cn,60020,1407930965112, hbase24.data.cn,60020,1407930979654, hbase2.data.cn,60020,1407930954308, hbase9.data.cn,60020,1407930962354, hbase38.data.cn,60020,1408608663894, hbase40.data.cn,60020,1408608674240, hbase41.data.cn,60020,1408609184867, hbase4.data.cn,60020,1407930956670, hbase36.data.cn,60020,1408608406292, hbase17.data.cn,60020,1407930972505, hbase35.data.cn,60020,1408607982898, hbase20.data.cn,60020,1407930974993, hbase48.data.cn,60020,1408609192763, hbase22.data.cn,60020,1407930978159, hbase8.data.cn,60020,1407930961333] done=[] }
    at org.apache.hadoop.hbase.master.snapshot.SnapshotManager.isSnapshotDone(SnapshotManager.java:342)
    at org.apache.hadoop.hbase.master.HMaster.isSnapshotDone(HMaster.java:2905)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:40494)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2012)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:98)
    at org.apache.hadoop.hbase.ipc.FifoRpcScheduler$1.run(FifoRpcScheduler.java:73)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:744)
Caused by: org.apache.hadoop.hbase.errorhandling.ForeignException$ProxyThrowable via timer-java.util.Timer@69db0cb4:org.apache.hadoop.hbase.errorhandling.ForeignException$ProxyThrowable: org.apache.hadoop.hbase.errorhandling.TimeoutException: Timeout elapsed! Source:Timeout caused Foreign Exception Start:1410453067992, End:1410453127992, diff:60000, max:60000 ms
    at org.apache.hadoop.hbase.errorhandling.ForeignExceptionDispatcher.rethrowException(ForeignExceptionDispatcher.java:83)
    at org.apache.hadoop.hbase.master.snapshot.TakeSnapshotHandler.rethrowExceptionIfFailed(TakeSnapshotHandler.java:320)
    at org.apache.hadoop.hbase.master.snapshot.SnapshotManager.isSnapshotDone(SnapshotManager.java:332)
    ... 10 more
Caused by: org.apache.hadoop.hbase.errorhandling.ForeignException$ProxyThrowable: org.apache.hadoop.hbase.errorhandling.TimeoutException: Timeout elapsed! Source:Timeout caused Foreign Exception Start:1410453067992, End:1410453127992, diff:60000, max:60000 ms
    at org.apache.hadoop.hbase.errorhandling.TimeoutExceptionInjector$1.run(TimeoutExceptionInjector.java:70)
    at java.util.TimerThread.mainLoop(Timer.java:555)
    at java.util.TimerThread.run(Timer.java:505)

The cause of this error is the communication with the server timing out (note diff:60000, max:60000 in the trace), so the defaults of the following two parameters need to be raised:

1. hbase.snapshot.region.timeout
2. hbase.snapshot.master.timeoutMillis

Both default to 60000; the unit is milliseconds, i.e. one minute. Whenever the snapshot takes longer than that, the error above is reported.
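A minimal sketch of the corresponding hbase-site.xml change — the 300000 ms value is an illustrative choice of mine, not from the original post, and the HMaster and region servers must be restarted for these server-side settings to take effect:

<property>
  <name>hbase.snapshot.region.timeout</name>
  <value>300000</value> <!-- default 60000; allow up to 5 minutes -->
</property>
<property>
  <name>hbase.snapshot.master.timeoutMillis</name>
  <value>300000</value> <!-- default 60000; allow up to 5 minutes -->
</property>

After the restart, re-running snapshot 'booking', 'booking-snapshot-20140912' in the hbase shell should no longer hit the 60-second cutoff.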







On another occasion, the following problem came up while starting Hadoop:
2015-08-02 19:43:20,771 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
 /************************************************************
 STARTUP_MSG: Starting DataNode
 STARTUP_MSG:  host = slave1/192.168.198.21
 STARTUP_MSG:  args = []
 STARTUP_MSG:  version = 1.2.1
 STARTUP_MSG:  build = -r 1503152; compiled by 'mattf' on Mon Jul 22 15:23:09 PDT 2013
 STARTUP_MSG:  java = 1.7.0_79
 ************************************************************/
 2015-08-02 19:43:20,902 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
 2015-08-02 19:43:20,910 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
 2015-08-02 19:43:20,911 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
 2015-08-02 19:43:20,911 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started
 2015-08-02 19:43:21,033 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
 2015-08-02 19:43:21,036 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
 2015-08-02 19:43:30,237 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master/192.168.198.20:9000. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
 2015-08-02 19:43:31,239 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master/192.168.198.20:9000. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
 2015-08-02 19:43:31,247 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Call to master/192.168.198.20:9000 failed on local exception: java.net.NoRouteToHostException: 没有到主机的路由
at org.apache.hadoop.ipc.Client.wrapException(Client.java:1150)
at org.apache.hadoop.ipc.Client.call(Client.java:1118)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
at com.sun.proxy.$Proxy3.getProtocolVersion(Unknown Source)
at org.apache.hadoop.ipc.RPC.checkVersion(RPC.java:422)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:414)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:392)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:374)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:453)
at org.apache.hadoop.ipc.RPC.waitForProxy(RPC.java:335)
at org.apache.hadoop.ipc.RPC.waitForProxy(RPC.java:300)
at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:385)
at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:321)
at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1712)
at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1651)
at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1669)
at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1795)
at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1812)
 Caused by: java.net.NoRouteToHostException: 没有到主机的路由
 at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)

Analysis:

This kind of "没有到主机的路由" (no route to host) failure is routine. It is usually not that the namenode and datanode simply cannot ping each other by hostname — the odds of that are small, since everyone knows master and slaves must be able to communicate and checks that first. Far more likely, the firewall was never actually stopped, or its status could not be read correctly and it was wrongly assumed to be off.

Solutions:

(1) From the namenode host, ping each slave node by hostname (note: the slave's hostname). If the ping fails, the likely cause is that the namenode's /etc/hosts is missing hostname-to-IP mappings; add the missing entries.

(2) From each datanode host, ping the master node by hostname (again, the node's hostname). If the ping fails, the likely cause is that the datanode's /etc/hosts is missing the hostname-to-IP mapping; add it.

(3) Check whether the firewall on every machine is shut down (or at least opened for the ports we need; simplest is to shut the firewall off). How to check and stop the firewall on different Linux releases:

CentOS 6.0
Check firewall status: service iptables status
Stop the firewall: chkconfig iptables off    # do not start the firewall service at boot

CentOS 7.0 (firewalld is the default; if you have not switched back to iptables, use these commands)
Check firewall status: firewall-cmd --state
Stop the firewall: systemctl stop firewalld.service

Ubuntu (ubuntu-12.04-desktop-amd64)
Check firewall status: ufw status
Stop the firewall: ufw disable
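For (1) and (2), a hypothetical /etc/hosts covering the two machines that appear in the log above (hostnames and IPs taken from the STARTUP_MSG and retry lines; add one line per cluster node):

127.0.0.1       localhost
192.168.198.20  master
192.168.198.21  slave1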
