Troubleshooting a failed DataNode startup when dynamically adding a Hadoop DataNode
Published: 2019-06-21


Dynamically adding a DataNode to the cluster; the new node's hostname is node14.cn.

shell> hadoop-daemon.sh start datanode
shell> jps  # check whether the DataNode process is running
The DataNode process started and then disappeared immediately; the log contained the following:

2018-04-15 00:08:43,158 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2018-04-15 00:08:43,168 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: createNameNode []
2018-04-15 00:08:43,673 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2018-04-15 00:08:43,837 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2018-04-15 00:08:43,837 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2018-04-15 00:08:43,839 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: fs.defaultFS is node11.cn:9000
2018-04-15 00:08:44,138 WARN org.apache.hadoop.fs.FileSystem: "node11.cn:9000" is a deprecated filesystem name. Use "hdfs://node11.cn:9000/" instead.
2018-04-15 00:08:44,196 INFO org.apache.hadoop.hdfs.DFSUtil: Starting Web-server for hdfs at: http://node11.cn:9001
2018-04-15 00:08:44,266 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2018-04-15 00:08:44,273 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.namenode is not defined
2018-04-15 00:08:44,293 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2018-04-15 00:08:44,298 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
2018-04-15 00:08:44,298 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2018-04-15 00:08:44,298 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2018-04-15 00:08:44,374 INFO org.apache.hadoop.http.HttpServer2: Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
2018-04-15 00:08:44,377 INFO org.apache.hadoop.http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
2018-04-15 00:08:44,411 INFO org.apache.hadoop.http.HttpServer2: HttpServer.start() threw a non Bind IOException
java.net.BindException: Port in use: node11.cn:9001
        at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:892)
        at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:828)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:142)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:706)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:593)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:765)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:749)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1446)
Caused by: java.net.BindException: Cannot assign requested address
        at sun.nio.ch.Net.bind0(Native Method)
        at sun.nio.ch.Net.bind(Net.java:433)
        at sun.nio.ch.Net.bind(Net.java:425)
        at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
        at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
        at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:887)
        ... 8 more
2018-04-15 00:08:44,414 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system...
2018-04-15 00:08:44,415 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped.
2018-04-15 00:08:44,415 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
2018-04-15 00:08:44,415 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
java.net.BindException: Port in use: node11.cn:9001
        at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:892)
        at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:828)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:142)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:706)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:593)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:765)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:749)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1446)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1512)
Caused by: java.net.BindException: Cannot assign requested address
        at sun.nio.ch.Net.bind0(Native Method)
        at sun.nio.ch.Net.bind(Net.java:433)
        at sun.nio.ch.Net.bind(Net.java:425)
        at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
        at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
        at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:887)
        ... 8 more
2018-04-15 00:08:44,423 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2018-04-15 00:08:44,426 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at node14.cn/192.168.74.114
************************************************************/
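A note the original post does not make explicit: this is a NameNode startup log being produced on node14.cn, and it fails binding node11.cn:9001 with "Cannot assign requested address" — which means the new host tried to bind an address it does not own, consistent with a config copied unmodified from the NameNode host. A hedged diagnosis sketch (generic commands, assumed available; adjust names and ports to your cluster):

```shell
# Not from the original post -- a rough way to confirm an address-ownership
# bind failure. "Cannot assign requested address" means the IP being bound
# does not belong to this host.
hostname                 # which host are we actually on?
hostname -I || true      # addresses this host owns (flag availability varies)
getent hosts node11.cn   # what does the configured name resolve to?
# If the resolved address is not among this host's own addresses, any attempt
# to bind node11.cn:9001 here must fail with exactly this exception.
```

If the resolution points at another machine, the fix is in the config (or in /etc/hosts), not on the port itself.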

Solution:

Delete the contents of the dfs directory, then rerun the following commands:
shell> rm -rf dfs/
shell> hadoop-daemon.sh start datanode
shell> yarn-daemon.sh start nodemanager
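Why deleting dfs/ helps (my reading, not stated in the post): a DataNode that exits right after startup often carries a stale clusterID in its data directory left over from an earlier format, and the NameNode rejects it. A hedged check before wiping anything — the paths below are hypothetical, substitute your own dfs.datanode.data.dir and dfs.namenode.name.dir:

```shell
# Hypothetical paths; read them from hdfs-site.xml on your cluster.
grep clusterID /var/hadoop/dfs/data/current/VERSION   # on the DataNode
grep clusterID /var/hadoop/dfs/name/current/VERSION   # on the NameNode
# If the two IDs differ, removing the DataNode's data dir (as above) lets the
# daemon re-register and adopt the NameNode's clusterID on next start.
```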

Refresh the NameNode's list of nodes:

shell> hdfs dfsadmin -refreshNodes
shell> start-balancer.sh
The new DataNode was added successfully.
Distribute data onto the new DataNode host:
shell> hadoop balancer -threshold 10  # threshold controls how evenly disk usage is spread; the smaller the value, the more balanced the nodes' disk utilization
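To make the threshold concrete (illustrative numbers, not from the post): the balancer treats a node as balanced when its DFS usage is within the threshold, in percentage points, of the cluster-wide average. A sketch of that criterion:

```shell
# Illustrative only: cluster-average DFS usage 40%, -threshold 10
# => a node counts as balanced when its usage lies in [30%, 50%].
avg=40; threshold=10
low=$((avg - threshold)); high=$((avg + threshold))
for node_used in 28 35 52; do
  if [ "$node_used" -ge "$low" ] && [ "$node_used" -le "$high" ]; then
    echo "node at ${node_used}% is balanced"
  else
    echo "node at ${node_used}% needs rebalancing"  # balancer moves blocks
  fi
done
```

With a smaller threshold the band around the average narrows, so more nodes fall outside it and more data gets moved.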

Reposted from: https://blog.51cto.com/maoxiaoxiong/2103543
