IsoAlgo3d - A PCF 3D Viewer for Desktop, Tablet and Smart phone http://doc.okbase.net/eryar/archive/265583.html eryar 2017/11/19 14:46:26

IsoAlgo3d - A PCF 3D Viewer for Desktop, Tablet and Smart phone

eryar@163.com

Abstract. IsoAlgo3d visualizes a PCF in 3D and exports it as an HTML file. Because the export uses WebGL, any device that supports HTML5 can browse the fully dimensioned 3D piping model described by the PCF directly, without installing any software or plug-in, whether it is a desktop PC, a tablet or a smart phone. Browsing the piping model directly in 3D is more intuitive and avoids the cases where a 2D isometric drawing is hard to read, such as loop pipelines or complex lines split across several sheets, which makes on-site construction easier.

Key Words. IsoAlgo3d, Piping Isometric Drawing, 3D Piping Model, WebGL, Tablet, Smart Phone

PCF is short for Piping Component File, a de facto standard for piping data exchange. It is commonly used for automated isometric generation, material control and import into pipe stress analysis.

IsoAlgo can read a PCF and generate the isometric drawing automatically.

For more info about IsoAlgo, please visit:

http://www.cppblog.com/eryar/archive/2014/04/27/IsoAlgo.html

IsoAlgo3d can read a PCF, show the pipeline in 3D, and then export it with full dimensions and material tags to 3D PDF and HTML. Because the HTML output uses WebGL, any device that supports HTML5 (WebGL) can show the 3D pipeline model directly, without installing any software or plug-in. 

 

 If your device supports HTML5, you will see the following 3D model:

Weight and Centre of Gravity Calculation in Piping Design CAD Systems http://doc.okbase.net/eryar/archive/265582.html eryar 2017/11/19 14:46:07

Weight and Centre of Gravity Calculation in Piping Design CAD Systems

eryar@163.com

Abstract. Piping design CAD systems all provide a weight and centre-of-gravity (CoG) calculation. The CoG data is mainly used in pallet-based shipbuilding to arrange lifting fittings according to the centre of gravity, while the weight information is mainly used for procurement. This article introduces the weight/CoG functions of the related software and the principle behind the calculation, and finally verifies it by computing the weight and centre of gravity of a piping model with OpenCASCADE.

Key Words. CoG, CentreOfMass, Piping CAD, Piping Design

1.Introduction

Both ship design systems and plant design systems provide piping design functions. In the shipbuilding process the biggest job is building the hull, followed by fabricating and installing the ship's piping systems.

In the early years of the P.R.C., pipe fabrication and installation had to wait until the hull was basically formed and the machinery was largely in place. Pipe fitters took the schematic diagrams and the detailed pipe-routing drawings to the site, made templates from 6 mm or 8 mm iron wire, bent the wire into the required shape, brought the templates back to the workshop to cut material, bent the pipes on a bending machine to match the templates, drew the pipe fittings from stores, and then went back on board for a trial fit. During the trial fit the fittings were tack-welded to the pipes; the pipes were then removed again, taken back to the workshop for grinding, pressure testing and surface treatment, and finally installed on board. This approach is called on-site templating. It has a long building cycle and heavy labour for the pipe fitters; the pipe routing is often unreasonable and clashes easily with other disciplines such as cables and ventilation ducts, causing a lot of rework and wasting manpower and material; and because the on-site trial-fit environment is worse than the workshop, pipe quality suffers.

To shorten the building cycle and improve shipbuilding quality, innovation was needed to raise efficiency, even looking only at the ship piping systems.

In the 1960s to 1970s, the hull background, the machinery outlines and the pipe connections were drawn at 1:1 scale on a wooden floor and the pipe systems were lofted there. The calculation tool of the day was the slide rule. This method needed a large work area and the loftsmen had to squat on the floor, so the labour intensity was very high.

In the 1970s to 1980s, the hull background, machinery outlines and pipe connections were drawn at 1:10 scale on long polyester film on a work table, and the pipe systems were lofted there. Compared with the previous method this needed a smaller work area and reduced the loftsmen's labour.

In the 1980s to 1990s, polyester film was laid on drawing boards and comprehensive lofting was done by zones at 1:20 scale. Comprehensive lofting means that the lofting design of the three major disciplines (hull, electrical and machinery) is carried out together on a small drawing board and coordinated as a whole, so that many problems that would otherwise appear during production are solved on the drawing board. The loftsmen provided construction drawings and pallets for each building stage according to the building policy and the requirements of management and the production workshops. Since computer lofting was not yet used, some good design requirements were hard to realize within the limited design period; this was only the initial stage of production design.

After the 1990s the larger shipyards across the country all switched to computer lofting, entering the pipe routing data, fitting data and pipeline data from the design drawings into the computer. Computer-aided design solved the huge amount of calculation needed for pipe-bending programs and the huge volume of drawings to be produced, greatly shortening the production design cycle and improving design quality.

Today shipyards and design institutes all do piping design with computer-aided design systems. The progression of building methods above shows how innovation, using the computer as a powerful tool, has changed the way of production and raised efficiency and quality.

2.CoG in PDMS/AVEVA Marine

PDMS/AVEVA Marine provides a function for calculating the weight, the centre of gravity (Weight and Centre of Gravity, CoG) and the surface area of a model, as shown in the figure below:

The weight/CoG function answers a need that arises from pallet-based and module-based shipbuilding: with weight and CoG data it is easy to organize the lifting of pallets and modules.

Computing the weight and CoG in PDMS/AVEVA Marine is very simple: just add the SITE, ZONE, PIPE or BRANCH elements to be evaluated and the weight and centre of gravity are calculated.

Before computing weights in PDMS/AVEVA Marine, the link between the component catalogue and the properties database must be completed; mainly, the mass of each fitting and the linear density of each pipe have to be defined.

With the mass of the fittings and the linear density of the pipes available, how is the weight and centre of gravity of the piping actually calculated?

3.CoM in OpenCASCADE

OpenCASCADE can compute the global properties of geometric entities: the mass, the centre of mass (CentreOfMass), the moments of inertia and so on of curves, surfaces or solids. So OpenCASCADE can be used to compute the weight and centre of gravity.

Let us analyse the weight/CoG calculation for a piping model. An object on or near the earth's surface is acted on by gravity. Because the distance to the earth's centre is very large, the gravitational forces on the individual elements of the object can be treated as a system of parallel forces. This parallel force system has a resultant, whose magnitude is called the weight of the object. The line of action of the resultant has a property: no matter how the object is oriented relative to the earth, the line of action always passes through one fixed point of the object, called its centre of gravity. The position of the centre of gravity relative to the object does not change with the object's position in space.

Suppose there are N point masses in space, located at (x1, y1, z1), (x2, y2, z2), ..., (xn, yn, zn), with masses m1, m2, ..., mn. From mechanics, the centre of gravity of this system of point masses is:
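For point masses this is simply the mass-weighted average of the coordinates:

$$\bar{x}=\frac{\sum_{i=1}^{n} m_i x_i}{\sum_{i=1}^{n} m_i},\qquad \bar{y}=\frac{\sum_{i=1}^{n} m_i y_i}{\sum_{i=1}^{n} m_i},\qquad \bar{z}=\frac{\sum_{i=1}^{n} m_i z_i}{\sum_{i=1}^{n} m_i}$$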

Based on this formula and the characteristics of a piping model, the following assumptions can be made:

• treat each pipe fitting as a point mass, located at the fitting's position in space, with the fitting's mass;

• multiply the length of each pipe segment by the linear density to obtain its mass, and treat the segment as a point mass located at its mid-point;

Below, OpenCASCADE classes are used to compute the mass and centre of mass of the piping system; multiplying by the gravitational acceleration then gives the weight and centre of gravity. The weight and CoG of a simple piping model, shown in the figure below, are computed. From bottom to top the coordinates and masses of the model are:

• Flange: Position X 26104mm Y -11441mm Z 19246.184mm, weight 19.815kg

• Tube (Tubi): start point: Position X 26104mm Y -11441mm Z 19316.184mm

   end point: Position X 26104mm Y -11441mm Z 21554.039mm

   linear density: 0.0315 kg/mm

• Tee: Position X 26104mm Y -11441mm Z 21770.039mm, weight 11kg

• Tube (Tubi): start point: Position X 26104mm Y -11441mm Z 21986.039mm

   end point: Position X 26104mm Y -11441mm Z 22828.5mm

   linear density: 0.0315 kg/mm

• Flange: Position X 26104mm Y -11441mm Z 22898.5mm, weight 19.815kg

• Gasket: Position X 26104mm Y -11441mm Z 22898.5mm, weight 1.14kg

The total mass computed in AVEVA Marine is 148.80 kg,

and the centre of gravity is X 26104.00mm Y -11441.00mm Z 21074.10mm.

The calculation code in OpenCASCADE is as follows:

/*
Copyright(C) 2017 Shing Liu(eryar@163.com)
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files(the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and / or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions :
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
*/
#include <iostream>

#include <ElCLib.hxx>
#include <gce_MakeLin.hxx>
#include <GProp_GProps.hxx>
#include <GProp_PGProps.hxx>
#include <GProp_CelGProps.hxx>
#pragma comment(lib, "TKernel.lib")
#pragma comment(lib, "TKMath.lib")
#pragma comment(lib, "TKG2d.lib")
#pragma comment(lib, "TKG3d.lib")
#pragma comment(lib, "TKGeomBase.lib")
// Centre of Mass of pipeline model.
void testCom(void)
{
    GProp_GProps aTool;
    GProp_PGProps aCompProps;
    // add component as point.
    // add flange: Position X 26104mm Y -11441mm Z 19246.184mm, weight 19.815kg
    aCompProps.AddPoint(gp_Pnt(26104.0, -11441.0, 19246.184), 19.815);
    // add tee: Position X 26104mm Y -11441mm Z 21770.039mm, weight 11kg
    aCompProps.AddPoint(gp_Pnt(26104.0, -11441.0, 21770.039), 11.0);
    // add flange: Position X 26104mm Y -11441mm Z 22898.5mm, weight 19.815kg
    aCompProps.AddPoint(gp_Pnt(26104.0, -11441.0, 22898.5), 19.815);
    // add gasket: Position X 26104mm Y -11441mm Z 22898.5mm, weight 1.14kg
    aCompProps.AddPoint(gp_Pnt(26104.0, -11441.0, 22898.5), 1.14);
    aTool.Add(aCompProps);
    // add two pipe as line curve.
    // add tubi: start point: Position X 26104mm Y -11441mm Z 19316.184mm
    //           end point:   Position X 26104mm Y -11441mm Z 21554.039mm
    //           linear density: 0.0315 kg/mm
    gp_Pnt aPs1(26104.0, -11441.0, 19316.184);
    gp_Pnt aPe1(26104.0, -11441.0, 21554.039);
    gp_Lin aLine1 = gce_MakeLin(aPs1, aPe1).Value();
    GProp_CelGProps aTubiProp1;
    aTubiProp1.Perform(aLine1, ElCLib::Parameter(aLine1, aPs1), ElCLib::Parameter(aLine1, aPe1));
    aTool.Add(aTubiProp1, 0.0315);
    // add tubi: start point: Position X 26104mm Y -11441mm Z 21986.039mm
    //           end point:   Position X 26104mm Y -11441mm Z 22828.5mm
    //           linear density: 0.0315 kg/mm
    gp_Pnt aPs2(26104.0, -11441.0, 21986.039);
    gp_Pnt aPe2(26104.0, -11441.0, 22828.5);
    gp_Lin aLine2 = gce_MakeLin(aPs2, aPe2).Value();
    GProp_CelGProps aTubiProp2;
    aTubiProp2.Perform(aLine2, ElCLib::Parameter(aLine2, aPs2), ElCLib::Parameter(aLine2, aPe2));
    aTool.Add(aTubiProp2, 0.0315);
    gp_Pnt aPc = aTool.CentreOfMass();
    std::cout << "Mass: " << aTool.Mass() << std::endl;
    std::cout << "CentreOfMass: " << aPc.X() << ", " << aPc.Y() << ", " << aPc.Z() << std::endl;
}
int main(int argc, char* argv[])
{
    testCom();
    return 0;
}

The calculation result is shown in the figure below:

It agrees with the result computed in AVEVA Marine.

4.Conclusion

Both ship design CAD systems and plant design CAD systems provide piping design aids, and both include a function for computing the weight and centre of gravity of a piping model. Based on the centre-of-gravity formula, the fittings in the piping model are simplified to point masses and the formula is applied directly. Finally the centre of gravity and total mass were computed in OpenCASCADE, and the result agrees with AVEVA Marine.

OpenCASCADE can also compute the centre of gravity, mass, moments of inertia and so on of arbitrary curves and surfaces. How are those capabilities implemented? That question is left to the reader.
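As a starting point for exploring that question, here is a minimal sketch (my own illustration, not code from the article; it assumes a standard OpenCASCADE installation, and the box dimensions are arbitrary) that asks the BRepGProp package for the volume properties of a solid:

#include <BRepPrimAPI_MakeBox.hxx>
#include <BRepGProp.hxx>
#include <GProp_GProps.hxx>
#include <TopoDS_Shape.hxx>
#include <gp_Pnt.hxx>
#include <iostream>

int main()
{
    // Build a simple solid (100 x 50 x 20) to stand in for any TopoDS_Shape.
    TopoDS_Shape aBox = BRepPrimAPI_MakeBox(100.0, 50.0, 20.0).Shape();

    // Ask OpenCASCADE for the global volume properties of the shape.
    GProp_GProps aProps;
    BRepGProp::VolumeProperties(aBox, aProps);

    gp_Pnt aCom = aProps.CentreOfMass();
    std::cout << "Mass (volume at unit density): " << aProps.Mass() << std::endl;
    std::cout << "CentreOfMass: " << aCom.X() << ", " << aCom.Y() << ", " << aCom.Z() << std::endl;
    return 0;
}

The surface and curve variants (BRepGProp::SurfaceProperties and BRepGProp::LinearProperties) are used in the same way.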

Working with these OpenCASCADE functions also gives a feel for its programming style.

5.References

1. Department of Mathematics, Tongji University. Advanced Mathematics (Vol. II). Higher Education Press

2. Shan Huizu, Xie Chuanfeng. Engineering Mechanics. Higher Education Press

OpenCASCADE BRepMesh - 2D Delaunay Triangulation http://doc.okbase.net/eryar/archive/265581.html eryar 2017/11/19 14:45:54

OpenCASCADE BRepMesh - 2D Delaunay Triangulation

eryar@163.com

Abstract. The OpenCASCADE package BRepMesh can compute a Delaunay triangulation with Watson's algorithm. It can be used in the 2D plane or on a surface by meshing in the UV parametric space. This blog focuses on using the triangulation tool to triangulate 2D points.

Key Words. BRepMesh, Delaunay Triangulation, 

1.Introduction

Triangulation of point sets is mainly used to visualize geometric data. Every modelling kernel provides triangulation, which generates the mesh data of a model and hands it to a graphics API such as OpenGL for display. In OpenCASCADE the class BRepMesh_IncrementalMesh is used to triangulate a TopoDS_Shape into display data. As its name suggests, it uses an incremental algorithm that keeps refining the triangulation until the resulting triangles meet the required accuracy.

https://www.opencascade.com/content/brepmeshincremental-mesh-algorithm
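For reference, a minimal sketch of that usage (my own illustration, assuming a standard OCCT installation; the sphere and the deflection values are arbitrary): mesh a shape with BRepMesh_IncrementalMesh and read the face triangulations back with BRep_Tool:

#include <BRepPrimAPI_MakeSphere.hxx>
#include <BRepMesh_IncrementalMesh.hxx>
#include <BRep_Tool.hxx>
#include <TopExp_Explorer.hxx>
#include <TopoDS.hxx>
#include <TopLoc_Location.hxx>
#include <Poly_Triangulation.hxx>
#include <iostream>

int main()
{
    // Any TopoDS_Shape will do; a sphere of radius 10 is used as an example.
    TopoDS_Shape aShape = BRepPrimAPI_MakeSphere(10.0).Shape();

    // Incremental meshing: 0.1 linear deflection, 0.5 radian angular deflection.
    BRepMesh_IncrementalMesh aMesher(aShape, 0.1, Standard_False, 0.5);

    // The triangulation is stored on each face and retrieved via BRep_Tool.
    Standard_Integer aTriangleCount = 0;
    for (TopExp_Explorer anExp(aShape, TopAbs_FACE); anExp.More(); anExp.Next())
    {
        TopLoc_Location aLocation;
        Handle(Poly_Triangulation) aTriangulation =
            BRep_Tool::Triangulation(TopoDS::Face(anExp.Current()), aLocation);
        if (!aTriangulation.IsNull())
        {
            aTriangleCount += aTriangulation->NbTriangles();
        }
    }
    std::cout << "Triangles: " << aTriangleCount << std::endl;
    return 0;
}

Tightening the deflection values produces more and smaller triangles, which is exactly the accuracy/size trade-off described above.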

OpenCASCADE's BRepMesh itself only triangulates 2D point sets, so for an arbitrary surface the incremental algorithm is applied in the surface's UV parametric space until the triangulation meets the display accuracy, and the UV points are finally mapped back to the 3D model space. The key to the triangulation is therefore finding reasonable sampling points, i.e. meeting the display accuracy with as few points as possible.

This article shows how to use BRepMesh in OpenCASCADE to triangulate a 2D point set, and finally visualizes the result in the Draw Test Harness so the triangulation can be inspected immediately.

2.Code Example

The code below uses BRepMesh to triangulate a 2D point set directly:

/*
Copyright(C) 2017 Shing Liu(eryar@163.com)
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files(the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and / or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions :
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
*/
#include <fstream>

#include <math_BullardGenerator.hxx>
#include <BRepMesh.hxx>
#include <BRepMesh_Delaun.hxx>
#include <BRepMesh_DataStructureOfDelaun.hxx>
#pragma comment(lib, "TKernel.lib")
#pragma comment(lib, "TKMath.lib")
#pragma comment(lib, "TKG2d.lib")
#pragma comment(lib, "TKG3d.lib")
#pragma comment(lib, "TKGeomBase.lib")
#pragma comment(lib, "TKGeomAlgo.lib")
#pragma comment(lib, "TKBRep.lib")
#pragma comment(lib, "TKTopAlgo.lib")
#pragma comment(lib, "TKMesh.lib")
void testMesh(Standard_Integer thePointCount)
{
    std::ofstream aTclFile("d:/mesh.tcl");
    math_BullardGenerator aRandom;
    BRepMesh::Array1OfVertexOfDelaun aVertices(1, thePointCount);
    for (Standard_Integer i = aVertices.Lower(); i <= aVertices.Upper(); ++i)
    {
        gp_XY aPoint;
        aPoint.SetX(aRandom.NextReal() * aVertices.Upper());
        aPoint.SetY(aRandom.NextReal() * aVertices.Upper());
        BRepMesh_Vertex aVertex(aPoint, i, BRepMesh_Frontier);
        aVertices.SetValue(i, aVertex);
        // output point to Draw Test Harness.
        aTclFile << "vpoint p" << i << " " << aPoint.X() << " " << aPoint.Y() << " 0" << std::endl;
    }
    BRepMesh_Delaun aDelaunay(aVertices);
    Handle(BRepMesh_DataStructureOfDelaun) aMeshStructure = aDelaunay.Result();
    const BRepMesh::MapOfInteger& aTriangles = aMeshStructure->ElementsOfDomain();
    BRepMesh::MapOfInteger::Iterator aTriangleIt(aTriangles);
    for (aTriangleIt; aTriangleIt.More(); aTriangleIt.Next())
    {
        const Standard_Integer aTriangleId = aTriangleIt.Key();
        const BRepMesh_Triangle& aCurrentTriangle = aMeshStructure->GetElement(aTriangleId);
        if (aCurrentTriangle.Movability() == BRepMesh_Deleted)
        {
            continue;
        }
        Standard_Integer aTriangleVerts[3];
        aMeshStructure->ElementNodes(aCurrentTriangle, aTriangleVerts);
        // output line to Draw Test Harness.
        aTclFile << "vline l" << aTriangleId << "1 p" << aTriangleVerts[0] << " p" << aTriangleVerts[1] << std::endl;
        aTclFile << "vline l" << aTriangleId << "2 p" << aTriangleVerts[1] << " p" << aTriangleVerts[2] << std::endl;
        aTclFile << "vline l" << aTriangleId << "3 p" << aTriangleVerts[2] << " p" << aTriangleVerts[0] << std::endl;
    }
    aTclFile.close();
}
int main(int argc, char* argv[])
{
    testMesh(500);
    return 0;
}

 

The program triangulates a randomly generated point set and writes the result to the file mesh.tcl on drive D. Loading mesh.tcl in the Draw Test Harness (with the Tcl source command) shows the triangulation, as in the figure below:

3.Conclusion

BRepMesh can triangulate a 2D point set and is easy to use: just pass the point set to the class BRepMesh_Delaun.

Generating a Draw Test Harness script from the triangulation result is a convenient way to visualize it; the same approach can be used in your own programs to display modelling data in the Draw Test Harness.

If the point set contains holes that have to be removed from the triangulation, OpenCASCADE presumably provides that capability as well; it remains to be explored.

Is CSDN up to something???? http://doc.okbase.net/zdhsoft/archive/265580.html zdhsoft 2017/11/19 14:45:48

I sometimes upload resources to share in case others need them. They used to cost 1 C-coin, and some were 0, but recently things seem to have changed.

First, there is no longer a 0-coin option; the minimum is 2 C-coins.

Second, as the number of downloads grows, the required C-coins grow geometrically.

A C-coin is worth roughly 1 RMB. For example, my git file below went from 2 C-coins to 12 C-coins after being downloaded twice, i.e. 12 RMB in total.

http://download.csdn.net/download/zdhsoft/10042921


What is CSDN trying to do? Grab money?

[lanproxy Introduction] http://doc.okbase.net/gaojingsong/archive/265579.html gaojingsong 2017/11/19 14:43:55

lanproxy is an intranet penetration tool that proxies personal computers and servers on a LAN out to the public network. It currently only supports TCP traffic forwarding, but it can carry any protocol that runs on top of TCP (accessing intranet web sites, debugging local payment interfaces, SSH access, remote desktop, ...). Services of this kind on the market include Oray (花生壳), TeamViewer, GoToMyCloud and so on, but to use a third party's public server you have to pay that third party, the services come with all kinds of restrictions, and since the traffic passes through the third party they are also a data-security concern. 


 

 

 

Server configuration

The server configuration files live in the conf directory; edit config.properties:

server.bind=0.0.0.0

# port used to communicate with the proxy clients
server.port=4900

# SSL-related settings
server.ssl.enable=true
server.ssl.bind=0.0.0.0
server.ssl.port=4993
server.ssl.jksPath=test.jks
server.ssl.keyStorePassword=123456
server.ssl.keyManagerPassword=123456
server.ssl.needsClientAuth=false

# web-based online configuration management
config.server.bind=0.0.0.0
config.server.port=8090
config.admin.username=admin
config.admin.password=admin

Proxy configuration: open http://ip:8090 and log in with the username and password configured above.

 

 

 

Client configuration

The client configuration files live in the conf directory; edit config.properties:

# must match the key entered when the client was created in the proxy-server admin console; if you have no server of your own, log in at https://lanproxy.org/ and create a client to obtain a key
client.key=
ssl.enable=true
ssl.jksPath=test.jks
ssl.keyStorePassword=123456

# fill in the real proxy-server address here; keep the default if you have no server of your own, otherwise replace it with your own proxy-server (IP) address
server.host=lp.thingsglobal.org

# the proxy-server SSL port defaults to 4993 and the plain port to 4900
# use the SSL port here when ssl.enable=true, and the plain port when ssl.enable=false
server.port=4993
A Big Collection of Artificial Intelligence Resources http://doc.okbase.net/jjfat/archive/265578.html jjfat 2017/11/19 14:43:48

Today we have collected some tutorials, books, video talks and papers about artificial intelligence (AI); we hope you like them.

 


 

Online tutorials

  • MIT Artificial Intelligence video lectures – MIT's artificial intelligence course
  • Intro to Artificial Intelligence – learn the fundamentals of AI. A course taught by Peter Norvig
  • EdX Artificial Intelligence – this course teaches the basic concepts and techniques for designing AI computer systems.
  • Planning in Artificial Intelligence – planning is one of the foundational parts of AI systems. In this course you will learn the basic algorithms a robot needs to carry out a sequence of actions.
  • Artificial Intelligence for Robotics – this course teaches the basic methods for building AI, including probabilistic inference, planning and search, localization, tracking and control, all centred on robotics.
  • Machine Learning – basic supervised and unsupervised machine learning algorithms
  • Neural Networks for Machine Learning – algorithms and practical experience with artificial neural networks
  • Stanford Statistical Learning – Introductory course on machine learning focusing on: linear and polynomial regression, logistic regression and linear discriminant analysis; cross-validation and the bootstrap, model selection and regularization methods (ridge and lasso); nonlinear models, splines and generalized additive models; tree-based methods, random forests and boosting; support-vector machines.

 

AI books

 

Programming

 

Principles of AI

 

Free reading

 

Code

  • AIMA Lisp Source Code – Common Lisp source code for the book "Artificial Intelligence: A Modern Approach".

 

Videos / talks

 

Machine learning

  • Deep Learning: Methods and Applications – a free book from Microsoft Research.
  • Neural Networks and Deep Learning – Neural networks and deep learning currently provide the best solutions to many problems in image recognition, speech recognition, and natural language processing. This book will teach you the core concepts behind neural networks and deep learning
  • Machine Learning: A Probabilistic Perspective – this book gives a thorough introduction to machine learning
  • Deep Learning – Yoshua Bengio, Ian Goodfellow and Aaron Courville put together this currently free (and draft version) book on deep learning. The book is kept up-to-date and covers a wide range of topics in depth (up to and including sequence-to-sequence learning).

 

Others

 


 

If you have AI-related material of your own, share it so everyone can learn together; recommendations are very welcome.

This is igeekbar, and every geek is welcome to drop by. If you have suggestions or comments, don't hold back; feel free to leave me a message.

An AI Is Playing Ranked Honor of Kings, Backed by Tencent's New 10-Billion Open Strategy http://doc.okbase.net/jjfat/archive/265577.html jjfat 2017/11/19 14:43:20

If you are lucky enough, you may run into Tencent's AI in this season's ranked matches of Honor of Kings (王者荣耀).

 

A long-circulating rumour has been confirmed for the first time. At the Tencent Global Partner Conference, Tencent vice president Lin Songtao revealed that an AI is learning to play Honor of Kings.

The AI should sound familiar: it is Fine Art (绝艺), the Tencent Go AI that dominated the Go world earlier this year. In March, Fine Art became the first AI awarded "10 dan" in Go and beat Ke Jie ten games in a row, and it then won the 10th UEC Cup computer Go tournament.

Fine Art, now hard at work on Honor of Kings, will eventually challenge the champion team.

That was only a side note of the Tencent Global Partner Conference. At the conference Tencent announced that its open strategy has entered a new stage, unveiling two major open strategies: content and AI.

 

Content

Tencent COO Ren Yuxin revealed at the conference that Tencent entered the content industry in 2003 and, after more than a decade, has built a pan-entertainment content ecosystem.

China Literature (阅文集团), which went through its Hong Kong IPO that very day, is one example. (Its share price jumped 90% after the open.)

Tencent's focus this year is to build a new Penguin Media Platform (企鹅号). Content submitted to 企鹅号 once is then distributed through every Tencent channel, such as WeChat Top Stories, QQ Kandian, Tiantian Kuaibao and QQ Browser. Tencent says this is a distribution system with 10 billion impressions per day.

 

Next year Tencent will also put 10 billion RMB behind 企鹅号 to help it upgrade its content ecosystem, specifically:

Head companies: help open up the whole industry chain through IP incubation, dedicated investment and similar means

Mid-tier companies: integrate Tencent's online and offline resources, use the traffic of the open platform and the capabilities of its incubation and makerspace programmes, and support them by domain and by region

Long-tail companies: raise the revenue share

On top of that there are another 10 billion worth of industry resources and other support. In short, it comes down to the chart below.

 

AI

 

"AI in all." Tencent COO Ren Yuxin said at the conference that while many companies describe their future as All in AI, Tencent's strategy is to make AI ubiquitous.

Tencent has set up three major labs to build the AI ecosystem together: AI Lab, the YouTu Lab and WeChat AI. Through its open platform, Tencent will also connect the three labs with outside resources.

 

Ren Yuxin said Tencent treats AI as a strategic priority, and stressed that Tencent's AI technology will not only serve Tencent itself but also be opened up to serve the whole industry.

 

Besides its own AI R&D, Tencent supports many AI start-ups at home and abroad through investment, its AI accelerator and other channels, and says it has placed bets on every AI track.

According to Lin Songtao, Tencent's AI capabilities have already landed in eight kinds of scenarios: social, content, games, healthcare, retail, finance, security and translation.

In addition, in his opening remarks Ren Yuxin also elaborated on "smart retail".

"Tencent is not about to march into e-commerce in a big way, so merchants need not worry about their customers being siphoned off," Ren Yuxin said. Tencent will provide strong scenario, big-data and AI support together with its full product line, helping merchants build tailored solutions and making brick-and-mortar stores data-driven and intelligent.

 

 

java.io.IOException Broken pipe: a fix for ClientAbortException: java.io.IOException: Broken http://doc.okbase.net/gaoyaohuachina/archive/265576.html gaoyaohuachina 2017/11/19 14:43:13

Today a colleague from technical support reported that a customer's service had stopped working and asked for urgent help, so I logged into the server remotely to investigate.

    Looking at the tomcat log of the data-collection service, I habitually jumped to the end to check for exceptions. Sure enough there were several kinds, but by far the most common was this one:

 
24-Nov-2016 09:54:21.116 SEVERE [http-nio-8081-Acceptor-0] org.apache.tomcat.util.net.NioEndpoint$Acceptor.run Socket accept failed
 java.io.IOException: Too many open files
    at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
    at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:241)
    at org.apache.tomcat.util.net.NioEndpoint$Acceptor.run(NioEndpoint.java:688)
    at java.lang.Thread.run(Thread.java:745)

    "Too many open files": the problem looked obvious. The file-descriptor limit had been exceeded, so files could not be opened and network connections could not be created, which in turn triggers other problems. Surely ulimit had not been tuned, so I checked the ulimit settings:

 
[root@sdfassd logs]# ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 62819
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 65535
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 62819
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

 

     The open files limit was 65535, so it had already been tuned. Maybe tomcat and the other services were started before ulimit was raised? Possible, in which case a restart would fix it. So I restarted all the services, everything ran normally, the reports showed data again after a while, I told support the problem was solved, and moved on to other cases.

    Less than 20 minutes later, support said the reports had no data again. So I opened the tomcat log of the data-collection application once more and found a pile of exceptions, all the same one:

 
24-Nov-2016 09:54:24.574 WARNING [http-nio-18088-exec-699] org.apache.catalina.core.StandardHostValve.throwable Exception Processing ErrorPage[exceptionType=java.lang.Throwable, location=/views/error/500.jsp]
 org.apache.catalina.connector.ClientAbortException: java.io.IOException: Broken pipe
    at org.apache.catalina.connector.OutputBuffer.realWriteBytes(OutputBuffer.java:393)
    at org.apache.tomcat.util.buf.ByteChunk.flushBuffer(ByteChunk.java:426)
    at org.apache.catalina.connector.OutputBuffer.doFlush(OutputBuffer.java:342)
    at org.apache.catalina.connector.OutputBuffer.close(OutputBuffer.java:295)
    at org.apache.catalina.connector.Response.finishResponse(Response.java:453)
    at org.apache.catalina.core.StandardHostValve.throwable(StandardHostValve.java:378)
    at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:174)
    at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:79)
    at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:610)
    at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:610)
    at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:88)
    at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:537)
    at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1085)
    at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:658)
    at org.apache.coyote.http11.Http11NioProtocol$Http11ConnectionHandler.process(Http11NioProtocol.java:222)
    at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1556)
    at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:1513)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
    at java.lang.Thread.run(Thread.java:745)


    There were a huge number of these. From the message, tomcat's connector hit a Broken pipe exception while performing a write. The connector is the part of tomcat that handles network requests, so could the network be at fault? But then why did only writes fail while reads were fine? To judge whether it was a network problem I used wget to hit one of the server's endpoints; it hung for a long time with no response, whereas normally the response is immediate. So the network was not the cause; the server itself was the problem. I then checked the current TCP connection states:

 
[root@sdfassd logs]# netstat -n | awk '/^tcp/ {++state[$NF]} END {for(key in state) print key,"\t",state[key]}'
CLOSE_WAIT        3853
TIME_WAIT         40
ESTABLISHED       285
LAST_ACT          6


    There were 3853 connections in CLOSE_WAIT, which is far from normal. It means the client closed the connection first and the server never performed its own close, so the server side stays in CLOSE_WAIT. Without tuning the operating system's keepalive settings this state is kept for two hours by default. Checking the system settings:

 
[root@sdfassd logs]# sysctl -a |grep keepalive
net.ipv4.tcp_keepalive_time = 7200
net.ipv4.tcp_keepalive_probes = 9
net.ipv4.tcp_keepalive_intvl = 75

    Indeed 7200 seconds. That explains why the first look at the tomcat log ended with the "Too many open files" exception: within those two hours the CLOSE_WAIT connections piled up and pushed the number of open file descriptors past the 65535 limit;

    and that state, in turn, should be the result of the broken pipe exceptions. So what causes the broken pipe? Why did the probe close the connection while the data-collection server did not? The exception is thrown by tomcat's connector, and tomcat would hardly forget to call close on a connection, so a bug in the program was ruled out, and I could not think of a cause;

    so I fetched the logs of the probes that upload data to the collection server, and they were full of a single exception:

 
2016-11-24 16:27:36,217 [TingYun Harvest Service 1] 166 WARN  - Error occurred sending metric data to TingYun. There can be intermittent connection failures. Please wait for a short period of time: java.net.SocketTimeoutException: Read timed out
java.net.SocketTimeoutException: Read timed out
    at java.net.SocketInputStream.socketRead0(Native Method) ~[na:1.7.0_60]
    at java.net.SocketInputStream.read(SocketInputStream.java:152) ~[na:1.7.0_60]
    at java.net.SocketInputStream.read(SocketInputStream.java:122) ~[na:1.7.0_60]
    at com.tingyun.agent.libs.org.apache.http.impl.io.SessionInputBufferImpl.streamRead(SourceFile:136) ~[tingyun-agent-java.jar:2.1.3]
        .................

    All read-timeout exceptions, so the problem was clear: the probe side timed out on read and dropped the connection while the data-collection server was still processing the request; not knowing the probe had already disconnected, the server finished processing, sent the result back to the probe, and got a broken pipe;

    in other words, the exception happens when the client times out on read and closes the connection, and the server then writes data to the connection the client has already dropped: that write raises the broken pipe exception!
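    The mechanism is easy to reproduce outside Tomcat. Below is a minimal POSIX/C++ sketch (my own illustration, not code from this incident): one end of a connection is closed, as the probe did after its read timeout, and the other end's next write fails with EPIPE, which is exactly the "Broken pipe" error:

#include <sys/socket.h>
#include <unistd.h>
#include <csignal>
#include <cerrno>
#include <cstdio>
#include <cstring>

int main()
{
    // Ignore SIGPIPE so that write() reports EPIPE instead of killing the process.
    signal(SIGPIPE, SIG_IGN);

    int fds[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, fds) != 0)
        return 1;

    // The "client" gives up and closes its end, like the probe after its read timeout.
    close(fds[1]);

    // The "server" finishes its work and writes the response to the dead connection.
    const char msg[] = "response";
    ssize_t n = write(fds[0], msg, sizeof(msg));
    if (n < 0)
        printf("write failed: %s\n", strerror(errno));   // prints "write failed: Broken pipe"

    close(fds[0]);
    return 0;
}

    Over a real TCP connection the first write after the peer has gone away may still appear to succeed until the RST comes back, which is why Tomcat typically only sees the error when it flushes the response buffer.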

 

    The probe's read timeout is two minutes, so why did the server take that long to respond? I dumped the tomcat thread stacks with jstack and analysed them; it turned out that some time-consuming code was executed while holding a lock, blocking the other threads (for confidentiality the code is not shown here);

Reflections After 10+ Years of Work http://doc.okbase.net/男人50/archive/265575.html 男人50 2017/11/19 14:43:06 I have now been knocking around the IT industry for more than ten years.
From traditional-industry software to internet software,
from the internet to the mobile internet,
from the mobile internet to big data and cloud computing,
and now AI has to be learned as well.
Not a moment's rest; it really is learn as long as you live.
I love technology, but my colleagues have gone from the post-70s generation to the post-80s,
from the post-80s to the post-90s,
and from the post-90s to the post-95s.
In a few more years my little niece, born after 2000, will be working beside me.
A few years after that my own child, born after 2010, will be a colleague too, and might even become your boss.
That is how technology is: the colleagues around you just keep getting younger.
So do we keep doing technology? Move into management? Start a company? Or change careers?
In fact many people choose management and just hang on in one company.
Software companies are flat; technology still has the final say, and management is rather hollow.
Doing technology has its own endless pleasure: quietly writing code instead of sitting in meetings all day. Meetings are annoying, you gain nothing, and the fights between departments are a pure waste of life.
Start a business? It is really exhausting; you need a good idea and funding, and the water is deep.
Changing careers works too: go back to farming, raise pigs, contract a big piece of land, take the low-end route. You might even earn quite a bit.
In short there are n roads in front of you; it depends on how you look at them and what you choose.
I don't know what you all think; feel free to leave a comment. I am just as lost.
(Complete, personally tested) docker hadoop 2.7.1 http://doc.okbase.net/knight_black_bob/archive/265574.html knight_black_bob 2017/11/19 14:42:50

 

 

0. Preparation

Download centos

[root@bogon soft]# docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
docker.io/centos    latest              d123f4e55e12        7 days ago          196.6 MB

 

 1. Create centos-ssh-root

1.1 Create the centos-ssh-root Dockerfile

Note:

vim is installed here first because I prefer vim to vi,

    and which is installed as well, because the hadoop format step later needs it.

 

# use an existing OS image as the base
FROM docker.io/centos

# image author
MAINTAINER baoyou curiousby

# install the openssh-server and sudo packages, and set sshd's UsePAM parameter to no
RUN yum install -y openssh-server sudo
RUN sed -i 's/UsePAM yes/UsePAM no/g' /etc/ssh/sshd_config
# install openssh-clients
RUN yum  install -y openssh-clients

RUN yum install -y vim
RUN yum install -y which

# add the test user root with password root, and add it to sudoers
RUN echo "root:root" | chpasswd
RUN echo "root   ALL=(ALL)       ALL" >> /etc/sudoers
# the next two lines are special: on centos6 they are required, otherwise sshd in the resulting container cannot be logged into
RUN ssh-keygen -t dsa -f /etc/ssh/ssh_host_dsa_key
RUN ssh-keygen -t rsa -f /etc/ssh/ssh_host_rsa_key

# start the sshd service and expose port 22
RUN mkdir /var/run/sshd
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]

 1.2 Build

docker build -t baoyou/centos-ssh-root .

 

1.3 Build log

[root@bogon soft]# mkdir centos-ssh-root
[root@bogon soft]# ls
centos-ssh-root
[root@bogon soft]# cd centos-ssh-root/
[root@bogon centos-ssh-root]# ls
[root@bogon centos-ssh-root]# vim Dockerfile
[root@bogon centos-ssh-root]# docker build -t baoyou/centos-ssh-root .
Sending build context to Docker daemon  2.56 kB
Step 1 : FROM docker.io/centos
 ---> d123f4e55e12
Step 2 : MAINTAINER baoyou curiousby
 ---> Running in 4935d9a8417c
 ---> a526aade20a6
Removing intermediate container 4935d9a8417c
Step 3 : RUN yum install -y openssh-server sudo
 ---> Running in f0c0f9d82f34
Loaded plugins: fastestmirror, ovl
Determining fastest mirrors
 * base: mirrors.btte.net
 * extras: mirrors.btte.net
 * updates: mirrors.btte.net
Resolving Dependencies
--> Running transaction check
---> Package openssh-server.x86_64 0:7.4p1-13.el7_4 will be installed
--> Processing Dependency: openssh = 7.4p1-13.el7_4 for package: openssh-server-7.4p1-13.el7_4.x86_64
--> Processing Dependency: fipscheck-lib(x86-64) >= 1.3.0 for package: openssh-server-7.4p1-13.el7_4.x86_64
--> Processing Dependency: libwrap.so.0()(64bit) for package: openssh-server-7.4p1-13.el7_4.x86_64
--> Processing Dependency: libfipscheck.so.1()(64bit) for package: openssh-server-7.4p1-13.el7_4.x86_64
---> Package sudo.x86_64 0:1.8.19p2-11.el7_4 will be installed
--> Running transaction check
---> Package fipscheck-lib.x86_64 0:1.4.1-6.el7 will be installed
--> Processing Dependency: /usr/bin/fipscheck for package: fipscheck-lib-1.4.1-6.el7.x86_64
---> Package openssh.x86_64 0:7.4p1-13.el7_4 will be installed
---> Package tcp_wrappers-libs.x86_64 0:7.6-77.el7 will be installed
--> Running transaction check
---> Package fipscheck.x86_64 0:1.4.1-6.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package                Arch        Version                  Repository    Size
================================================================================
Installing:
 openssh-server         x86_64      7.4p1-13.el7_4           updates      458 k
 sudo                   x86_64      1.8.19p2-11.el7_4        updates      1.1 M
Installing for dependencies:
 fipscheck              x86_64      1.4.1-6.el7              base          21 k
 fipscheck-lib          x86_64      1.4.1-6.el7              base          11 k
 openssh                x86_64      7.4p1-13.el7_4           updates      509 k
 tcp_wrappers-libs      x86_64      7.6-77.el7               base          66 k

Transaction Summary
================================================================================
Install  2 Packages (+4 Dependent packages)

Total download size: 2.1 M
Installed size: 6.9 M
Downloading packages:
warning: /var/cache/yum/x86_64/7/base/packages/fipscheck-1.4.1-6.el7.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID f4a80eb5: NOKEY
Public key for fipscheck-1.4.1-6.el7.x86_64.rpm is not installed
Public key for openssh-7.4p1-13.el7_4.x86_64.rpm is not installed
--------------------------------------------------------------------------------
Total                                              404 kB/s | 2.1 MB  00:05     
Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
Importing GPG key 0xF4A80EB5:
 Userid     : "CentOS-7 Key (CentOS 7 Official Signing Key) <security@centos.org>"
 Fingerprint: 6341 ab27 53d7 8a78 a7c2 7bb1 24c6 a8a7 f4a8 0eb5
 Package    : centos-release-7-4.1708.el7.centos.x86_64 (@CentOS)
 From       : /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : fipscheck-1.4.1-6.el7.x86_64                                 1/6 
  Installing : fipscheck-lib-1.4.1-6.el7.x86_64                             2/6 
  Installing : openssh-7.4p1-13.el7_4.x86_64                                3/6 
  Installing : tcp_wrappers-libs-7.6-77.el7.x86_64                          4/6 
  Installing : openssh-server-7.4p1-13.el7_4.x86_64                         5/6 
  Installing : sudo-1.8.19p2-11.el7_4.x86_64                                6/6 
  Verifying  : fipscheck-lib-1.4.1-6.el7.x86_64                             1/6 
  Verifying  : tcp_wrappers-libs-7.6-77.el7.x86_64                          2/6 
  Verifying  : fipscheck-1.4.1-6.el7.x86_64                                 3/6 
  Verifying  : openssh-7.4p1-13.el7_4.x86_64                                4/6 
  Verifying  : openssh-server-7.4p1-13.el7_4.x86_64                         5/6 
  Verifying  : sudo-1.8.19p2-11.el7_4.x86_64                                6/6 

Installed:
  openssh-server.x86_64 0:7.4p1-13.el7_4     sudo.x86_64 0:1.8.19p2-11.el7_4    

Dependency Installed:
  fipscheck.x86_64 0:1.4.1-6.el7      fipscheck-lib.x86_64 0:1.4.1-6.el7       
  openssh.x86_64 0:7.4p1-13.el7_4     tcp_wrappers-libs.x86_64 0:7.6-77.el7    

Complete!
 ---> b9b2d9d28e91
Removing intermediate container f0c0f9d82f34
Step 4 : RUN sed -i 's/UsePAM yes/UsePAM no/g' /etc/ssh/sshd_config
 ---> Running in da4de0cafd82
 ---> 4af5db8b4cef
Removing intermediate container da4de0cafd82
Step 5 : RUN yum  install -y openssh-clients
 ---> Running in 68a2fdd224d1
Loaded plugins: fastestmirror, ovl
Loading mirror speeds from cached hostfile
 * base: mirrors.btte.net
 * extras: mirrors.btte.net
 * updates: mirrors.btte.net
Resolving Dependencies
--> Running transaction check
---> Package openssh-clients.x86_64 0:7.4p1-13.el7_4 will be installed
--> Processing Dependency: libedit.so.0()(64bit) for package: openssh-clients-7.4p1-13.el7_4.x86_64
--> Running transaction check
---> Package libedit.x86_64 0:3.0-12.20121213cvs.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package             Arch       Version                       Repository   Size
================================================================================
Installing:
 openssh-clients     x86_64     7.4p1-13.el7_4                updates     654 k
Installing for dependencies:
 libedit             x86_64     3.0-12.20121213cvs.el7        base         92 k

Transaction Summary
================================================================================
Install  1 Package (+1 Dependent package)

Total download size: 746 k
Installed size: 2.8 M
Downloading packages:
--------------------------------------------------------------------------------
Total                                              384 kB/s | 746 kB  00:01     
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : libedit-3.0-12.20121213cvs.el7.x86_64                        1/2 
  Installing : openssh-clients-7.4p1-13.el7_4.x86_64                        2/2 
  Verifying  : libedit-3.0-12.20121213cvs.el7.x86_64                        1/2 
  Verifying  : openssh-clients-7.4p1-13.el7_4.x86_64                        2/2 

Installed:
  openssh-clients.x86_64 0:7.4p1-13.el7_4                                       

Dependency Installed:
  libedit.x86_64 0:3.0-12.20121213cvs.el7                                       

Complete!
 ---> 5a68ae327b7b
Removing intermediate container 68a2fdd224d1
Step 6 : RUN echo "root:root" | chpasswd
 ---> Running in 2ae8f5835434
 ---> e5b5e9580789
Removing intermediate container 2ae8f5835434
Step 7 : RUN echo "root   ALL=(ALL)       ALL" >> /etc/sudoers
 ---> Running in b415558a8bc6
 ---> ca06f821d868
Removing intermediate container b415558a8bc6
Step 8 : RUN ssh-keygen -t dsa -f /etc/ssh/ssh_host_dsa_key
 ---> Running in 7255f91f09b9
Enter passphrase (empty for no passphrase): Enter same passphrase again: Generating public/private dsa key pair.
Your identification has been saved in /etc/ssh/ssh_host_dsa_key.
Your public key has been saved in /etc/ssh/ssh_host_dsa_key.pub.
The key fingerprint is:
SHA256:uAAlx5f2WnMrlIQy3JPw9Zz/9HnD7MVvblLFaIZzKQE root@4935d9a8417c
The key's randomart image is:
+---[DSA 1024]----+
|  .o+o +. E.     |
|   +=.O..o ..    |
|  .  =.+ .+  o + |
|   .   .* ..+ * o|
|    . .+So ..*. .|
|     .... .  oooo|
|      .  .    .*=|
|              o B|
|               *o|
+----[SHA256]-----+
 ---> 36317be611b0
Removing intermediate container 7255f91f09b9
Step 9 : RUN ssh-keygen -t rsa -f /etc/ssh/ssh_host_rsa_key
 ---> Running in 1b3495d71562
Enter passphrase (empty for no passphrase): Enter same passphrase again: Generating public/private rsa key pair.
Your identification has been saved in /etc/ssh/ssh_host_rsa_key.
Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub.
The key fingerprint is:
SHA256:QksGOHmxudCZg1cIDHGJvhnTNhnULXvtdFKbKNoDh9w root@4935d9a8417c
The key's randomart image is:
+---[RSA 2048]----+
|o=+*+oo          |
|..*+oX .   .     |
|. +o% O . o o    |
| + B @ E = +     |
|  * o O S o      |
| o   . + .       |
|        .        |
|                 |
|                 |
+----[SHA256]-----+
 ---> d53cd418ff85
Removing intermediate container 1b3495d71562
Step 10 : RUN mkdir /var/run/sshd
 ---> Running in d3e71c08fd28
 ---> 995e7295beea
Removing intermediate container d3e71c08fd28
Step 11 : EXPOSE 22
 ---> Running in ff7e2cc7c67f
 ---> 3dfc9a6efd6a
Removing intermediate container ff7e2cc7c67f
Step 12 : CMD /usr/sbin/sshd -D
 ---> Running in 81478a7d9251
 ---> 45ef8b6b8254
Removing intermediate container 81478a7d9251
Successfully built 45ef8b6b8254
[root@bogon centos-ssh-root]# docker images
REPOSITORY               TAG                 IMAGE ID            CREATED              SIZE
baoyou/centos-ssh-root   latest              45ef8b6b8254        About a minute ago   303.5 MB
docker.io/centos         latest              d123f4e55e12        7 days ago           196.6 MB

 

 2. Create centos-ssh-root-java

 2.1 Create the centos-ssh-root-java Dockerfile

FROM baoyou/centos-ssh-root
ADD jdk-7u79-linux-x64.tar.gz  /usr/local/
RUN mv /usr/local/jdk1.7.0_79 /usr/local/jdk1.7
ENV JAVA_HOME /usr/local/jdk1.7
ENV PATH $JAVA_HOME/bin:$PATH

 

 2.2 Build

 

docker build -t baoyou/centos-ssh-root-java .
 

 

2.3 Build log

 

[root@bogon centos-ssh-root-java]# vim Dockerfile
[root@bogon centos-ssh-root-java]# docker build -t baoyou/centos-ssh-root-java .
Sending build context to Docker daemon 153.5 MB
Step 1 : FROM baoyou/centos-ssh-root
 ---> 45ef8b6b8254
Step 2 : ADD jdk-7u79-linux-x64.tar.gz /usr/local/
 ---> 82d01ceb0da3
Removing intermediate container 32af4ac32299
Step 3 : RUN mv /usr/local/jdk1.7.0_79 /usr/local/jdk1.9
 ---> Running in 2209bd55cef1
 ---> b44bad4a8dcb
Removing intermediate container 2209bd55cef1
Step 4 : ENV JAVA_HOME /usr/local/jdk1.9
 ---> Running in 6f938ad9bfda
 ---> 71e298d66485
Removing intermediate container 6f938ad9bfda
Step 5 : ENV PATH $JAVA_HOME/bin:$PATH
 ---> Running in e89392b2b788
 ---> 0213bbd4d724
Removing intermediate container e89392b2b788
Successfully built 0213bbd4d724
 

 

 3. Create centos-ssh-root-java-hadoop

 3.1 Create the centos-ssh-root-java-hadoop Dockerfile

 

FROM baoyou/centos-ssh-root-java
ADD hadoop-2.7.1.tar.gz /usr/local
RUN mv /usr/local/hadoop-2.7.1 /usr/local/hadoop
ENV HADOOP_HOME /usr/local/hadoop
ENV PATH $HADOOP_HOME/bin:$PATH
 

 

3.2 Build

 

docker build -t baoyou/centos-ssh-root-java-hadoop .
 

 

 3.3 Build log

 

[root@bogon centos-ssh-root-java-hadoop]# docker build -t baoyou/centos-ssh-root-java-hadoop .
Sending build context to Docker daemon 547.1 MB
Step 1 : FROM baoyou/centos-ssh-root-java
 ---> 652fc71facfd
Step 2 : ADD hadoop-2.7.1.tar.gz /usr/local
 ---> 55951fc3fdc1
Removing intermediate container f0912988a29b
Step 3 : RUN mv /usr/local/hadoop-2.7.1 /usr/local/hadoop
 ---> Running in d8afac1e59d9
 ---> 56d463beea25
Removing intermediate container d8afac1e59d9
Step 4 : ENV HADOOP_HOME /usr/local/hadoop
 ---> Running in 27ed5fad8981
 ---> 526d79c016fc
Removing intermediate container 27ed5fad8981
Step 5 : ENV PATH $HADOOP_HOME/bin:$PATH
 ---> Running in c238304b499c
 ---> 284dcc575add
Removing intermediate container c238304b499c
Successfully built 284dcc575add
 

 

 3.4 docker images 

 

[root@bogon centos-ssh-root-java-hadoop]# docker images
REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
baoyou/centos-ssh-root-java-hadoop   latest              966719de6484        7 seconds ago       1.385 GB
baoyou/centos-ssh-root-java          latest              0213bbd4d724        42 minutes ago      916.1 MB
baoyou/centos-ssh-root               latest              45ef8b6b8254        46 minutes ago      303.5 MB
docker.io/centos                     latest              d123f4e55e12        7 days ago          196.6 MB
 

 

 3.5 Start the hadoop containers

 

docker run --name hadoop0 --hostname hadoop0 -d -P -p 50070:50070 -p 8088:8088 baoyou/centos-ssh-root-java-hadoop

docker run --name hadoop1 --hostname hadoop1 -d -P  baoyou/centos-ssh-root-java-hadoop

docker run --name hadoop2 --hostname hadoop2 -d -P  baoyou/centos-ssh-root-java-hadoop
 

 

 3.6 docker ps

 

[root@bogon centos-ssh-root-java-hadoop]# docker ps
CONTAINER ID        IMAGE                                COMMAND               CREATED             STATUS              PORTS                                                                     NAMES
8f73f52e8cc1        baoyou/centos-ssh-root-java-hadoop   "/usr/sbin/sshd -D"   7 seconds ago       Up 6 seconds        0.0.0.0:32770->22/tcp                                                     hadoop2
4d553dbf7fbc        baoyou/centos-ssh-root-java-hadoop   "/usr/sbin/sshd -D"   15 seconds ago      Up 14 seconds       0.0.0.0:32769->22/tcp                                                     hadoop1
134a18b42c1a        baoyou/centos-ssh-root-java-hadoop   "/usr/sbin/sshd -D"   53 seconds ago      Up 51 seconds       0.0.0.0:8088->8088/tcp, 0.0.0.0:50070->50070/tcp, 0.0.0.0:32768->22/tcp   hadoop0
 

 

3.7 Give the containers fixed IPs

3.7.1 Download pipework

Download from: https://github.com/jpetazzo/pipework.git

 

3.7.2 Install pipework

 

unzip pipework-master.zip
mv pipework-master pipework
cp -rp pipework/pipework /usr/local/bin/ 
 

 

3.7.3 Install the bridge-utils package

 

yum -y install bridge-utils
 

 

 

3.7.4 brctl show (check whether virbr0 exists; if not, create it)

 

[root@bogon baoyou]# brctl show
bridge name	bridge id		STP enabled	interfaces
docker0		8000.024292a9ad4a	no		veth4dc65ee
							veth646bc14
							veth8e3aab5
virbr0		8000.16d3ac819517	yes		veth1pl3187
 

 

  ifconfig

virbr0 192.168.122.1

 

 

[root@bogon centos-ssh-root-java-hadoop]# ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 0.0.0.0
        ether 02:42:d7:fb:9c:a1  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.206.241  netmask 255.255.255.0  broadcast 192.168.206.255
        inet6 fe80::67a3:3777:46a8:8a2f  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:d2:b3:c2  txqueuelen 1000  (Ethernet)
        RX packets 1606  bytes 851375 (831.4 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 757  bytes 90712 (88.5 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

virbr0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.122.1  netmask 255.255.255.0  broadcast 192.168.122.255
        ether 52:54:00:88:cb:23  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
 

 

 virbr0 already exists on my machine; if it does not exist, create it yourself:

brctl addbr virbr0
ip link set dev virbr0 up
ip addr add 192.168.122.1/24 dev virbr0

 

 I have little networking background, so this part may be hard to follow.

 

3.7.5 Assign IPs

pipework virbr0 hadoop0 192.168.122.10/24
pipework virbr0 hadoop1 192.168.122.11/24
pipework virbr0 hadoop2 192.168.122.12/24

 

3.7.6 Edit the hosts file on the host machine

[root@bogon centos-ssh-root-java-hadoop]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.122.10   hadoop0
192.168.122.11    hadoop1
192.168.122.12    hadoop2

 

 

3.7.7 Test: ping 192.168.122.10

[root@bogon centos-ssh-root-java-hadoop]# ping hadoop0
PING hadoop0 (192.168.122.10) 56(84) bytes of data.
64 bytes from hadoop0 (192.168.122.10): icmp_seq=1 ttl=64 time=0.098 ms
64 bytes from hadoop0 (192.168.122.10): icmp_seq=2 ttl=64 time=0.055 ms
64 bytes from hadoop0 (192.168.122.10): icmp_seq=3 ttl=64 time=0.091 ms

 If output like the above appears, the IP assignment has succeeded.

  Test ssh into the containers; it works:

ssh hadoop0
ssh hadoop1
ssh hadoop2

 

 

 

3.8 Edit the hosts file inside the hadoop0, hadoop1 and hadoop2 containers

Create a sshhosts file locally:

[root@bogon centos-ssh-root-java-hadoop]# cat sshhosts 
 
127.0.0.1	localhost
::1	localhost ip6-localhost ip6-loopback
fe00::0	ip6-localnet
ff00::0	ip6-mcastprefix
ff02::1	ip6-allnodes
ff02::2	ip6-allrouters

172.17.0.2      hadoop0
172.17.0.2      hadoop0.bridge
172.17.0.3      hadoop1
172.17.0.3      hadoop1.bridge
172.17.0.4      hadoop2
172.17.0.4      hadoop2.bridge

192.168.122.10 hadoop0
192.168.122.11 hadoop1
192.168.122.12 hadoop2

  

 Copy it to hadoop0, hadoop1 and hadoop2:

scp sshhosts  root@hadoop0:/etc/hosts
scp sshhosts  root@hadoop1:/etc/hosts
scp sshhosts  root@hadoop2:/etc/hosts

 

 3.9 Passwordless SSH between the containers

3.9.1 Enter hadoop0

docker exec  -it  hadoop0 bash

3.9.2 Set up passwordless keys

Run the following on hadoop0:
cd  ~
mkdir .ssh
cd .ssh
ssh-keygen -t rsa   (just press Enter at every prompt)
ssh-copy-id -i localhost
ssh-copy-id -i hadoop0
ssh-copy-id -i hadoop1
ssh-copy-id -i hadoop2
Run the following on hadoop1 (ssh hadoop1):
cd  ~
cd .ssh
ssh-keygen -t rsa   (just press Enter at every prompt)
ssh-copy-id -i localhost
ssh-copy-id -i hadoop1
Run the following on hadoop2 (ssh hadoop2):
cd  ~
cd .ssh
ssh-keygen -t rsa   (just press Enter at every prompt)
ssh-copy-id -i localhost
ssh-copy-id -i hadoop2

 3.9.3 Test

From hadoop0, test ssh to hadoop0, hadoop1 and hadoop2.

 

3.10 (Key step) hadoop configuration

3.10.1 Go to the hadoop configuration directory

cd /usr/local/hadoop/etc/hadoop/

3.10.2 Edit the configuration files

3.10.2.1  vim  hadoop-env.sh

 

export JAVA_HOME=/usr/local/jdk1.7
 

 

3.10.2.2 vim core-site.xml

 

<configuration>
   <property>
                <name>fs.defaultFS</name>
                <value>hdfs://hadoop0:9000</value>
        </property>
        <property>
                <name>hadoop.tmp.dir</name>
                <value>/usr/local/hadoop/tmp</value>
        </property>
         <property>
                 <name>fs.trash.interval</name>
                 <value>1440</value>
        </property>
</configuration>
  

 

 3.10.2.3 vim hdfs-site.xml

 

<configuration>
 <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.permissions</name>
        <value>false</value>
    </property>
</configuration>
 3.10.2.4 vim yarn-site.xml

 

 

<configuration>

<!-- Site specific YARN configuration properties -->
  <property>
                <name>yarn.nodemanager.aux-services</name>
                <value>mapreduce_shuffle</value>
        </property>
        <property> 
                <name>yarn.log-aggregation-enable</name> 
                <value>true</value> 
        </property>
</configuration>
 

 

3.10.2.5 vim mapred-site.xml

 

<configuration>
<property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
 

 

 3.10.2.6 Test single-node pseudo-distributed mode

3.10.2.6.1 Go to the hadoop directory

 

cd /usr/local/hadoop
3.10.2.6.2 hdfs format 

 

 

 bin/hdfs namenode -format
 3.10.2.6.3 Format log

 

 

[root@hadoop0 hadoop]# cd /usr/local/hadoop
[root@hadoop0 hadoop]# bin/hdfs namenode -format
17/11/14 11:20:21 INFO namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = hadoop0/172.17.0.2
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.4.1
STARTUP_MSG:   classpath = /usr/local/hadoop/etc/hadoop:/usr/local/hadoop/share/hadoop/common/lib/activation-1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/common/lib/avro-1.7.4.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-collections-3.2.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-configuration-1.6.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-digester-1.8.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-el-1.0.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-httpclient-3.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-math3-3.1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-net-3.1.jar:/usr/local/hadoop/share/hadoop/common/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/common/lib/hadoop-annotations-2.4.1.jar:/usr/local/hadoop/share/hadoop/common/lib/hadoop-auth-2.4.1.jar:/usr/local/hadoop/share/hadoop/common/lib/httpclient-4.2.5.jar:/usr/local/hadoop/share/hadoop/common/lib/httpcore-4.2.5.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-jaxrs-1.8.8.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-xc-1.8.8.jar:/usr/local/hadoop/share/hadoop/common/lib/jasper-compiler-5.5.23.jar:/usr/local/hadoop/share/hadoop/common/lib/jasper-runtime-5.5.23.jar:/usr/local/hadoop/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/usr/local/hadoop/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-json-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/jets3t-0.9.0.jar:/usr/local/hadoop/share/hadoop/common/lib/jettison-1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/common/lib/jsch-0.1.42.jar:/usr/local/hadoop/share/hadoop/common/lib/jsp-api-2.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jsr305-1.3.9.jar:/usr/local/hadoop/share/hadoop/common/lib/junit-4.8.2.jar:/usr/local/hadoop/share/hadoop/common/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/common/lib/mockito-all-1.8.5.jar:/usr/local/hadoop/share/hadoop/common/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/common/lib/paranamer-2.3.jar:/usr/local/hadoop/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/common/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/common/lib/slf4j-api-1.7.5.jar:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar:/usr/local/hadoop/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop/share/hadoop/common/lib/stax-api-1.0-2.jar:/usr/local/hadoop/share/hadoop/common/lib/xmlenc-0.52.jar:/usr/local/hadoop/share/hadoop/common/lib/xz-1.0.jar:/usr/loc
al/hadoop/share/hadoop/common/lib/zookeeper-3.4.5.jar:/usr/local/hadoop/share/hadoop/common/hadoop-common-2.4.1-tests.jar:/usr/local/hadoop/share/hadoop/common/hadoop-common-2.4.1.jar:/usr/local/hadoop/share/hadoop/common/hadoop-nfs-2.4.1.jar:/usr/local/hadoop/share/hadoop/hdfs:/usr/local/hadoop/share/hadoop/hdfs/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-el-1.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jasper-runtime-5.5.23.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jsp-api-2.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jsr305-1.3.9.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.4.1-tests.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.4.1.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-nfs-2.4.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/activation-1.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/aopalliance-1.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-collections-3.2.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-httpclient-3.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guice-3.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-jaxrs-1.8.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-xc-1.8.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/javax.inject-1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-client-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-json-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jettison-1.1.jar:/usr/local/hadoop/share/hadoop/yarn/
lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jline-0.9.94.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jsr305-1.3.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/zookeeper-3.4.5.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-api-2.4.1.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.4.1.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.4.1.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-client-2.4.1.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-common-2.4.1.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.4.1.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-common-2.4.1.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.4.1.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.4.1.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-tests-2.4.1.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.4.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/guice-3.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/hadoop-annotations-2.4.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/hamcrest-core-1.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/javax.inject-1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/junit-4.10.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.4.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.4.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.4.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.4.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.4.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.4.1-tests.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.4.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.4.1.jar:/
usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.4.1.jar:/usr/local/hadoop/contrib/capacity-scheduler/*.jar
STARTUP_MSG:   build = http://svn.apache.org/repos/asf/hadoop/common -r 1604318; compiled by 'jenkins' on 2014-06-21T05:43Z
STARTUP_MSG:   java = 1.7.0_79
************************************************************/
17/11/14 11:20:21 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
17/11/14 11:20:21 INFO namenode.NameNode: createNameNode [-format]
Java HotSpot(TM) 64-Bit Server VM warning: You have loaded library /usr/local/hadoop/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
17/11/14 11:20:22 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Formatting using clusterid: CID-b04e9bd9-1f09-4d72-a469-87baec5795dc
17/11/14 11:20:23 INFO namenode.FSNamesystem: fsLock is fair:true
17/11/14 11:20:23 INFO namenode.HostFileManager: read includes:
HostSet(
)
17/11/14 11:20:23 INFO namenode.HostFileManager: read excludes:
HostSet(
)
17/11/14 11:20:23 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
17/11/14 11:20:23 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
17/11/14 11:20:23 INFO util.GSet: Computing capacity for map BlocksMap
17/11/14 11:20:23 INFO util.GSet: VM type       = 64-bit
17/11/14 11:20:23 INFO util.GSet: 2.0% max memory 889 MB = 17.8 MB
17/11/14 11:20:23 INFO util.GSet: capacity      = 2^21 = 2097152 entries
17/11/14 11:20:23 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
17/11/14 11:20:23 INFO blockmanagement.BlockManager: defaultReplication         = 1
17/11/14 11:20:23 INFO blockmanagement.BlockManager: maxReplication             = 512
17/11/14 11:20:23 INFO blockmanagement.BlockManager: minReplication             = 1
17/11/14 11:20:23 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
17/11/14 11:20:23 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks  = false
17/11/14 11:20:23 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
17/11/14 11:20:23 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
17/11/14 11:20:23 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
17/11/14 11:20:23 INFO namenode.FSNamesystem: fsOwner             = root (auth:SIMPLE)
17/11/14 11:20:23 INFO namenode.FSNamesystem: supergroup          = supergroup
17/11/14 11:20:23 INFO namenode.FSNamesystem: isPermissionEnabled = false
17/11/14 11:20:23 INFO namenode.FSNamesystem: HA Enabled: false
17/11/14 11:20:23 INFO namenode.FSNamesystem: Append Enabled: true
17/11/14 11:20:24 INFO util.GSet: Computing capacity for map INodeMap
17/11/14 11:20:24 INFO util.GSet: VM type       = 64-bit
17/11/14 11:20:24 INFO util.GSet: 1.0% max memory 889 MB = 8.9 MB
17/11/14 11:20:24 INFO util.GSet: capacity      = 2^20 = 1048576 entries
17/11/14 11:20:24 INFO namenode.NameNode: Caching file names occuring more than 10 times
17/11/14 11:20:24 INFO util.GSet: Computing capacity for map cachedBlocks
17/11/14 11:20:24 INFO util.GSet: VM type       = 64-bit
17/11/14 11:20:24 INFO util.GSet: 0.25% max memory 889 MB = 2.2 MB
17/11/14 11:20:24 INFO util.GSet: capacity      = 2^18 = 262144 entries
17/11/14 11:20:24 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
17/11/14 11:20:24 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
17/11/14 11:20:24 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
17/11/14 11:20:24 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
17/11/14 11:20:24 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
17/11/14 11:20:24 INFO util.GSet: Computing capacity for map NameNodeRetryCache
17/11/14 11:20:24 INFO util.GSet: VM type       = 64-bit
17/11/14 11:20:24 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
17/11/14 11:20:24 INFO util.GSet: capacity      = 2^15 = 32768 entries
17/11/14 11:20:24 INFO namenode.AclConfigFlag: ACLs enabled? false
17/11/14 11:20:24 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1660706305-172.17.0.2-1510658424624
17/11/14 11:20:24 INFO common.Storage: Storage directory /usr/local/hadoop/tmp/dfs/name has been successfully formatted.
17/11/14 11:20:25 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
17/11/14 11:20:25 INFO util.ExitUtil: Exiting with status 0
17/11/14 11:20:25 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop0/172.17.0.2
************************************************************/
 

 

3.10.2.6.3 Confirm success

Near the end of the output, the line "Storage directory /usr/local/hadoop/tmp/dfs/name has been successfully formatted." confirms that the configuration and format succeeded.
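As an additional sanity check (a minimal sketch, relying on the standard layout Hadoop 2.x writes after a format), the name directory reported in the log should now contain a current/ subdirectory with a VERSION file:

# After a successful format, the NameNode metadata directory gains a current/ subdirectory
ls /usr/local/hadoop/tmp/dfs/name/current
cat /usr/local/hadoop/tmp/dfs/name/current/VERSION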

 

3.10.2.6.4 Start pseudo-distributed mode

 

sbin/start-all.sh
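Note that start-all.sh is deprecated in Hadoop 2.x (the startup log later prints "This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh"), so an equivalent, non-deprecated way to start the same daemons is:

sbin/start-dfs.sh
sbin/start-yarn.sh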
 

 

3.10.2.6.5 Answer "yes" once when prompted during startup

 

Are you sure you want to continue connecting (yes/no)? yes   
 

 

3.10.2.6.6 Verify that startup succeeded

 

[root@hadoop0 hadoop]# jps
3267 SecondaryNameNode
3003 NameNode
3664 Jps
3397 ResourceManager
3090 DataNode
3487 NodeManager
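If you prefer to script this check instead of reading the jps output by eye, a small sketch like the following (a hypothetical helper, not part of the original setup) flags any missing daemon:

# Check that each expected pseudo-distributed daemon appears in the jps output
for d in NameNode DataNode SecondaryNameNode ResourceManager NodeManager; do
  if jps | awk '{print $2}' | grep -qx "$d"; then
    echo "$d: running"
  else
    echo "$d: NOT running"
  fi
done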
3.10.2.6.7 Stop pseudo-distributed mode

 

 

sbin/stop-all.sh
 

 

 

 

3.10.2.7 Start the fully distributed cluster

3.10.2.7.1 Enter the Hadoop configuration directory

 

cd /usr/local/hadoop/etc/hadoop
 

 

3.10.2.7.2 Edit yarn-site.xml (vi yarn-site.xml) and add the following property

 

<property>
  <description>The hostname of the RM.</description>
  <name>yarn.resourcemanager.hostname</name>
  <value>hadoop0</value>
</property>
 

 

3.10.2.7.3 Edit the slaves file (vim slaves) and list the worker nodes

 

hadoop1
hadoop2
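Before copying the configuration, it is worth confirming that both worker hostnames resolve from hadoop0 (a hedged check; it assumes the hosts were added to /etc/hosts in the earlier network setup):

# Both slave hostnames should resolve to the addresses configured earlier
getent hosts hadoop1 hadoop2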
  

 

3.10.2.7.4 Copy the configuration to hadoop1 and hadoop2

 

 scp  -rq /usr/local/hadoop   hadoop1:/usr/local
 scp  -rq /usr/local/hadoop   hadoop2:/usr/local
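To confirm the copy reached the workers (a minimal sketch, relying on the passwordless ssh configured earlier), check the key setting on each node:

# The RM hostname setting should now be present on both workers
ssh hadoop1 "grep -A1 yarn.resourcemanager.hostname /usr/local/hadoop/etc/hadoop/yarn-site.xml"
ssh hadoop2 "grep -A1 yarn.resourcemanager.hostname /usr/local/hadoop/etc/hadoop/yarn-site.xml"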
 

 

3.10.2.7.5 Start the distributed Hadoop cluster

3.10.2.7.5.1 Enter the Hadoop directory

 

 cd /usr/local/hadoop
 

 

3.10.2.7.5.2 Format HDFS

 

bin/hdfs namenode -format -force
 

 

3.10.2.7.5.3 Format log

 

[root@hadoop0 hadoop]# bin/hdfs namenode -format -force
17/11/16 08:32:26 INFO namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = hadoop0/172.17.0.2
STARTUP_MSG:   args = [-format, -force]
STARTUP_MSG:   version = 2.7.1
STARTUP_MSG:   classpath = /usr/local/hadoop/etc/hadoop:/usr/local/hadoop/share/hadoop/common/lib/activation-1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/apacheds-i18n-2.0.0-M15.jar:/usr/local/hadoop/share/hadoop/common/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/usr/local/hadoop/share/hadoop/common/lib/api-asn1-api-1.0.0-M20.jar:/usr/local/hadoop/share/hadoop/common/lib/api-util-1.0.0-M20.jar:/usr/local/hadoop/share/hadoop/common/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/common/lib/avro-1.7.4.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-collections-3.2.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-configuration-1.6.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-digester-1.8.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-httpclient-3.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-math3-3.1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-net-3.1.jar:/usr/local/hadoop/share/hadoop/common/lib/curator-client-2.7.1.jar:/usr/local/hadoop/share/hadoop/common/lib/curator-framework-2.7.1.jar:/usr/local/hadoop/share/hadoop/common/lib/curator-recipes-2.7.1.jar:/usr/local/hadoop/share/hadoop/common/lib/gson-2.2.4.jar:/usr/local/hadoop/share/hadoop/common/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/common/lib/hadoop-annotations-2.7.1.jar:/usr/local/hadoop/share/hadoop/common/lib/hadoop-auth-2.7.1.jar:/usr/local/hadoop/share/hadoop/common/lib/hamcrest-core-1.3.jar:/usr/local/hadoop/share/hadoop/common/lib/htrace-core-3.1.0-incubating.jar:/usr/local/hadoop/share/hadoop/common/lib/httpclient-4.2.5.jar:/usr/local/hadoop/share/hadoop/common/lib/httpcore-4.2.5.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/usr/local/hadoop/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/usr/local/hadoop/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-json-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/jets3t-0.9.0.jar:/usr/local/hadoop/share/hadoop/common/lib/jettison-1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/common/lib/jsch-0.1.42.jar:/usr/local/hadoop/share/hadoop/common/lib/jsp-api-2.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jsr305-3.0.0.jar:/usr/local/hadoop/share/hadoop/common/lib/junit-4.11.jar:/usr/local/hadoop/share/hadoop/common/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/common/lib/mockito-all-1.8.5.jar:/usr/local/hadoop/share/hadoop/common/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/common/lib/paranamer-2.3.jar:/usr/local/h
adoop/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/common/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/common/lib/slf4j-api-1.7.10.jar:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar:/usr/local/hadoop/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop/share/hadoop/common/lib/stax-api-1.0-2.jar:/usr/local/hadoop/share/hadoop/common/lib/xmlenc-0.52.jar:/usr/local/hadoop/share/hadoop/common/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/common/lib/zookeeper-3.4.6.jar:/usr/local/hadoop/share/hadoop/common/hadoop-common-2.7.1-tests.jar:/usr/local/hadoop/share/hadoop/common/hadoop-common-2.7.1.jar:/usr/local/hadoop/share/hadoop/common/hadoop-nfs-2.7.1.jar:/usr/local/hadoop/share/hadoop/hdfs:/usr/local/hadoop/share/hadoop/hdfs/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/htrace-core-3.1.0-incubating.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jsr305-3.0.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/netty-all-4.0.23.Final.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/xercesImpl-2.9.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/xml-apis-1.3.04.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.7.1-tests.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.7.1.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-nfs-2.7.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/activation-1.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/aopalliance-1.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-collections-3.2.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guice-3.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-jaxrs-1.9.13.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-xc-1.9.13.jar:/usr
/local/hadoop/share/hadoop/yarn/lib/javax.inject-1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-client-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-json-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jettison-1.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jsr305-3.0.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/zookeeper-3.4.6-tests.jar:/usr/local/hadoop/share/hadoop/yarn/lib/zookeeper-3.4.6.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-api-2.7.1.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.7.1.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.7.1.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-client-2.7.1.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-common-2.7.1.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-registry-2.7.1.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.7.1.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-common-2.7.1.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.7.1.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.7.1.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-2.7.1.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-tests-2.7.1.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.7.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/guice-3.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/hadoop-annotations-2.7.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/javax.inject-1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/junit-4.11.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/paranamer-2
.3.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.7.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.7.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.7.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.7.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.7.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.1-tests.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.7.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar:/usr/local/hadoop/contrib/capacity-scheduler/*.jar
STARTUP_MSG:   build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r 15ecc87ccf4a0228f35af08fc56de536e6ce657a; compiled by 'jenkins' on 2015-06-29T06:04Z
STARTUP_MSG:   java = 1.7.0_79
************************************************************/
17/11/16 08:32:26 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
17/11/16 08:32:26 INFO namenode.NameNode: createNameNode [-format, -force]
Formatting using clusterid: CID-d94045f1-cf92-4268-9905-df254f372280
17/11/16 08:32:27 INFO namenode.FSNamesystem: No KeyProvider found.
17/11/16 08:32:27 INFO namenode.FSNamesystem: fsLock is fair:true
17/11/16 08:32:27 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
17/11/16 08:32:27 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
17/11/16 08:32:27 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
17/11/16 08:32:27 INFO blockmanagement.BlockManager: The block deletion will start around 2017 Nov 16 08:32:27
17/11/16 08:32:27 INFO util.GSet: Computing capacity for map BlocksMap
17/11/16 08:32:27 INFO util.GSet: VM type       = 64-bit
17/11/16 08:32:27 INFO util.GSet: 2.0% max memory 889 MB = 17.8 MB
17/11/16 08:32:27 INFO util.GSet: capacity      = 2^21 = 2097152 entries
17/11/16 08:32:27 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
17/11/16 08:32:27 INFO blockmanagement.BlockManager: defaultReplication         = 1
17/11/16 08:32:27 INFO blockmanagement.BlockManager: maxReplication             = 512
17/11/16 08:32:27 INFO blockmanagement.BlockManager: minReplication             = 1
17/11/16 08:32:27 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
17/11/16 08:32:27 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks  = false
17/11/16 08:32:27 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
17/11/16 08:32:27 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
17/11/16 08:32:27 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
17/11/16 08:32:27 INFO namenode.FSNamesystem: fsOwner             = root (auth:SIMPLE)
17/11/16 08:32:27 INFO namenode.FSNamesystem: supergroup          = supergroup
17/11/16 08:32:27 INFO namenode.FSNamesystem: isPermissionEnabled = false
17/11/16 08:32:27 INFO namenode.FSNamesystem: HA Enabled: false
17/11/16 08:32:27 INFO namenode.FSNamesystem: Append Enabled: true
17/11/16 08:32:27 INFO util.GSet: Computing capacity for map INodeMap
17/11/16 08:32:27 INFO util.GSet: VM type       = 64-bit
17/11/16 08:32:27 INFO util.GSet: 1.0% max memory 889 MB = 8.9 MB
17/11/16 08:32:27 INFO util.GSet: capacity      = 2^20 = 1048576 entries
17/11/16 08:32:27 INFO namenode.FSDirectory: ACLs enabled? false
17/11/16 08:32:27 INFO namenode.FSDirectory: XAttrs enabled? true
17/11/16 08:32:27 INFO namenode.FSDirectory: Maximum size of an xattr: 16384
17/11/16 08:32:27 INFO namenode.NameNode: Caching file names occuring more than 10 times
17/11/16 08:32:27 INFO util.GSet: Computing capacity for map cachedBlocks
17/11/16 08:32:27 INFO util.GSet: VM type       = 64-bit
17/11/16 08:32:27 INFO util.GSet: 0.25% max memory 889 MB = 2.2 MB
17/11/16 08:32:27 INFO util.GSet: capacity      = 2^18 = 262144 entries
17/11/16 08:32:28 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
17/11/16 08:32:28 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
17/11/16 08:32:28 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
17/11/16 08:32:28 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
17/11/16 08:32:28 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
17/11/16 08:32:28 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
17/11/16 08:32:28 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
17/11/16 08:32:28 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
17/11/16 08:32:28 INFO util.GSet: Computing capacity for map NameNodeRetryCache
17/11/16 08:32:28 INFO util.GSet: VM type       = 64-bit
17/11/16 08:32:28 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
17/11/16 08:32:28 INFO util.GSet: capacity      = 2^15 = 32768 entries
Data exists in Storage Directory /usr/local/hadoop/tmp/dfs/name. Formatting anyway.
17/11/16 08:32:28 INFO namenode.FSImage: Allocated new BlockPoolId: BP-455730873-172.17.0.2-1510821148263
17/11/16 08:32:28 INFO common.Storage: Storage directory /usr/local/hadoop/tmp/dfs/name has been successfully formatted.
17/11/16 08:32:28 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
17/11/16 08:32:28 INFO util.ExitUtil: Exiting with status 0
17/11/16 08:32:28 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop0/172.17.0.2
************************************************************/
 

 

3.10.2.7.5.4 Start the cluster

 

sbin/start-all.sh
3.10.2.7.5.5 Startup log

 

 

This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [hadoop0]
hadoop0: starting namenode, logging to /usr/local/hadoop/logs/hadoop-root-namenode-hadoop0.out
hadoop2: starting datanode, logging to /usr/local/hadoop/logs/hadoop-root-datanode-hadoop2.out
hadoop1: starting datanode, logging to /usr/local/hadoop/logs/hadoop-root-datanode-hadoop1.out
Starting secondary namenodes [0.0.0.0]
The authenticity of host '0.0.0.0 (0.0.0.0)' can't be established.
RSA key fingerprint is SHA256:pVcVMP+s49lnUdVpo99cecqZhYCrfPNSQY6XHFD/3II.
RSA key fingerprint is MD5:15:ec:c3:86:fe:b6:65:3a:dd:be:79:a0:e4:d2:f7:2e.
Are you sure you want to continue connecting (yes/no)? yes   
0.0.0.0: Warning: Permanently added '0.0.0.0' (RSA) to the list of known hosts.
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-root-secondarynamenode-hadoop0.out
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn--resourcemanager-hadoop0.out
hadoop1: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-root-nodemanager-hadoop1.out
hadoop2: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-root-nodemanager-hadoop2.out
 

 

As the log shows, the namenode was started on hadoop0, datanodes were started on hadoop1 and hadoop2, and nodemanagers were started on hadoop1 and hadoop2.
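Beyond the startup log, a quick cluster-level check (hedged; it assumes the default HDFS client configuration on hadoop0) is to ask the namenode how many datanodes registered:

# Should report two live datanodes (hadoop1 and hadoop2)
hdfs dfsadmin -report | grep -i "live datanodes"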

 

 

3.10.2.7.5.6 Verify hadoop0

 

[root@hadoop0 hadoop]# jps
700 SecondaryNameNode
511 NameNode
853 ResourceManager
933 Jps
 

 

3.10.2.7.5.7 Verify hadoop1

 

[root@hadoop1 /]# jps
158 NodeManager
58 DataNode
210 Jps
 

 

3.10.2.7.5.8 Verify hadoop2

 

[root@hadoop2 /]# jps
158 NodeManager
58 DataNode
210 Jps
 

 

3.10.2.7.5.9 On hadoop0, verify HDFS

vim a.txt

 

baoyou 
baoyou
bao
you
hello world
hello bao you
 

 

 

 

[root@hadoop0 /]# hdfs dfs -put a.txt /
17/11/16 09:28:19 WARN hdfs.DFSClient: Slow waitForAckedSeqno took 42798ms (threshold=30000ms)
[root@hadoop0 /]# hdfs dfs -ls /
Found 1 items
-rw-r--r--   1 root supergroup         73 2017-11-16 09:28 /a.txt
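You can also read the file straight back out of HDFS to confirm the upload is intact:

hdfs dfs -cat /a.txt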
 

 

3.10.2.7.5.10 Run the wordcount example

[root@hadoop0 /]# cd /usr/local/hadoop/share/hadoop/mapreduce
[root@hadoop0 mapreduce]# ls

hadoop-mapreduce-client-app-2.7.1.jar	  hadoop-mapreduce-client-hs-2.7.1.jar		     hadoop-mapreduce-client-jobclient-2.7.1.jar  lib
hadoop-mapreduce-client-common-2.7.1.jar  hadoop-mapreduce-client-hs-plugins-2.7.1.jar	     hadoop-mapreduce-client-shuffle-2.7.1.jar	  lib-examples
hadoop-mapreduce-client-core-2.7.1.jar	  hadoop-mapreduce-client-jobclient-2.7.1-tests.jar  hadoop-mapreduce-examples-2.7.1.jar	  sources 
[root@hadoop0 mapreduce]# hadoop jar hadoop-mapreduce-examples-2.7.1.jar  wordcount /a.txt /out

 

 

 cd /usr/local/hadoop/share/hadoop/mapreduce
hadoop jar hadoop-mapreduce-examples-2.7.1.jar  wordcount /a.txt /out

 

3.10.2.7.5.11 Wordcount MapReduce log

[root@hadoop0 mapreduce]# hadoop jar hadoop-mapreduce-examples-2.7.1.jar  wordcount /a.txt /out
17/11/16 10:26:19 INFO client.RMProxy: Connecting to ResourceManager at hadoop0/172.17.0.2:8032
17/11/16 10:26:25 INFO input.FileInputFormat: Total input paths to process : 1
17/11/16 10:26:30 INFO mapreduce.JobSubmitter: number of splits:1
17/11/16 10:26:31 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1510823892890_0002
17/11/16 10:26:33 INFO impl.YarnClientImpl: Submitted application application_1510823892890_0002
17/11/16 10:26:33 INFO mapreduce.Job: The url to track the job: http://hadoop0:8088/proxy/application_1510823892890_0002/
17/11/16 10:26:33 INFO mapreduce.Job: Running job: job_1510823892890_0002
17/11/16 10:38:58 INFO mapreduce.Job: Job job_1510823892890_0002 running in uber mode : false
17/11/16 10:39:12 INFO mapreduce.Job:  map 0% reduce 0%
17/11/16 11:01:31 INFO mapreduce.Job:  map 100% reduce 0%
17/11/16 11:01:40 INFO mapreduce.Job:  map 0% reduce 0%
17/11/16 11:01:40 INFO mapreduce.Job: Task Id : attempt_1510823892890_0002_m_000000_1000, Status : FAILED
AttemptID:attempt_1510823892890_0002_m_000000_1000 Timed out after 600 secs
Container killed by the ApplicationMaster.
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143

17/11/16 11:03:00 INFO mapreduce.Job:  map 100% reduce 0%
17/11/16 11:03:29 INFO mapreduce.Job:  map 100% reduce 100%
17/11/16 11:03:33 INFO mapreduce.Job: Job job_1510823892890_0002 completed successfully
17/11/16 11:03:34 INFO mapreduce.Job: Counters: 51
	File System Counters
		FILE: Number of bytes read=63
		FILE: Number of bytes written=230833
		FILE: Number of read operations=0
		FILE: Number of large read operations=0
		FILE: Number of write operations=0
		HDFS: Number of bytes read=163
		HDFS: Number of bytes written=37
		HDFS: Number of read operations=6
		HDFS: Number of large read operations=0
		HDFS: Number of write operations=2
	Job Counters 
		Failed map tasks=1
		Launched map tasks=2
		Launched reduce tasks=1
		Other local map tasks=1
		Data-local map tasks=1
		Total time spent by all maps in occupied slots (ms)=1076284
		Total time spent by all reduces in occupied slots (ms)=25207
		Total time spent by all map tasks (ms)=1076284
		Total time spent by all reduce tasks (ms)=25207
		Total vcore-seconds taken by all map tasks=1076284
		Total vcore-seconds taken by all reduce tasks=25207
		Total megabyte-seconds taken by all map tasks=1102114816
		Total megabyte-seconds taken by all reduce tasks=25811968
	Map-Reduce Framework
		Map input records=9
		Map output records=13
		Map output bytes=125
		Map output materialized bytes=63
		Input split bytes=90
		Combine input records=13
		Combine output records=5
		Reduce input groups=5
		Reduce shuffle bytes=63
		Reduce input records=5
		Reduce output records=5
		Spilled Records=10
		Shuffled Maps =1
		Failed Shuffles=0
		Merged Map outputs=1
		GC time elapsed (ms)=784
		CPU time spent (ms)=3450
		Physical memory (bytes) snapshot=330055680
		Virtual memory (bytes) snapshot=1464528896
		Total committed heap usage (bytes)=200278016
	Shuffle Errors
		BAD_ID=0
		CONNECTION=0
		IO_ERROR=0
		WRONG_LENGTH=0
		WRONG_MAP=0
		WRONG_REDUCE=0
	File Input Format Counters 
		Bytes Read=73
	File Output Format Counters 
		Bytes Written=37
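Note that the first map attempt in this log timed out after 600 seconds and was killed with exit code 143 (SIGTERM); the ApplicationMaster relaunched it and the job still completed successfully. To dig into such a failed attempt, the aggregated container logs can be pulled back with the command below (hedged: this requires YARN log aggregation to be enabled):

yarn logs -applicationId application_1510823892890_0002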

 

 

3.10.2.7.5.12 Wordcount result

[root@hadoop0 mapreduce]# hdfs dfs -text /out/part-r-00000 
bao	2
baoyou	3
hello	4
world	2
you	2
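The output directory can also be listed or copied back to the local filesystem; MapReduce writes an empty _SUCCESS marker next to the part files when the job finishes normally:

hdfs dfs -ls /out
hdfs dfs -get /out /tmp/wordcount-out   # copy results to a local path (hypothetical destination)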

 

3.10.2.7.5.13 Stop the Hadoop cluster

[root@hadoop0 hadoop]# sbin/stop-all.sh 
This script is Deprecated. Instead use stop-dfs.sh and stop-yarn.sh
Stopping namenodes on [hadoop0]
hadoop0: stopping namenode
hadoop2: stopping datanode
hadoop1: stopping datanode
Stopping secondary namenodes [0.0.0.0]
0.0.0.0: stopping secondarynamenode
stopping yarn daemons
stopping resourcemanager
hadoop1: stopping nodemanager
hadoop2: stopping nodemanager
no proxyserver to stop