- Links to the tutorials that led to a successful install:
- HDP上安装impala_weixin_33834137的博客-CSDN博客
- impala安装部署(绝对详细!)_大数据梦想家的博客-CSDN博客_impala安装
- ln -sf /usr/hdp/3.1.4.0-315/hadoop/hadoop-common.jar /usr/lib/impala/lib/hadoop-common.jar
- ln -sf /usr/hdp/3.1.4.0-315/hadoop/hadoop-annotations.jar /usr/lib/impala/lib/hadoop-annotations.jar
- ln -sf /usr/hdp/3.1.4.0-315/hadoop/hadoop-auth.jar /usr/lib/impala/lib/hadoop-auth.jar
- ln -sf /usr/hdp/3.1.4.0-315/hadoop-mapreduce/hadoop-aws.jar /usr/lib/impala/lib/hadoop-aws.jar
- ln -sf /usr/hdp/3.1.4.0-315/hadoop-hdfs/hadoop-hdfs.jar /usr/lib/impala/lib/hadoop-hdfs.jar
- ln -sf /usr/hdp/3.1.4.0-315/hadoop-mapreduce/hadoop-mapreduce-client-common.jar /usr/lib/impala/lib/hadoop-mapreduce-client-common.jar
- ln -sf /usr/hdp/3.1.4.0-315/hadoop-mapreduce/hadoop-mapreduce-client-core.jar /usr/lib/impala/lib/hadoop-mapreduce-client-core.jar
- ln -sf /usr/hdp/3.1.4.0-315/hadoop-mapreduce/hadoop-mapreduce-client-jobclient.jar /usr/lib/impala/lib/hadoop-mapreduce-client-jobclient.jar
- ln -sf /usr/hdp/3.1.4.0-315/hadoop-mapreduce/hadoop-mapreduce-client-shuffle.jar /usr/lib/impala/lib/hadoop-mapreduce-client-shuffle.jar
- ln -sf /usr/hdp/3.1.4.0-315/hadoop-yarn/hadoop-yarn-api.jar /usr/lib/impala/lib/hadoop-yarn-api.jar
- ln -sf /usr/hdp/3.1.4.0-315/hadoop-yarn/hadoop-yarn-client.jar /usr/lib/impala/lib/hadoop-yarn-client.jar
- ln -sf /usr/hdp/3.1.4.0-315/hadoop-yarn/hadoop-yarn-common.jar /usr/lib/impala/lib/hadoop-yarn-common.jar
- ln -sf /usr/hdp/3.1.4.0-315/hadoop-yarn/hadoop-yarn-server-applicationhistoryservice.jar /usr/lib/impala/lib/hadoop-yarn-server-applicationhistoryservice.jar
- ln -sf /usr/hdp/3.1.4.0-315/hadoop-yarn/hadoop-yarn-server-common.jar /usr/lib/impala/lib/hadoop-yarn-server-common.jar
- ln -sf /usr/hdp/3.1.4.0-315/hadoop-yarn/hadoop-yarn-server-nodemanager.jar /usr/lib/impala/lib/hadoop-yarn-server-nodemanager.jar
- ln -sf /usr/hdp/3.1.4.0-315/hadoop-yarn/hadoop-yarn-server-resourcemanager.jar /usr/lib/impala/lib/hadoop-yarn-server-resourcemanager.jar
- ln -sf /usr/hdp/3.1.4.0-315/hadoop-yarn/hadoop-yarn-server-web-proxy.jar /usr/lib/impala/lib/hadoop-yarn-server-web-proxy.jar
- ln -sf /usr/hdp/3.1.4.0-315/hadoop/lib/native/libhadoop.so /usr/lib/impala/lib/libhadoop.so
- ln -sf /usr/hdp/3.1.4.0-315/hadoop/lib/native/libhadoop.so.1.0.0 /usr/lib/impala/lib/libhadoop.so.1.0.0
- ln -sf /usr/hdp/3.1.4.0-315/usr/lib/libhdfs.so /usr/lib/impala/lib/libhdfs.so
- ln -sf /usr/hdp/3.1.4.0-315/usr/lib/libhdfs.so.0.0.0 /usr/lib/impala/lib/libhdfs.so.0.0.0
- ln -sf /usr/hdp/3.1.4.0-315/hive/lib/hive-beeline.jar /usr/lib/impala/lib/hive-beeline.jar
- ln -sf /usr/hdp/3.1.4.0-315/hive/lib/hive-common.jar /usr/lib/impala/lib/hive-common.jar
- ln -sf /usr/hdp/3.1.4.0-315/hive/lib/hive-exec.jar /usr/lib/impala/lib/hive-exec.jar
- ln -sf /usr/hdp/3.1.4.0-315/hive/lib/hive-hbase-handler.jar /usr/lib/impala/lib/hive-hbase-handler.jar
- ln -sf /usr/hdp/3.1.4.0-315/hive/lib/hive-metastore.jar /usr/lib/impala/lib/hive-metastore.jar
- ln -sf /usr/hdp/3.1.4.0-315/hive/lib/hive-serde.jar /usr/lib/impala/lib/hive-serde.jar
- ln -sf /usr/hdp/3.1.4.0-315/hive/lib/hive-service.jar /usr/lib/impala/lib/hive-service.jar
- ln -sf /usr/hdp/3.1.4.0-315/hive/lib/hive-shims-common.jar /usr/lib/impala/lib/hive-shims-common.jar
- ln -sf /usr/hdp/3.1.4.0-315/hive/lib/hive-shims-scheduler.jar /usr/lib/impala/lib/hive-shims-scheduler.jar
- ln -sf /usr/hdp/3.1.4.0-315/hive/lib/hive-shims.jar /usr/lib/impala/lib/hive-shims.jar
- ln -sf /usr/hdp/3.1.4.0-315/zookeeper/zookeeper.jar /usr/lib/impala/lib/zookeeper.jar
- ln -sf /usr/hdp/3.1.4.0-315/hive/lib/hive-ant.jar /usr/lib/impala/lib/hive-ant.jar   # questionable, see the next line
- ln -sf /usr/hdp/3.1.4.0-315/hive/lib/ant-1.9.1.jar /usr/lib/impala/lib/hive-ant.jar  # used instead: ant-1.9.1.jar linked under the hive-ant.jar name
- ln -sf /usr/hdp/3.1.4.0-315/hadoop-mapreduce/hadoop-archives.jar /usr/lib/impala/lib/hadoop-archives.jar
- ln -sf /usr/hdp/3.1.4.0-315/hadoop/hadoop-azure-datalake.jar /usr/lib/impala/lib/hadoop-azure-datalake.jar
- ln -sf /usr/hdp/3.1.4.0-315/hbase/lib/hbase-annotations.jar /usr/lib/impala/lib/hbase-annotations.jar
- ln -sf /usr/hdp/3.1.4.0-315/hbase/lib/hbase-client.jar /usr/lib/impala/lib/hbase-client.jar
- ln -sf /usr/hdp/3.1.4.0-315/hbase/lib/hbase-common.jar /usr/lib/impala/lib/hbase-common.jar
- ln -sf /usr/hdp/3.1.4.0-315/hbase/lib/hbase-protocol.jar /usr/lib/impala/lib/hbase-protocol.jar
- ln -sf /usr/hdp/3.1.4.0-315/hive/lib/hive-classification.jar /usr/lib/impala/lib/hive-classification.jar
- ln -sf /usr/hdp/3.1.4.0-315/hive/lib/hive-cli.jar /usr/lib/impala/lib/hive-cli.jar
- ln -sf /usr/hdp/3.1.4.0-315/hive/lib/hive-hcatalog-core.jar /usr/lib/impala/lib/hive-hcatalog-core.jar
- ln -sf /usr/hdp/3.1.4.0-315/hive/lib/hive-hcatalog-server-extensions.jar /usr/lib/impala/lib/hive-hcatalog-server-extensions.jar
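- With this many hand-made symlinks it is worth checking that none of them dangle; a minimal check with GNU find (as shipped on CentOS 7):

```bash
# List broken symlinks in the Impala lib directory.
# -xtype l matches entries that are still symlinks after following them,
# i.e. links whose target no longer exists.
find /usr/lib/impala/lib -xtype l -exec ls -l {} \;
```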
- Additional symlinks re-pointed because of jar errors (the same relinking is recorded again under the 2022-04-23 entry):
- ln -sf /usr/lib/impala/lib/hbase-client-1.2.6.jar /usr/lib/impala/lib/hbase-client.jar
- ln -sf /usr/lib/impala/lib/hbase-common-1.3.0.jar /usr/lib/impala/lib/hbase-common.jar
- ln -sf /usr/lib/impala/lib/hive-common-1.2.0.jar /usr/lib/impala/lib/hive-common.jar
- ln -sf /usr/lib/impala/lib/hive-exec-1.2.0.jar /usr/lib/impala/lib/hive-exec.jar
- ln -sf /usr/lib/impala/lib/hive-metastore-1.2.0.jar /usr/lib/impala/lib/hive-metastore.jar
- ln -sf /usr/lib/impala/lib/hive-service-1.2.0.jar /usr/lib/impala/lib/hive-service.jar
- ln -sf /usr/lib/impala/lib/hadoop-auth-2.7.2.jar /usr/lib/impala/lib/hadoop-auth.jar
- ln -sf /usr/lib/impala/lib/hadoop-common-2.7.2.jar /usr/lib/impala/lib/hadoop-common.jar
- ln -sf /usr/lib/impala/lib/hadoop-hdfs-2.7.2.jar /usr/lib/impala/lib/hadoop-hdfs.jar
- ln -sf /usr/lib/impala/lib/hadoop-mapreduce-client-core-2.7.2.jar /usr/lib/impala/lib/hadoop-mapreduce-client-core.jar
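- To confirm which versioned jar each generic name now resolves to, something like the loop below works; the names are just the ones re-linked above:

```bash
# Print the final target of each re-pointed symlink.
for jar in hbase-client hbase-common hive-common hive-exec hive-metastore \
           hive-service hadoop-auth hadoop-common hadoop-hdfs \
           hadoop-mapreduce-client-core; do
    printf '%-35s -> %s\n' "${jar}.jar" "$(readlink -f /usr/lib/impala/lib/${jar}.jar)"
done
```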
- Problem 1: setting up the local yum repo
- Follow Wei's HDP setup doc, Section 4, "Install HDP 3.1.4"
- 4.1 Configure the repos Ambari needs
- Note: configure the first node, then copy to the other nodes
- Install httpd:
- yum install -y httpd
- service httpd start
- chkconfig httpd on
- cd /var/www/html/
- 1. Ambari yum repo
- cd /hujun
- unzip ambar-2.7.4.0.zip
- mv ambari /var/www/html/
- cd /var/www/html/ambari/centos7/2.7.4.0-118
- cp -p ambari.repo /etc/yum.repos.d/
- cd /etc/yum.repos.d
- vim ambari.repo
- ---
- #VERSION_NUMBER=2.7.4.0-118
- [ambari-2.7.4.0]
- #json.url = http://public-repo-1.hortonworks.com/HDP/hdp_urlinfo.json
- name=ambari Version - ambari-2.7.4.0
- baseurl=http://172.26.146.50/ambari/centos7/2.7.4.0-118
- gpgcheck=0
- enabled=1
- priority=1
- ---
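- A quick way to confirm the repo is actually being served and picked up by yum (URL as in the repo file above; assumes the unpacked Ambari repo ships its repodata, which the tarball normally does):

```bash
# Is httpd serving the repo metadata?
curl -sI http://172.26.146.50/ambari/centos7/2.7.4.0-118/repodata/repomd.xml | head -n 1

# Does yum see the new repo?
yum clean all
yum repolist enabled | grep -i ambari
```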
- Result screenshot: (image omitted)

- Impala node layout
- Master node xxx: impala-server (impalad), impala-catalog, impala-state-store
- Worker node xxx: impala-server (impalad)
- Worker node xxx: impala-server (impalad)
- Problem 2: the root filesystem on xxx was full, so the DataX archive under the root home directory was deleted, freeing about 2.5 GB of disk space
- Problem 3: "Public key for avro-libs-1.7.6+cdh5.14.0+137-1.cdh5.14.0.p0.47.el7.noarch.rpm is not installed"; both Impala worker nodes (56 and 57) report this error (workaround sketch below)
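- That message only means yum cannot verify the CDH rpm signatures. Two common workarounds, sketched below; the key path is an assumption, point it at wherever the repo's RPM-GPG-KEY file actually lives, or simply keep gpgcheck=0 for this local repo:

```bash
# Option 1: import the repo's GPG key so the signature check can pass
# (path is hypothetical; use the repo's actual RPM-GPG-KEY file).
rpm --import /var/www/html/cdh/RPM-GPG-KEY-cloudera

# Option 2: skip the signature check for this install
# (equivalent to setting gpgcheck=0 in the .repo file).
yum install -y --nogpgcheck avro-libs
```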
- Copy the Hadoop, HDFS and Hive configs into Impala's conf directory (the commands below come from the tutorial; an HDP-path version is sketched after them)
- cp -r /export/servers/hadoop-2.7.5/etc/hadoop/core-site.xml /etc/impala/conf/core-site.xml
- cp -r /export/servers/hadoop-2.7.5/etc/hadoop/hdfs-site.xml /etc/impala/conf/hdfs-site.xml
- cp -r /export/servers/hive/conf/hive-site.xml /etc/impala/conf/hive-site.xml
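- The three cp commands above use the tutorial's /export/servers layout; on this HDP 3.1.4 cluster the same files live under /usr/hdp, so the equivalent copies would presumably look like this (using the conf paths noted further below):

```bash
# Copy the cluster's real client configs into Impala's conf dir (HDP layout).
cp /usr/hdp/3.1.4.0-315/hadoop/conf/core-site.xml /etc/impala/conf/core-site.xml
cp /usr/hdp/3.1.4.0-315/hadoop/conf/hdfs-site.xml /etc/impala/conf/hdfs-site.xml
cp /usr/hdp/3.1.4.0-315/hive/conf/hive-site.xml   /etc/impala/conf/hive-site.xml
```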
- Config files changed, and what was changed
- 1. hdfs-site.xml
- /usr/hdp/3.1.4.0-315/hadoop/etc/hadoop/hdfs-site.xml
- Added the properties shown in the screenshot (image missing)

- 2. hive-site.xml
- /usr/hdp/3.1.4.0-315/hive/conf/hive-site.xml
- No changes made, because hive.metastore.uris is already set by default and points to hadoop002:9083
- Note: this means the Hive metastore service has to be started on hadoop002
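- A quick check that the copied hive-site.xml really points at hadoop002:9083 and that the metastore port answers:

```bash
# Show the metastore URI Impala will use.
grep -A1 'hive.metastore.uris' /etc/impala/conf/hive-site.xml

# Verify the metastore port answers (pure-bash TCP probe, no nc needed).
timeout 3 bash -c 'exec 3<>/dev/tcp/hadoop002/9083' && echo "9083 reachable on hadoop002"
```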
- 3. core-site.xml
- /usr/hdp/3.1.4.0-315/hadoop/conf
- 4. hdfs-site.xml
- /usr/hdp/3.1.4.0-315/hadoop/conf/hdfs-site.xml
- Added the properties shown in the screenshot (image missing; a sketch of the typical Impala additions follows)

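- The screenshots of the added properties are gone, but the hdfs-site.xml additions Impala's install docs usually call for are short-circuit reads plus the datanode socket path, roughly as below; the socket path matches the one created later in these notes, and the exact values are assumptions:

```bash
# Typical Impala-related additions to hdfs-site.xml (values are assumptions).
cat <<'EOF'
<property>
  <name>dfs.client.read.shortcircuit</name>
  <value>true</value>
</property>
<property>
  <name>dfs.domain.socket.path</name>
  <value>/var/run/hdfs-sockets/dn</value>
</property>
<property>
  <name>dfs.client.file-block-storage-locations.timeout.millis</name>
  <value>10000</value>
</property>
EOF
```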
- Plan for tomorrow: restart the cluster, restart the Impala services, and verify whether the config changes took effect
- On hadoop002, start Hive and the metastore:
- hive --service hiveserver2 &
- hive --service metastore &
- On hadoop001, start:
- service impala-state-store start
- service impala-catalog start
- service impala-server start
- On hadoop002 and hadoop003, start:
- service impala-server start
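- Once the services are up, a quick check that the daemons are actually running; Impala's debug web UIs normally listen on 25000 (impalad), 25010 (statestored) and 25020 (catalogd), so curling those ports is a cheap health probe:

```bash
# Are the Impala daemons running? (run on hadoop001, where all three live)
ps -ef | grep -E 'impalad|catalogd|statestored' | grep -v grep

# Poke the debug web UIs (default ports; adjust if they were changed).
for port in 25000 25010 25020; do
    curl -s -o /dev/null -w "localhost:${port} -> %{http_code}\n" "http://localhost:${port}/" || true
done
```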
- Problem: Impala still reports a "not connected" error
- Possible fixes:
- 1. Search directly for fixes for the "not connected" error
- 2. Re-add the Impala service through Ambari
- Adding Impala directly through Ambari (custom service directories):
- /var/www/html/ambari-impala-service
- /var/lib/ambari-server/resources/stacks/HDP/3.1/services/IMPALA
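- If option 2 is taken, the usual flow for a community Impala service definition is to copy it under the HDP 3.1 stack and restart ambari-server so it appears in "Add Service"; the exact layout depends on the downloaded ambari-impala-service, so treat this as a sketch:

```bash
# Register the custom Impala service definition with the HDP 3.1 stack
# (source folder as unpacked above; contents depend on the downloaded service).
cp -r /var/www/html/ambari-impala-service \
      /var/lib/ambari-server/resources/stacks/HDP/3.1/services/IMPALA

# Restart Ambari so the new service shows up under "Add Service".
ambari-server restart
```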
- 2022-04-19: troubleshooting notes for installing Impala directly on Ambari/HDP
- 1.
- The original value was hive; changed it to root (screenshot missing)

- Originally false; changed to true (screenshot missing)

- Originally 1800s; changed to 3600s
- What is the /var/lib/hadoop-hdfs/dn_socket directory for? (see the check below and the 2022-04-23 entry)
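- This directory later turns out to be the dfs.domain.socket.path used for HDFS short-circuit reads; the value currently in effect can be read straight from the client config:

```bash
# Print the domain socket path the HDFS client config currently uses.
hdfs getconf -confKey dfs.domain.socket.path
```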
- 2022-04-20: problems hit while installing and starting Impala
- 1. impala-catalog fails to start
- 1. Delete the pid file and try starting catalogd again
- /run/impala/catalogd-impala.pid: found where the pid file is stored
- /run/lock/subsys: lock files
- 2. Debug through the log files (sketch below)
- 3. As Da suggested, look for relevant tutorials and try changing the settings in the Ambari UI
- 4. Compare the files against their pre-change versions to confirm my edits actually took effect
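- For steps 1 and 2, a minimal sketch, assuming the packaged Impala writes its glog files under /var/log/impala (the pid path is the one found above):

```bash
# Look at the most recent catalogd errors (log dir and file names assume packaged defaults).
ls -lt /var/log/impala/ | head
tail -n 50 /var/log/impala/catalogd.ERROR

# Clear a stale pid file and retry.
rm -f /run/impala/catalogd-impala.pid
service impala-catalog start
service impala-catalog status
```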
- Changes made while reinstalling Impala
- 1. vim /etc/impala/conf/core-site.xml
- 2. hive-site.xml
- Changed the metastore database URL to hadoop002:3306/hive (screenshot missing; see the sketch below)

- Originally hive (screenshot missing)

- Changed to true (screenshot missing)
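- The screenshot is gone, but a value of hadoop002:3306/hive is almost certainly the metastore's MySQL JDBC URL; if so, the hive-site.xml property would look roughly like this (the property name is my assumption, it is not shown in the notes):

```bash
# Presumed hive-site.xml snippet behind the "hadoop002:3306/hive" change.
cat <<'EOF'
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://hadoop002:3306/hive</value>
</property>
EOF
```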
- 2022-04-21, Thursday: catalogd bug-hunting plan
- 1. Check the Impala startup logs to find the cause of the error
- 2. Edit the Impala defaults file /etc/default/impala; apply the catalogd config options found on the subway last night
- 3. Go through the config files one by one against the install guide, and check the Ambari configs
- 4. Restart ambari-server
- 5. Start the Hive metastore and hiveserver2 first
- 6. Add the Hadoop 2.6 packages
- 7. Uninstall the 2.6 Impala and reinstall Impala 3.1
- Uninstalling the 2.6 Impala on host 55 reported the error below; possibly the symlinks were made incorrectly, and no jars or symlinks may be deleted
- Add a symlink for hbase-protocol-shaded.jar under impala/lib/ (sketch below)
- Try editing core-site.xml and the other config files to work around this
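- Following the same pattern as the other HBase links above, the hbase-protocol-shaded.jar symlink would presumably be:

```bash
# Link the shaded HBase protocol jar into Impala's lib (same pattern as the rest).
ln -sf /usr/hdp/3.1.4.0-315/hbase/lib/hbase-protocol-shaded.jar \
       /usr/lib/impala/lib/hbase-protocol-shaded.jar
```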

- Check the Impala website for which Hadoop versions are supported
- Problems hit while installing Impala 3.1
- 1. Problems editing hdfs-site.xml
- 1. This path is empty; need to find where it actually lives and ideally correct it (since Hive runs fine, leave it alone and just use the default path from the source file)

- 2.

- 2. What does hdfs://hadoop001:9000 mean here? If it errors, try pointing it at one of the other two nodes, and check which service is listening on port 9000 (something on 56 is listening on 9000)
- Error:
- Match the configuration shown in the Ambari UI here
- Set fs.defaultFS to hdfs://hadoop003:8020
- Change dfs.domain.socket.path in hdfs-site.xml to a directory created specifically for Impala (sketch below)
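- Before pointing dfs.domain.socket.path at the new directory, it has to exist on every DataNode; the ownership below is a guess based on the usual hdfs:hadoop layout:

```bash
# Create the socket directory referenced by dfs.domain.socket.path on each DataNode.
mkdir -p /var/run/hdfs-sockets
chown hdfs:hadoop /var/run/hdfs-sockets   # assumed user/group; match the cluster's HDFS user
chmod 755 /var/run/hdfs-sockets
```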
- Current plan:
- 1. Change the settings in the Ambari UI
- 2. Restart ambari-server; manually start hive-metastore and hiveserver2; restart impala-server and impala-catalog
- 3. Look for an rpm of the Impala 3.2 state-store
- jersey-server-1.19.jar is a duplicate, lower-version jar in /usr/lib/impala/lib
- 2022-04-22, Friday: today's plan
- 1. Hunt everywhere for an impala-state-store-3.2.0+cdh6.3.4 package
- 2. Try installing the apache-impala-3.2.0 tar.gz directly
- 3. Write down the Impala 3.2 configuration
- 4. Look online for a complete set of higher-version Impala packages
- 5. Reinstall version 2.6
- Reinstalling version 2.12
- Error 1:
- Fix:
- rpm -ivh --replacefiles impala-udf-devel-2.12.0+cdh5.16.2+0-1.cdh5.16.2.p0.22.el7.x86_64.rpm
- Error 2:

- Error 3:


- Error 4: missing HBase jars
- 2022-04-23: notes on the HBase jars in the Impala lib
- Compare against the jars under /usr/hdp/3.1.4.0-315/hbase/lib to see what is missing. Add whatever is missing, then check the logs to confirm whether the errors are really about missing jars; if nothing is missing, try upgrading the relevant jars instead

- 1. Missing jars:
- 2. Reinstall Impala 2.12 on 55 to see whether it ships its own HBase jars
- Resolved:
- After dropping the jars found online (listed below) into the Impala lib and re-pointing the HBase symlinks, the HBase jar errors stopped

- Update the symlinks for those jars:
- ln -sf /usr/lib/impala/lib/hbase-client-1.2.6.jar /usr/lib/impala/lib/hbase-client.jar
- ln -sf /usr/lib/impala/lib/hbase-common-1.3.0.jar /usr/lib/impala/lib/hbase-common.jar
- ln -sf /usr/lib/impala/lib/hive-common-1.2.0.jar /usr/lib/impala/lib/hive-common.jar
- ln -sf /usr/lib/impala/lib/hive-exec-1.2.0.jar /usr/lib/impala/lib/hive-exec.jar
- ln -sf /usr/lib/impala/lib/hive-metastore-1.2.0.jar /usr/lib/impala/lib/hive-metastore.jar
- ln -sf /usr/lib/impala/lib/hive-service-1.2.0.jar /usr/lib/impala/lib/hive-service.jar
- ln -sf /usr/lib/impala/lib/hadoop-auth-2.7.2.jar /usr/lib/impala/lib/hadoop-auth.jar
- ln -sf /usr/lib/impala/lib/hadoop-common-2.7.2.jar /usr/lib/impala/lib/hadoop-common.jar
- ln -sf /usr/lib/impala/lib/hadoop-hdfs-2.7.2.jar /usr/lib/impala/lib/hadoop-hdfs.jar
- ln -sf /usr/lib/impala/lib/hadoop-mapreduce-client-core-2.7.2.jar /usr/lib/impala/lib/hadoop-mapreduce-client-core.jar
- The current error is the following (impala-server and impala-catalog both report it; impala-state-store runs fine). A replacement Sentry jar needs to be found, sentry-provider-cache.jar above all, and the steps above should first be copied to hosts 56 and 57
- Host 56, the master node, was running normally before the jars above were imported
- Possible fixes
- 1. Try making symlinks for the Sentry jars already in the Impala lib; maybe the cluster hard-codes the paths and jars are only picked up when the file name carries no version suffix (pure guesswork, but worth a try)
- Note: the symlinks under the Sentry lib all turned out to be broken; fix them accordingly
- Fixed:
- ln -sf /usr/hdp/3.1.4.0-315/hadoop/hadoop-annotations.jar /usr/lib/sentry/lib/hadoop-annotations.jar
- ln -sf /usr/hdp/3.1.4.0-315/hadoop-mapreduce/hadoop-archives.jar /usr/lib/sentry/lib/hadoop-archives.jar
- ln -sf /usr/hdp/3.1.4.0-315/hadoop/hadoop-auth.jar /usr/lib/sentry/lib/hadoop-auth.jar
- ln -sf /usr/hdp/3.1.4.0-315/hadoop-hdfs/hadoop-hdfs.jar /usr/lib/sentry/lib/hadoop-hdfs.jar
- ln -sf /usr/hdp/3.1.4.0-315/hadoop-mapreduce/hadoop-mapreduce-client-common.jar /usr/lib/sentry/lib/hadoop-mapreduce-client-common.jar
- ln -sf /usr/hdp/3.1.4.0-315/hadoop-mapreduce/hadoop-mapreduce-client-core.jar /usr/lib/sentry/lib/hadoop-mapreduce-client-core.jar
- ln -sf /usr/hdp/3.1.4.0-315/hadoop-mapreduce/hadoop-mapreduce-client-jobclient.jar /usr/lib/sentry/lib/hadoop-mapreduce-client-jobclient.jar
- ln -sf /usr/hdp/3.1.4.0-315/hadoop-mapreduce/hadoop-mapreduce-client-shuffle.jar /usr/lib/sentry/lib/hadoop-mapreduce-client-shuffle.jar
- ln -sf /usr/hdp/3.1.4.0-315/hadoop-yarn/hadoop-yarn-api.jar /usr/lib/sentry/lib/hadoop-yarn-api.jar
- ln -sf /usr/hdp/3.1.4.0-315/hadoop-yarn/hadoop-yarn-client.jar /usr/lib/sentry/lib/hadoop-yarn-client.jar
- ln -sf /usr/hdp/3.1.4.0-315/hadoop-yarn/hadoop-yarn-common.jar /usr/lib/sentry/lib/hadoop-yarn-common.jar
- ln -sf /usr/hdp/3.1.4.0-315/hadoop-yarn/hadoop-yarn-server-applicationhistoryservice.jar /usr/lib/sentry/lib/hadoop-yarn-server-applicationhistoryservice.jar
- ln -sf /usr/hdp/3.1.4.0-315/hadoop-yarn/hadoop-yarn-server-common.jar /usr/lib/sentry/lib/hadoop-yarn-server-common.jar
- ln -sf /usr/hdp/3.1.4.0-315/hadoop-yarn/hadoop-yarn-server-nodemanager.jar /usr/lib/sentry/lib/hadoop-yarn-server-nodemanager.jar
- ln -sf /usr/hdp/3.1.4.0-315/hadoop-yarn/hadoop-yarn-server-resourcemanager.jar /usr/lib/sentry/lib/hadoop-yarn-server-resourcemanager.jar
- ln -sf /usr/hdp/3.1.4.0-315/hadoop-yarn/hadoop-yarn-server-web-proxy.jar /usr/lib/sentry/lib/hadoop-yarn-server-web-proxy.jar
- ln -sf /usr/hdp/3.1.4.0-315/hbase/lib/hbase-annotations.jar /usr/lib/sentry/lib/hbase-annotations.jar
- ln -sf /usr/hdp/3.1.4.0-315/hbase/lib/hbase-common.jar /usr/lib/sentry/lib/hbase-common.jar
- ln -sf /usr/hdp/3.1.4.0-315/hbase/lib/hbase-protocol.jar /usr/lib/sentry/lib/hbase-protocol.jar
- ln -sf /usr/hdp/3.1.4.0-315/hive/lib/ant-1.9.1.jar /usr/lib/sentry/lib/hive-ant.jar
- ln -sf /usr/hdp/3.1.4.0-315/hive/lib/hive-classification.jar /usr/lib/sentry/lib/hive-classification.jar
- ln -sf /usr/hdp/3.1.4.0-315/hive/lib/hive-cli.jar /usr/lib/sentry/lib/hive-cli.jar
- ln -sf /usr/hdp/3.1.4.0-315/hive/lib/hive-common.jar /usr/lib/sentry/lib/hive-common.jar
- ln -sf /usr/hdp/3.1.4.0-315/hive/lib/hive-exec.jar /usr/lib/sentry/lib/hive-exec.jar
- ln -sf /usr/hdp/3.1.4.0-315/hive/lib/hive-hcatalog-core.jar /usr/lib/sentry/lib/hive-hcatalog-core.jar
- ln -sf /usr/hdp/3.1.4.0-315/hive/lib/hive-hcatalog-server-extensions.jar /usr/lib/sentry/lib/hive-hcatalog-server-extensions.jar
- ln -sf /usr/hdp/3.1.4.0-315/hive/lib/hive-metastore.jar /usr/lib/sentry/lib/hive-metastore.jar
- ln -sf /usr/hdp/3.1.4.0-315/hive/lib/hive-serde.jar /usr/lib/sentry/lib/hive-serde.jar
- ln -sf /usr/hdp/3.1.4.0-315/hive/lib/hive-shims-0.23-3.1.0.3.1.4.0-315.jar /usr/lib/sentry/lib/hive-shims-0.23.jar
- ln -sf /usr/hdp/3.1.4.0-315/hive/lib/hive-shims-common.jar /usr/lib/sentry/lib/hive-shims-common.jar
- ln -sf /usr/hdp/3.1.4.0-315/hive/lib/hive-shims.jar /usr/lib/sentry/lib/hive-shims.jar
- ln -sf /usr/hdp/3.1.4.0-315/hive/lib/hive-shims-scheduler.jar /usr/lib/sentry/lib/hive-shims-scheduler.jar
- ln -sf /usr/hdp/3.1.4.0-315/zookeeper/zookeeper.jar /usr/lib/sentry/lib/zookeeper.jar
- 2. Look for other Sentry jars to swap in
- ln -fs /usr/lib/sentry/lib/sentry-provider-cache-1.5.1-cdh5.16.2.jar /usr/lib/sentry/lib/sentry-provider-cache.jar
- Swap record:
- 1. Replaced the original sentry-provider-cache-1.5.1-cdh5.14.0.jar with sentry-provider-cache-2.1.0.jar; still errors
- 2. Replaced it with sentry-provider-cache-2.0.1.jar; still errors
- 3. Replaced it with sentry-provider-cache-2.0.0.jar; still errors
- 4. Replaced it with sentry-provider-cache-1.8.0.jar; still errors
- 5. Replaced it with sentry-provider-cache-1.7.1.jar; still errors
- 6. Replaced it with sentry-provider-cache-1.7.0.jar; still errors
- Next step:
- Replacing sentry-provider-cache-1.5.1-cdh5.14.0.jar with sentry-provider-cache-1.5.1-cdh5.16.2.jar worked
- With the jar errors sorted out, a new error appeared:
- dfs.domain.socket.path
- /var/lib/hadoop-hdfs/dn_socket is the HDFS default, and the root account cannot access it
- /var/run/hdfs-sockets/dn is a directory I created myself (dn is a directory!). Tried pointing dfs.domain.socket.path in hdfs-site.xml at it to see whether that works. It worked!
- Installation succeeded!
- Next step: test impala-shell (a quick smoke test is sketched below)
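- A minimal impala-shell smoke test (run against any node where impalad is up; hostnames as used above):

```bash
# Connect to one of the impalads and run a trivial query.
impala-shell -i hadoop001 -q 'SHOW DATABASES;'

# Make the existing Hive tables visible to Impala, then list them.
impala-shell -i hadoop001 -q 'INVALIDATE METADATA; SHOW TABLES;'
```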
- A hive-metastore method call was erroring, so hive-metastore-1.2.0 was removed from host 56 and the symlink was re-pointed to the hive-metastore jar that ships with Hive

- Lesson learned: when editing the config files, mirror the settings shown in the Ambari UI