Kafka-Eagle Monitoring

The Kafka-Eagle framework monitors the overall health of a Kafka cluster and is widely used in production environments.
Before proceeding, note that the monitoring tool requires MySQL as its persistence store, so a MySQL instance must be available.
1. Kafka Environment Preparation
(1) Stop the Kafka cluster.
(2) Edit /opt/module/kafka/bin/kafka-server-start.sh:

vim bin/kafka-server-start.sh
Change the following parameter setting:
if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
    export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G"
fi
to:
if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
    export KAFKA_HEAP_OPTS="-server -Xms2G -Xmx2G -XX:PermSize=128m -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -XX:ParallelGCThreads=8 -XX:ConcGCThreads=5 -XX:InitiatingHeapOccupancyPercent=70"
    export JMX_PORT="9999"
    # export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G"
fi

This raises the heap to 2 GB, switches to the G1 collector, and exposes JMX on port 9999 so Kafka-Eagle can collect metrics. (Note that -XX:PermSize is ignored on JDK 8 and later, where the permanent generation was replaced by Metaspace.)
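The `"x$VAR" = "x"` test above is a classic shell idiom: the defaults are applied only when KAFKA_HEAP_OPTS is empty or unset, and the `x` prefix keeps the comparison well-formed even for empty values. A minimal runnable sketch of that guard:

```shell
# Guard idiom from kafka-server-start.sh: set defaults only when unset.
unset KAFKA_HEAP_OPTS
if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
    KAFKA_HEAP_OPTS="-Xmx1G -Xms1G"   # default applied because the variable was unset
fi
echo "$KAFKA_HEAP_OPTS"

# A value set beforehand (e.g. from the environment) is left untouched:
KAFKA_HEAP_OPTS="-Xmx4G"
if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
    KAFKA_HEAP_OPTS="-Xmx1G -Xms1G"
fi
echo "$KAFKA_HEAP_OPTS"
```

This is why exporting KAFKA_HEAP_OPTS in your environment before starting the broker overrides the script's defaults.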
Note: after modifying the file, distribute it to the other nodes before starting Kafka:
xsync kafka-server-start.sh
2. Kafka-Eagle Installation
Download the release from the official website.
After downloading, extract the archive, rename the directory, and edit the configuration file:
vim system-config.properties

#
# multi zookeeper & kafka cluster list
# Settings prefixed with 'kafka.eagle.' will be deprecated, use 'efak.' instead
#
efak.zk.cluster.alias=cluster1
cluster1.zk.list=hadoop102:2181,hadoop103:2181,hadoop104:2181/kafka
#
# zookeeper enable acl
#
cluster1.zk.acl.enable=false
cluster1.zk.acl.schema=digest
cluster1.zk.acl.username=test
cluster1.zk.acl.password=test123
#
# broker size online list
#
cluster1.efak.broker.size=20
#
# zk client thread limit
#
kafka.zk.limit.size=32
#
# EFAK webui port
#
efak.webui.port=8048
#
# kafka jmx acl and ssl authenticate
#
cluster1.efak.jmx.acl=false
cluster1.efak.jmx.user=keadmin
cluster1.efak.jmx.password=keadmin123
cluster1.efak.jmx.ssl=false
cluster1.efak.jmx.truststore.location=/data/ssl/certificates/kafka.truststore
cluster1.efak.jmx.truststore.password=ke123456
#
# kafka offset storage
#
# offsets are stored in Kafka itself
cluster1.efak.offset.storage=kafka
#
# kafka jmx uri
#
cluster1.efak.jmx.uri=service:jmx:rmi:///jndi/rmi://%s/jmxrmi
#
# kafka metrics, 15 days by default
#
efak.metrics.charts=true
efak.metrics.retain=15
#
# kafka sql topic records max
#
efak.sql.topic.records.max=5000
efak.sql.topic.preview.records.max=10
#
# delete kafka topic token
#
efak.topic.token=keadmin
#
# kafka sasl authenticate
#
cluster1.efak.sasl.enable=false
cluster1.efak.sasl.protocol=SASL_PLAINTEXT
cluster1.efak.sasl.mechanism=SCRAM-SHA-256
cluster1.efak.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="kafka" password="kafka-eagle";
cluster1.efak.sasl.client.id=
cluster1.efak.blacklist.topics=
cluster1.efak.sasl.cgroup.enable=false
cluster1.efak.sasl.cgroup.topics=
cluster2.efak.sasl.enable=false
cluster2.efak.sasl.protocol=SASL_PLAINTEXT
cluster2.efak.sasl.mechanism=PLAIN
cluster2.efak.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="kafka" password="kafka-eagle";
cluster2.efak.sasl.client.id=
cluster2.efak.blacklist.topics=
cluster2.efak.sasl.cgroup.enable=false
cluster2.efak.sasl.cgroup.topics=
#
# kafka ssl authenticate
#
cluster3.efak.ssl.enable=false
cluster3.efak.ssl.protocol=SSL
cluster3.efak.ssl.truststore.location=
cluster3.efak.ssl.truststore.password=
cluster3.efak.ssl.keystore.location=
cluster3.efak.ssl.keystore.password=
cluster3.efak.ssl.key.password=
cluster3.efak.ssl.endpoint.identification.algorithm=https
cluster3.efak.blacklist.topics=
cluster3.efak.ssl.cgroup.enable=false
cluster3.efak.ssl.cgroup.topics=
#
# kafka sqlite jdbc driver address
#
# MySQL connection settings
efak.driver=com.mysql.jdbc.Driver
efak.url=jdbc:mysql://hadoop102:3306/ke?useUnicode=true&characterEncoding=UTF-8&zeroDateTimeBehavior=convertToNull
efak.username=root
efak.password=000000
#
# kafka mysql jdbc driver address
#
# efak.driver=com.mysql.cj.jdbc.Driver
# efak.url=jdbc:mysql://127.0.0.1:3306/ke?useUnicode=true&characterEncoding=UTF-8&zeroDateTimeBehavior=convertToNull
# efak.username=root
# efak.password=123456
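Two settings in this file are easy to get wrong: the ZooKeeper list must carry the /kafka chroot used when the cluster was set up, and the web UI port decides where to log in later. A minimal sanity-check sketch over the values used in this document (embedded here so it runs standalone; in practice you would grep the real conf/system-config.properties):

```shell
# Sanity-check sketch: extract key settings from the config shown above.
props='efak.zk.cluster.alias=cluster1
cluster1.zk.list=hadoop102:2181,hadoop103:2181,hadoop104:2181/kafka
cluster1.efak.offset.storage=kafka
efak.webui.port=8048'

port=$(printf '%s\n' "$props" | sed -n 's/^efak\.webui\.port=//p')
zk=$(printf '%s\n' "$props" | sed -n 's/^cluster1\.zk\.list=//p')

echo "webui port: $port"
# the zk list must end with the /kafka chroot used by this cluster
case $zk in
  */kafka) echo "zk chroot ok" ;;
  *) echo "zk chroot missing" ;;
esac
```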
Add the environment variables:
sudo vim /etc/profile.d/my_env.sh

# kafkaEFAK
export KE_HOME=/opt/module/efak
export PATH=$PATH:$KE_HOME/bin
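The two export lines simply put the EFAK bin directory on the PATH; a small sketch that reproduces their effect in the current shell and verifies the result:

```shell
# Reproduce the effect of my_env.sh and verify that the EFAK bin
# directory ends up on PATH (path from this document).
export KE_HOME=/opt/module/efak
export PATH=$PATH:$KE_HOME/bin

case ":$PATH:" in
  *":$KE_HOME/bin:"*) echo "KE_HOME on PATH" ;;
  *) echo "missing" ;;
esac
```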
Note: run source /etc/profile so the variables take effect in the current shell.
Start Kafka-Eagle (make sure ZooKeeper and Kafka are running first):

bin/ke.sh start
3. Kafka-Eagle Web UI
Log in to the page to view the monitoring data:
http://192.168.10.102:8048/
Kafka-KRaft Mode

1. Kafka-KRaft Architecture
2. Kafka-KRaft Cluster Deployment
3. Kafka-KRaft Cluster Start/Stop Script
Create the script file kf2.sh in the /home/atguigu/bin directory:
vim kf2.sh

The script:

#!/bin/bash
case $1 in
"start"){
	for i in hadoop102 hadoop103 hadoop104
	do
		echo " --------starting Kafka2 on $i -------"
		ssh $i "/opt/module/kafka2/bin/kafka-server-start.sh -daemon /opt/module/kafka2/config/kraft/server.properties"
	done
};;
"stop"){
	for i in hadoop102 hadoop103 hadoop104
	do
		echo " --------stopping Kafka2 on $i -------"
		ssh $i "/opt/module/kafka2/bin/kafka-server-stop.sh"
	done
};;
esac
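The script dispatches on its first argument and loops over the three hosts, running the start or stop command over ssh on each. A dry-run sketch of that control flow, with echo in place of ssh so it can be tested locally, plus a fallback usage message (my addition, not in the original script) for unrecognized arguments:

```shell
# Dry-run sketch of kf2.sh's dispatch logic: same case/for structure,
# echo standing in for the ssh calls.
dispatch() {
  case $1 in
  "start")
    for i in hadoop102 hadoop103 hadoop104; do
      echo "start $i"
    done
    ;;
  "stop")
    for i in hadoop102 hadoop103 hadoop104; do
      echo "stop $i"
    done
    ;;
  *)
    # hypothetical fallback, not present in the original script
    echo "usage: kf2.sh {start|stop}"
    ;;
  esac
}

dispatch start
```

Note that the original uses the compact `"start"){ ... };;` form, which is also valid bash: `{ ... }` is a group command used as the pattern's command list.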
Add execute permission:

chmod +x kf2.sh
Start the cluster:

kf2.sh start
Stop the cluster:

kf2.sh stop