Kafka cannot connect to Zookeeper on a Linux machine

I have been trying to set up a Kafka producer and consumer on a Linux machine. I started one instance each of Zookeeper and Kafka with the following commands:

docker run -d \
  --name zookeeper \
  -p 32181:32181 \
  -e ZOOKEEPER_CLIENT_PORT=32181 \
  confluentinc/cp-zookeeper:4.1.0

docker run -d \
  --name kafka \
  --link zookeeper \
  -p 39092:39092 \
  -e KAFKA_ZOOKEEPER_CONNECT=zookeeper:32181 \
  -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://localhost:39092 \
  -e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 \
  confluentinc/cp-kafka:4.1.0
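For what it's worth, a quick way to check from inside the kafka container whether the zookeeper link alias resolves and whether port 32181 is reachable over the default bridge is something like the following (just a diagnostic sketch; it assumes getent and bash are present in the cp-kafka image, which I have not verified):

# Does the --link alias resolve inside the kafka container?
docker exec kafka getent hosts zookeeper

# Can the kafka container open a TCP connection to zookeeper:32181?
# (uses bash's built-in /dev/tcp redirection, so no extra tools are needed)
docker exec kafka bash -c 'timeout 5 bash -c "</dev/tcp/zookeeper/32181" && echo reachable || echo unreachable'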

With this setup, Kafka cannot reach Zookeeper.

The same setup works fine on a Mac machine, but not on Linux.

However, when I start the Zookeeper and Kafka instances with the --network=host option, as shown below,

docker run -d \
  --name zookeeper \
  --network=host \
  -e ZOOKEEPER_CLIENT_PORT=32181 \
  confluentinc/cp-zookeeper:4.1.0

docker run -d \
  --name kafka \
  --network=host \
  -e KAFKA_ZOOKEEPER_CONNECT=zookeeper1:32181 \
  -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://localhost:39092 \
  -e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 \
  confluentinc/cp-kafka:4.1.0

the instances come up and run, and Kafka can connect to Zookeeper.
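Presumably this works because with --network=host both containers share the host's network namespace, so the -p mappings and --link name resolution no longer apply and everything is reachable on the host's own ports. If it helps to confirm which mode each container is actually using, docker inspect can show it:

# Prints "host" for host networking, "default" for the default bridge
docker inspect -f '{{.HostConfig.NetworkMode}}' zookeeper kafka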

But I do not want to use host networking. Could anyone share a possible solution for the setup above?
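In case it is useful, one alternative to host networking that might be worth trying (only a sketch, not a confirmed fix) is to put both containers on a user-defined bridge network instead of relying on the legacy --link flag, keeping the same published port and advertised listener; the network name kafka-net below is arbitrary:

docker network create kafka-net

docker run -d \
  --name zookeeper \
  --network kafka-net \
  -p 32181:32181 \
  -e ZOOKEEPER_CLIENT_PORT=32181 \
  confluentinc/cp-zookeeper:4.1.0

docker run -d \
  --name kafka \
  --network kafka-net \
  -p 39092:39092 \
  -e KAFKA_ZOOKEEPER_CONNECT=zookeeper:32181 \
  -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://localhost:39092 \
  -e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 \
  confluentinc/cp-kafka:4.1.0

On a user-defined network, containers resolve each other by container name through Docker's embedded DNS, which replaces what --link provides on the default bridge.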

Below are the full Docker logs for Zookeeper and Kafka.

docker logs kafka

# Set environment values if they exist as arguments
if [ $# -ne 0 ]; then
  echo "===> Overriding env params with args ..."
  for var in "$@"
  do
    export "$var"
  done
fi
+ '[' 0 -ne 0 ']'
echo "===> ENV Variables ..."
+ echo '===> ENV Variables ...'
env | sort
===> ENV Variables ...
+ env
+ sort
ALLOW_UNSIGNED=false
COMPONENT=kafka
CONFLUENT_DEB_VERSION=1
CONFLUENT_MAJOR_VERSION=4
CONFLUENT_MINOR_VERSION=1
CONFLUENT_MVN_LABEL=
CONFLUENT_PATCH_VERSION=0
CONFLUENT_PLATFORM_LABEL=
CONFLUENT_VERSION=4.1.0
CUB_CLASSPATH=/etc/confluent/docker/docker-utils.jar
HOME=/root
HOSTNAME=df9a2616ba03
KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://localhost:39092
KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1
KAFKA_VERSION=1.1.0
KAFKA_ZOOKEEPER_CONNECT=zookeeper:32181
LANG=C.UTF-8
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/
PYTHON_PIP_VERSION=8.1.2
PYTHON_VERSION=2.7.9-1
SCALA_VERSION=2.11
SHLVL=1
ZOOKEEPER_ENV_ALLOW_UNSIGNED=false
ZOOKEEPER_ENV_COMPONENT=zookeeper
ZOOKEEPER_ENV_CONFLUENT_DEB_VERSION=1
ZOOKEEPER_ENV_CONFLUENT_MAJOR_VERSION=4
ZOOKEEPER_ENV_CONFLUENT_MINOR_VERSION=1
ZOOKEEPER_ENV_CONFLUENT_MVN_LABEL=
ZOOKEEPER_ENV_CONFLUENT_PATCH_VERSION=0
ZOOKEEPER_ENV_CONFLUENT_PLATFORM_LABEL=
ZOOKEEPER_ENV_CONFLUENT_VERSION=4.1.0
ZOOKEEPER_ENV_CUB_CLASSPATH=/etc/confluent/docker/docker-utils.jar
ZOOKEEPER_ENV_KAFKA_VERSION=1.1.0
ZOOKEEPER_ENV_LANG=C.UTF-8
ZOOKEEPER_ENV_PYTHON_PIP_VERSION=8.1.2
ZOOKEEPER_ENV_PYTHON_VERSION=2.7.9-1
ZOOKEEPER_ENV_SCALA_VERSION=2.11
ZOOKEEPER_ENV_ZOOKEEPER_CLIENT_PORT=32181
ZOOKEEPER_ENV_ZULU_OPENJDK_VERSION=8=8.17.0.3
ZOOKEEPER_NAME=/kafka/zookeeper
ZOOKEEPER_PORT=tcp://172.17.0.2:2181
ZOOKEEPER_PORT_2181_TCP=tcp://172.17.0.2:2181
ZOOKEEPER_PORT_2181_TCP_ADDR=172.17.0.2
ZOOKEEPER_PORT_2181_TCP_PORT=2181
ZOOKEEPER_PORT_2181_TCP_PROTO=tcp
ZOOKEEPER_PORT_2888_TCP=tcp://172.17.0.2:2888
ZOOKEEPER_PORT_2888_TCP_ADDR=172.17.0.2
ZOOKEEPER_PORT_2888_TCP_PORT=2888
ZOOKEEPER_PORT_2888_TCP_PROTO=tcp
ZOOKEEPER_PORT_32181_TCP=tcp://172.17.0.2:32181
ZOOKEEPER_PORT_32181_TCP_ADDR=172.17.0.2
ZOOKEEPER_PORT_32181_TCP_PORT=32181
ZOOKEEPER_PORT_32181_TCP_PROTO=tcp
ZOOKEEPER_PORT_3888_TCP=tcp://172.17.0.2:3888
ZOOKEEPER_PORT_3888_TCP_ADDR=172.17.0.2
ZOOKEEPER_PORT_3888_TCP_PORT=3888
ZOOKEEPER_PORT_3888_TCP_PROTO=tcp
ZULU_OPENJDK_VERSION=8=8.17.0.3
_=/usr/bin/env
echo "===> User"
+ echo '===> User'
===> User
id
+ id
uid=0(root) gid=0(root) groups=0(root)
echo "===> Configuring ..."
+ echo '===> Configuring ...'
/etc/confluent/docker/configure
===> Configuring ...
+ /etc/confluent/docker/configure
dub ensure KAFKA_ZOOKEEPER_CONNECT
+ dub ensure KAFKA_ZOOKEEPER_CONNECT
dub ensure KAFKA_ADVERTISED_LISTENERS
+ dub ensure KAFKA_ADVERTISED_LISTENERS
# By default, LISTENERS is derived from ADVERTISED_LISTENERS by replacing
# hosts with 0.0.0.0. This is good default as it ensures that the broker
# process listens on all ports.
if [[ -z "${KAFKA_LISTENERS-}" ]]
then
  export KAFKA_LISTENERS
  KAFKA_LISTENERS=$(cub listeners "$KAFKA_ADVERTISED_LISTENERS")
fi
+ [[ -z '' ]]
+ export KAFKA_LISTENERS
cub listeners "$KAFKA_ADVERTISED_LISTENERS"
++ cub listeners PLAINTEXT://localhost:39092
+ KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:39092
dub path /etc/kafka/ writable
+ dub path /etc/kafka/ writable
if [[ -z "${KAFKA_LOG_DIRS-}" ]]
then
  export KAFKA_LOG_DIRS
  KAFKA_LOG_DIRS="/var/lib/kafka/data"
fi
+ [[ -z '' ]]
+ export KAFKA_LOG_DIRS
+ KAFKA_LOG_DIRS=/var/lib/kafka/data
# advertised.host, advertised.port, host and port are deprecated. Exit if these properties are set.
if [[ -n "${KAFKA_ADVERTISED_PORT-}" ]]
then
  echo "advertised.port is deprecated. Please use KAFKA_ADVERTISED_LISTENERS instead."
  exit 1
fi
+ [[ -n '' ]]
if [[ -n "${KAFKA_ADVERTISED_HOST-}" ]]
then
  echo "advertised.host is deprecated. Please use KAFKA_ADVERTISED_LISTENERS instead."
  exit 1
fi
+ [[ -n '' ]]
if [[ -n "${KAFKA_HOST-}" ]]
then
  echo "host is deprecated. Please use KAFKA_ADVERTISED_LISTENERS instead."
  exit 1
fi
+ [[ -n '' ]]
if [[ -n "${KAFKA_PORT-}" ]]
then
  echo "port is deprecated. Please use KAFKA_ADVERTISED_LISTENERS instead."
  exit 1
fi
+ [[ -n '' ]]
# Set if ADVERTISED_LISTENERS has SSL:// or SASL_SSL:// endpoints.
if [[ $KAFKA_ADVERTISED_LISTENERS == *"SSL://"* ]]
then
  echo "SSL is enabled."
  dub ensure KAFKA_SSL_KEYSTORE_FILENAME
  export KAFKA_SSL_KEYSTORE_LOCATION="/etc/kafka/secrets/$KAFKA_SSL_KEYSTORE_FILENAME"
  dub path "$KAFKA_SSL_KEYSTORE_LOCATION" exists
  dub ensure KAFKA_SSL_KEY_CREDENTIALS
  KAFKA_SSL_KEY_CREDENTIALS_LOCATION="/etc/kafka/secrets/$KAFKA_SSL_KEY_CREDENTIALS"
  dub path "$KAFKA_SSL_KEY_CREDENTIALS_LOCATION" exists
  export KAFKA_SSL_KEY_PASSWORD
  KAFKA_SSL_KEY_PASSWORD=$(cat "$KAFKA_SSL_KEY_CREDENTIALS_LOCATION")
  dub ensure KAFKA_SSL_KEYSTORE_CREDENTIALS
  KAFKA_SSL_KEYSTORE_CREDENTIALS_LOCATION="/etc/kafka/secrets/$KAFKA_SSL_KEYSTORE_CREDENTIALS"
  dub path "$KAFKA_SSL_KEYSTORE_CREDENTIALS_LOCATION" exists
  export KAFKA_SSL_KEYSTORE_PASSWORD
  KAFKA_SSL_KEYSTORE_PASSWORD=$(cat "$KAFKA_SSL_KEYSTORE_CREDENTIALS_LOCATION")
  if [[ -n "${KAFKA_SSL_CLIENT_AUTH-}" ]] && ( [[ $KAFKA_SSL_CLIENT_AUTH == *"required"* ]] || [[ $KAFKA_SSL_CLIENT_AUTH == *"requested"* ]] )
  then
    dub ensure KAFKA_SSL_TRUSTSTORE_FILENAME
    export KAFKA_SSL_TRUSTSTORE_LOCATION="/etc/kafka/secrets/$KAFKA_SSL_TRUSTSTORE_FILENAME"
    dub path "$KAFKA_SSL_TRUSTSTORE_LOCATION" exists
    dub ensure KAFKA_SSL_TRUSTSTORE_CREDENTIALS
    KAFKA_SSL_TRUSTSTORE_CREDENTIALS_LOCATION="/etc/kafka/secrets/$KAFKA_SSL_TRUSTSTORE_CREDENTIALS"
    dub path "$KAFKA_SSL_TRUSTSTORE_CREDENTIALS_LOCATION" exists
    export KAFKA_SSL_TRUSTSTORE_PASSWORD
    KAFKA_SSL_TRUSTSTORE_PASSWORD=$(cat "$KAFKA_SSL_TRUSTSTORE_CREDENTIALS_LOCATION")
  fi
fi
+ [[ PLAINTEXT://localhost:39092 == *\S\S\L\:\/\/* ]]
# Set if KAFKA_ADVERTISED_LISTENERS has SASL_PLAINTEXT:// or SASL_SSL:// endpoints.
if [[ $KAFKA_ADVERTISED_LISTENERS =~ .*SASL_.*://.* ]]
then
  echo "SASL" is enabled.
  dub ensure KAFKA_OPTS
  if [[ ! $KAFKA_OPTS == *"java.security.auth.login.config"* ]]
  then
    echo "KAFKA_OPTS should contain 'java.security.auth.login.config' property."
  fi
fi
+ [[ PLAINTEXT://localhost:39092 =~ .*SASL_.*://.* ]]
if [[ -n "${KAFKA_JMX_OPTS-}" ]]
then
  if [[ ! $KAFKA_JMX_OPTS == *"com.sun.management.jmxremote.rmi.port"* ]]
  then
    echo "KAFKA_OPTS should contain 'com.sun.management.jmxremote.rmi.port' property. It is required for accessing the JMX metrics externally."
  fi
fi
+ [[ -n '' ]]
dub template "/etc/confluent/docker/${COMPONENT}.properties.template" "/etc/${COMPONENT}/${COMPONENT}.properties"
+ dub template /etc/confluent/docker/kafka.properties.template /etc/kafka/kafka.properties
dub template "/etc/confluent/docker/log4j.properties.template" "/etc/${COMPONENT}/log4j.properties"
+ dub template /etc/confluent/docker/log4j.properties.template /etc/kafka/log4j.properties
dub template "/etc/confluent/docker/tools-log4j.properties.template" "/etc/${COMPONENT}/tools-log4j.properties"
+ dub template /etc/confluent/docker/tools-log4j.properties.template /etc/kafka/tools-log4j.properties
echo "===> Running preflight checks ... "
+ echo '===> Running preflight checks ... '
/etc/confluent/docker/ensure
+ /etc/confluent/docker/ensure
===> Running preflight checks ...
===> Check if /var/lib/kafka/data is writable ...
export KAFKA_DATA_DIRS=${KAFKA_DATA_DIRS:-"/var/lib/kafka/data"}
+ export KAFKA_DATA_DIRS=/var/lib/kafka/data
+ KAFKA_DATA_DIRS=/var/lib/kafka/data
echo "===> Check if $KAFKA_DATA_DIRS is writable ..."
+ echo '===> Check if /var/lib/kafka/data is writable ...'
dub path "$KAFKA_DATA_DIRS" writable
+ dub path /var/lib/kafka/data writable
===> Check if Zookeeper is healthy ...
echo "===> Check if Zookeeper is healthy ..."
+ echo '===> Check if Zookeeper is healthy ...'
cub zk-ready "$KAFKA_ZOOKEEPER_CONNECT" "${KAFKA_CUB_ZK_TIMEOUT:-40}"
+ cub zk-ready zookeeper:32181 40
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:zookeeper.version=3.4.10-39d3a4f269333c922ed3db283be479f9deacaa0f, built on 03/23/2017 10:13 GMT
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:host.name=df9a2616ba03
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.version=1.8.0_102
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.vendor=Azul Systems, Inc.
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.home=/usr/lib/jvm/zulu-8-amd64/jre
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.class.path=/etc/confluent/docker/docker-utils.jar
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.io.tmpdir=/tmp
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.compiler=
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.name=Linux
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.arch=amd64
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.version=4.15.0-46-generic
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.name=root
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.home=/root
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.dir=/
[main] INFO org.apache.zookeeper.ZooKeeper - Initiating client connection, connectString=zookeeper:32181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@1ddc4ec2
[main-SendThread(zookeeper:32181)] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server zookeeper/172.17.0.2:32181. Will not attempt to authenticate using SASL (unknown error)
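The log above stops at the attempt to open a socket to zookeeper/172.17.0.2:32181, so it may also be worth checking from the host whether Zookeeper is actually answering on the published port (a rough check that assumes nc is installed on the host; ZooKeeper 3.4 should answer the ruok four-letter command with imok):

# Last few lines of the Zookeeper container's own log
docker logs zookeeper | tail -n 20

# Four-letter-word health check through the published port; expect "imok"
echo ruok | nc localhost 32181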

With the first set of commands above (default bridge networking with --link and published ports), Kafka should be able to connect to Zookeeper; the same setup works on a Mac machine, just not on Linux.


