Problem Description
I have started all the components for a multi-broker configuration in a cluster of containers running on a single machine. I have used the shell scripts found in https://archive.apache.org/dist/kafka/2.0.0/kafka_2.11-2.0.0.tgz
- Started Zookeeper with zookeeper.properties
- Started 3 brokers with 3 different server properties files. They differ only in these configuration values (a sample file follows):
broker.id
log.dirs
port
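For reference, the third broker's file might look like this; the file name and log path are assumptions, and the rest would be the stock config/server.properties from the tarball:
# server-2.properties - only the values that differ between the three brokers
broker.id=2
log.dirs=/tmp/kafka-logs-2
port=9094
# in Kafka 2.0 the preferred equivalent of "port" is listeners=PLAINTEXT://localhost:9094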
I have also tried to change offsets.topic.replication.factor and transaction.state.log.replication.factor, but I don't believe they are relevant.
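For reference, these are the broker settings that control the replication of Kafka's internal topics; note that they only apply when the internal topics are first created:
# in each broker's server.properties, before __consumer_offsets is first created
offsets.topic.replication.factor=3
transaction.state.log.replication.factor=3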
Note: the order in which I started the brokers is 0, 1, 2.
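For completeness, the components were started with the scripts from the tarball; a sketch, assuming per-broker property files named server-0.properties through server-2.properties:
bin/zookeeper-server-start.sh config/zookeeper.properties
bin/kafka-server-start.sh config/server-0.properties
bin/kafka-server-start.sh config/server-1.properties
bin/kafka-server-start.sh config/server-2.properties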
- Created a topic with replication factor 3 and one partition:
bin/kafka-topics.sh --create --topic repl_topic --zookeeper localhost:2181 --replication-factor 3 --partitions 1
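To confirm that the partition has three replicas spread over brokers 0, 1 and 2, the topic can be described:
bin/kafka-topics.sh --describe --topic repl_topic --zookeeper localhost:2181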
- Started a console producer and consumer:
bin/kafka-console-producer.sh --topic repl_topic --broker-list localhost:9092,localhost:9093,localhost:9094
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092,localhost:9093,localhost:9094 --topic repl_topic --from-beginning
The producer and consumer appear to work correctly. However, if I shut down broker 0 with Ctrl-C (even when it is not the leader), the consumer receives a warning and no longer receives messages from the producer. Only when broker 0 is up again does the consumer receive all the messages.
The consumer depends on broker 0 only; it does not interact with the others.
Why?
Recommended Answer
I have finally figured out the problem and I have fixed it. Checking __consumer_offsets, I noticed that it was not replicated:
bin/kafka-topics.sh --topic __consumer_offsets --zookeeper localhost:2181 --describe
Topic:__consumer_offsets PartitionCount:50 ReplicationFactor:1 Configs:segment.bytes=104857600,cleanup.policy=compact,compression.type=producer
Topic: __consumer_offsets Partition: 0 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 1 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 2 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 3 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 4 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 5 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 6 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 7 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 8 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 9 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 10 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 11 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 12 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 13 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 14 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 15 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 16 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 17 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 18 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 19 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 20 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 21 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 22 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 23 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 24 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 25 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 26 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 27 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 28 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 29 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 30 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 31 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 32 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 33 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 34 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 35 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 36 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 37 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 38 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 39 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 40 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 41 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 42 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 43 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 44 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 45 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 46 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 47 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 48 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 49 Leader: 0 Replicas: 0 Isr: 0
Indeed, the first time I started a consumer, it was for a topic with a replication factor of 1. At that moment the consumer triggered the creation of __consumer_offsets with only one replica. The replication factor of an existing topic is not updated retroactively, so the other brokers never received a copy of it and could not take over when broker 0 went down.
A new replication level can be established per partition with the following command:
bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 --reassignment-json-file new_reassignment.json --execute
The new_reassignment.json file used there has the following content:
{"version":1,"partitions":[
{"topic":"__consumer_offsets","partition":0,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":1,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":2,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":3,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":4,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":5,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":6,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":7,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":8,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":9,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":10,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":11,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":12,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":13,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":14,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":15,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":16,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":17,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":18,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":19,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":20,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":21,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":22,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":23,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":24,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":25,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":26,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":27,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":28,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":29,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":30,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":31,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":32,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":33,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":34,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":35,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":36,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":37,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":38,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":39,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":40,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":41,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":42,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":43,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":44,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":45,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":46,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":47,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":48,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":49,"replicas":[0,1,2],"log_dirs":["any","any","any"]}]}
At this point, the failure of any broker can be handled successfully.
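The progress of the reassignment can be checked with the same tool, and the new replica lists confirmed by describing the topic again:
bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 --reassignment-json-file new_reassignment.json --verify
bin/kafka-topics.sh --describe --topic __consumer_offsets --zookeeper localhost:2181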
The number of partitions in this particular topic is set by default to 50 (offsets.topic.num.partitions). The first consumer triggers its creation. The topic is mainly used by consumers to commit the last consumed offset for each topic partition. If this __consumer_offsets topic is not available to the other brokers, they have no way to coordinate the consumer, and the consumer stops receiving new messages.
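The committed offsets can be inspected with the consumer-groups tool from the same tarball; the console consumer's group name is auto-generated, so the console-consumer-XXXXX id below is a placeholder:
bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --list
bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --describe --group console-consumer-XXXXX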