I have a consumer script that processes each message and then manually commits the offset back to the topic:

```python
CONSUMER = KafkaConsumer(
    KAFKA_TOPIC,
    bootstrap_servers=[KAFKA_SERVER],
    auto_offset_reset="earliest",
    max_poll_records=100,
    enable_auto_commit=False,
    group_id=CONSUMER_GROUP,
    # Use the RoundRobin partition assignment strategy
    partition_assignment_strategy=[RoundRobinPartitionAssignor],
    value_deserializer=lambda x: json.loads(x.decode('utf-8')))

while True:
    count += 1
    LOGGER.info("--------------Poll {0}---------".format(count))
    for msg in CONSUMER:
        # Process msg.value

        # Commit the offset for this message's partition
        tp = TopicPartition(msg.topic, msg.partition)
        offsets = {tp: OffsetAndMetadata(msg.offset, None)}
        CONSUMER.commit(offsets=offsets)
```

Processing each message takes some time, and I get this error:

```
kafka.errors.CommitFailedError: CommitFailedError: Commit cannot be completed since the group has already rebalanced and assigned the partitions to another member. This means that the time between subsequent calls to poll() was longer than the configured max_poll_interval_ms, which typically implies that the poll loop is spending too much time message processing. You can address this either by increasing the rebalance timeout with max_poll_interval_ms, or by reducing the maximum size of batches returned in poll() with max_poll_records.

Process finished with exit code 1
```

What I expect to learn:

a) How do I fix this error?
b) How do I make sure my manual commits work correctly?
c) What is the correct way to commit offsets?

I have already read through Difference between session.timeout.ms and max.poll.interval.ms for Kafka 0.10.0.0 and later versions, which relates to my problem, so any help with tuning the poll, session, or heartbeat timings is much appreciated.

Apache Kafka: 2.11-2.1.0
kafka-python: 1.4.4

Best answer

The consumer's session.timeout.ms should be smaller than group.max.session.timeout.ms on the Kafka broker.

About python - What is the correct way to manually commit offsets to a kafka topic, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/54569082/
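One detail worth noting beyond the timeout advice: in Kafka, the offset you commit should be the offset of the *next* message to consume, i.e. `msg.offset + 1`; committing `msg.offset` itself causes the last processed message to be re-read after a restart or rebalance. A minimal sketch of that bookkeeping, using a hypothetical plain-Python stand-in for the consumer records so it runs without a broker:

```python
from collections import namedtuple

# Hypothetical stand-in for the fields of kafka-python's ConsumerRecord used here.
Record = namedtuple("Record", ["topic", "partition", "offset"])

def commit_offsets_for(batch):
    """Return the {(topic, partition): offset} map to commit for a processed
    batch: per partition, the highest processed offset plus one."""
    offsets = {}
    for msg in batch:
        tp = (msg.topic, msg.partition)
        # Commit offset + 1: the position the consumer should resume from.
        offsets[tp] = max(offsets.get(tp, 0), msg.offset + 1)
    return offsets

batch = [Record("t", 0, 41), Record("t", 0, 42), Record("t", 1, 7)]
print(commit_offsets_for(batch))  # {('t', 0): 43, ('t', 1): 8}
```

In the real consumer this map would be wrapped as `{TopicPartition(t, p): OffsetAndMetadata(offset, None)}` and passed to `CONSUMER.commit(offsets=...)`.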
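Combining the answer's advice (keep session.timeout.ms below the broker's group.max.session.timeout.ms) with the error message's own suggestions (raise max_poll_interval_ms or lower max_poll_records) might look like the configuration sketch below. The constants KAFKA_TOPIC, KAFKA_SERVER, and CONSUMER_GROUP are the question's own, and the numeric values are illustrative assumptions to be tuned against the actual per-message processing time:

```python
from kafka import KafkaConsumer

# Sketch only; the timeout values are assumptions, not recommendations.
consumer = KafkaConsumer(
    KAFKA_TOPIC,
    bootstrap_servers=[KAFKA_SERVER],
    group_id=CONSUMER_GROUP,
    enable_auto_commit=False,
    auto_offset_reset="earliest",
    max_poll_records=10,           # smaller batches return to poll() sooner
    max_poll_interval_ms=600000,   # 10-minute budget to process one batch
    session_timeout_ms=30000,      # must stay below the broker's group.max.session.timeout.ms
    heartbeat_interval_ms=10000,   # typically about 1/3 of session_timeout_ms
)
```

With enable_auto_commit=False, shrinking max_poll_records is usually the safer first step, since it bounds how long the loop spends between polls without loosening the group's failure detection.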