This article explains how to fix the "Broker may not be available" error that appears after an SSH broken pipe; it should be a useful reference for anyone running into the same problem.

Problem Description

I connect to the server through SSH and launch my ZooKeeper, Kafka, and Debezium connector. After a while, only the Kafka terminal tab gets kicked out with the following error:

packet_write_wait: Connection to **.**.***.*** port 22: Broken pipe

My connector output is:

[2019-07-10 10:04:49,563] WARN [Producer clientId=producer-1] Connection to node 0 (ip-***.**.**.***.eu-west-3.compute.internal/***.**.**.***:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient:725)

[2019-07-10 10:04:49,676] ERROR WorkerSourceTask{id=mongodb-source-connector-0} Failed to flush, timed out while waiting for producer to flush outstanding 8 messages (org.apache.kafka.connect.runtime.WorkerSourceTask:420)

[2019-07-10 10:04:49,676] ERROR WorkerSourceTask{id=mongodb-source-connector-0} Failed to commit offsets (org.apache.kafka.connect.runtime.SourceTaskOffsetCommitter:111)
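These messages mean the Connect worker's producer can no longer reach the broker on port 9092, which is consistent with the Kafka process having died along with the dropped SSH session. As a quick check (a sketch, assuming the Confluent systemd packaging used in the answer below; on a plain tarball install you would look for the java process instead), you can verify whether the broker is still running and listening:

# Is the broker service alive? (unit name assumes Confluent's packages)
sudo systemctl status confluent-kafka

# Is anything still listening on the broker port?
ss -ltnp | grep 9092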

I don't want to restart everything manually each time this happens. How can I fix it so that I only have to SSH in once, launch the servers and the connector, and then exit?
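Two separate things need addressing here: the SSH connection dropping, and the processes dying with it. For the first part, a client-side keep-alive usually prevents the idle disconnect that produces packet_write_wait (a minimal sketch, assuming an OpenSSH client; the host alias is hypothetical):

# ~/.ssh/config -- probe the server every 60 s, give up after 3 missed replies
Host kafka-server
    HostName **.**.***.***
    ServerAliveInterval 60
    ServerAliveCountMax 3

This only keeps the session alive; it does not let you log out and leave ZooKeeper, Kafka, and the connector running, which is what the systemd-based answer below addresses.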

Recommended Answer

Alright, so what I did was:

sudo systemctl enable confluent-zookeeper
sudo systemctl enable confluent-kafka
sudo systemctl start confluent-zookeeper

I got a file-access error, fixed the permissions with chmod, and now ZooKeeper works fine. Then:

sudo systemctl start confluent-kafka

This gave an error I still couldn't fix; this is the output:

at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at sun.nio.fs.UnixFileSystemProvider.newFileChannel(UnixFileSystemProvider.j
at java.nio.channels.FileChannel.open(FileChannel.java:287)
at java.nio.channels.FileChannel.open(FileChannel.java:335)
at org.apache.kafka.common.record.FileRecords.openChannel(FileRecords.java:4
at org.apache.kafka.common.record.FileRecords.open(FileRecords.java:410)
at org.apache.kafka.common.record.FileRecords.open(FileRecords.java:419)
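The truncated trace shows the broker failing inside FileRecords.open, i.e. while opening a log segment file on disk. Given that ZooKeeper also hit a file-access problem under systemd, a plausible cause (an assumption, not confirmed by the output above) is that the Kafka data directory is still owned by the user who ran Kafka manually, while the systemd unit runs the broker as the Confluent service account. A sketch of how to check and fix that, where the data path and the cp-kafka:confluent user/group are assumptions based on a default Confluent package install:

# Find the data directory the broker is configured to use
grep ^log.dirs /etc/kafka/server.properties

# See who currently owns it
ls -ld /var/lib/kafka

# Hand it over to the service account the systemd unit runs as
sudo chown -R cp-kafka:confluent /var/lib/kafka
sudo systemctl restart confluent-kafka

Once the broker starts cleanly, the connector can be kept running the same way: Confluent's packages also ship a Kafka Connect systemd unit (confluent-kafka-connect on recent versions, an assumption depending on your install) that you can enable and start just like the ZooKeeper and Kafka units, so everything survives logging out of the SSH session.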

That's all for this article on how to fix "Broker may not be available" after a broken pipe; hopefully the recommended answer above is helpful.
