Question
I am using a Kafka producer to send prices to a topic. When I send the first message, the producer prints its configuration and then sends the message, so the first send takes much longer.
After the first message, each send takes hardly 1/2 milliseconds.
My question is: can we do something to skip the configuration part, or perform it before sending the first message?
I am using Spring Kafka in my project. I have read other questions as well, but they were not really helpful.
Application.yml
server:
  port: 8081
spring:
  kafka:
    bootstrap-servers: ***.***.*.***:9092
    producer:
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.apache.kafka.common.serialization.StringSerializer
Producer values (logged before the first send):
acks = 1
batch.size = 16384
bootstrap.servers = [192.168.1.190:9092]
buffer.memory = 33554432
client.dns.lookup = default
client.id =
compression.type = none
connections.max.idle.ms = 540000
delivery.timeout.ms = 120000
enable.idempotence = false
interceptor.classes = []
key.serializer = class org.apache.kafka.common.serialization.StringSerializer
linger.ms = 0
max.block.ms = 60000
max.in.flight.requests.per.connection = 5
max.request.size = 1048576
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
receive.buffer.bytes = 32768
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retries = 2147483647
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = https
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
transaction.timeout.ms = 60000
transactional.id = null
value.serializer = class org.apache.kafka.common.serialization.StringSerializer
I referred to the following questions, but they did not help.
Answer
During the first invocation of the KafkaProducer.send method, the Kafka producer fetches the partition metadata for the topic. Fetching the metadata blocks the send method from returning immediately. The Kafka producer caches the metadata, so subsequent sends are much faster. The producer keeps the cached metadata for metadata.max.age.ms (default 5 minutes), after which it fetches the metadata again to proactively discover any new brokers or partitions.
When your application starts, you could invoke the KafkaProducer.partitionsFor method to fetch and cache the metadata. However, when the cache expires after 5 minutes, the next send will be slow again because it re-fetches the metadata. If your Kafka environment is static, that is, no new brokers or partitions are created while your application is running, consider configuring metadata.max.age.ms to a very long duration so the metadata is kept in the cache longer.
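As a minimal sketch of the warm-up idea in Spring Kafka: KafkaTemplate exposes a partitionsFor method that delegates to the underlying producer, so calling it once at startup populates the metadata cache before the first real send. The topic name "prices" and the String/String template are assumptions based on the question, not taken from the poster's actual code.

```java
import java.util.List;

import org.apache.kafka.common.PartitionInfo;
import org.springframework.boot.context.event.ApplicationReadyEvent;
import org.springframework.context.event.EventListener;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Component;

@Component
public class ProducerWarmUp {

    private final KafkaTemplate<String, String> kafkaTemplate;

    public ProducerWarmUp(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    // Runs once the application is fully started: fetches the partition
    // metadata for the topic so the first real send does not pay the
    // metadata-fetch cost.
    @EventListener(ApplicationReadyEvent.class)
    public void warmUp() {
        // "prices" is a hypothetical topic name for illustration.
        List<PartitionInfo> partitions = kafkaTemplate.partitionsFor("prices");
        // The result is not needed; the call only populates the cache.
    }
}
```

This also triggers the producer-config logging at startup rather than on the first business message, which matches what the poster is asking for.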
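If the environment is static, the longer metadata age could be set from the existing Application.yml via arbitrary producer properties. This is a sketch, assuming a recent Spring Boot where map keys containing dots are written in bracket notation; 3600000 (one hour) is an example value, not a recommendation.

```yaml
spring:
  kafka:
    producer:
      properties:
        "[metadata.max.age.ms]": 3600000
```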