Spark: Processing multiple Kafka topics in parallel

Question

I am using Spark 1.5.2. I need to run a Spark Streaming job with Kafka as the streaming source, reading from multiple topics within Kafka and processing each topic differently.

  1. Is it a good idea to do this in the same job? If so, should I create a single stream with multiple partitions for each topic, or separate streams?
  2. I am using the Kafka direct stream. As far as I understand, Spark launches long-running receivers for each partition. I have a relatively small cluster of 6 nodes with 4 cores per node. If I have many topics, each with many partitions, would efficiency suffer because most executors would be tied up by long-running receivers? Please correct me if my understanding is wrong.

Answer

I made the following observations, in case it's helpful for someone:

  1. In the Kafka direct stream, the receivers are not run as long-running tasks. At the beginning of each batch interval, the data is first read from Kafka on the executors; once read, the processing part takes over.
  2. If we create a single stream with multiple topics, the topics are read one after the other. Also, filtering the DStream to apply different processing logic adds another step to the job (see the sketch after this list).
  3. Creating multiple streams helps in two ways: 1. You don't need a filter operation to process different topics differently. 2. You can read multiple streams in parallel (as opposed to one after the other with a single stream). To do so, there is an undocumented config parameter, spark.streaming.concurrentJobs. So, I decided to create multiple streams.
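
For contrast, here is a minimal sketch of the single-stream-plus-filter approach from point 2, assuming Spark 1.5.2 and the direct-stream API. The topic names, broker address, and the payload type prefix used for filtering are all hypothetical; the plain createDirectStream call returns (key, value) pairs that do not carry the source topic, which is why the extra filter step is needed at all.

import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

import kafka.serializer.StringDecoder;
import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaPairDStream;
import org.apache.spark.streaming.api.java.JavaPairInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka.KafkaUtils;

public class SingleStreamWithFilter {
    public static void main(String[] args) throws InterruptedException {
        // Master is expected to be supplied via spark-submit.
        SparkConf conf = new SparkConf().setAppName("single-stream-demo");
        JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(10));

        Map<String, String> kafkaParams = new HashMap<>();
        kafkaParams.put("metadata.broker.list", "broker1:9092"); // hypothetical broker

        // One stream subscribed to both topics; within a batch, all topic
        // partitions are consumed by a single job.
        Set<String> topics = new HashSet<>(Arrays.asList("orders", "clicks")); // hypothetical topics
        JavaPairInputDStream<String, String> unified = KafkaUtils.createDirectStream(
                jssc, String.class, String.class, StringDecoder.class, StringDecoder.class,
                kafkaParams, topics);

        // The (key, value) records do not carry the source topic, so this sketch
        // assumes each payload starts with a type prefix -- purely illustrative.
        JavaPairDStream<String, String> orders = unified.filter(t -> t._2().startsWith("order:"));
        JavaPairDStream<String, String> clicks = unified.filter(t -> t._2().startsWith("click:"));

        orders.print();
        clicks.print();

        jssc.start();
        jssc.awaitTermination();
    }
}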

sparkConf.set("spark.streaming.concurrentJobs", "4");
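
The line above is the setting the answer refers to. Building on it, here is a minimal sketch of the multiple-streams setup the answer settles on, again assuming Spark 1.5.2 and the direct-stream API; the topic names and broker address are hypothetical:

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

import kafka.serializer.StringDecoder;
import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaPairInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka.KafkaUtils;

public class MultipleStreams {
    public static void main(String[] args) throws InterruptedException {
        // Master is expected to be supplied via spark-submit.
        SparkConf sparkConf = new SparkConf().setAppName("multi-stream-demo");
        // Let the jobs generated by the different streams run concurrently.
        sparkConf.set("spark.streaming.concurrentJobs", "4");

        JavaStreamingContext jssc = new JavaStreamingContext(sparkConf, Durations.seconds(10));

        Map<String, String> kafkaParams = new HashMap<>();
        kafkaParams.put("metadata.broker.list", "broker1:9092"); // hypothetical broker

        // One direct stream per topic: no filter step, and each topic
        // keeps its own processing logic.
        JavaPairInputDStream<String, String> orders = KafkaUtils.createDirectStream(
                jssc, String.class, String.class, StringDecoder.class, StringDecoder.class,
                kafkaParams, Collections.singleton("orders")); // hypothetical topic
        JavaPairInputDStream<String, String> clicks = KafkaUtils.createDirectStream(
                jssc, String.class, String.class, StringDecoder.class, StringDecoder.class,
                kafkaParams, Collections.singleton("clicks")); // hypothetical topic

        // Stand-ins for the real per-topic processing.
        orders.count().print();
        clicks.count().print();

        jssc.start();
        jssc.awaitTermination();
    }
}

With one stream per topic and spark.streaming.concurrentJobs raised above its default of 1, the per-batch jobs of the different streams can be scheduled at the same time instead of queuing behind one another.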
