Problem Description
I am running a single Flink job on YARN as described here:
flink run -m yarn-cluster -yn 3 -ytm 12000
I can set the number of YARN nodes / TaskManagers with the -yn parameter above. However, I want to know whether it is possible to set the number of task slots per TaskManager. When I use the parallelism (-p) parameter, it only sets the overall parallelism, and the number of task slots is computed by dividing this value by the number of provided TaskManagers. I tried using the dynamic properties (-yD) parameter, which is supposed to "allow the user to specify additional configuration values", like this:
-yD -Dtaskmanager.numberOfTaskSlots=8
But this does not overwrite the value given in flink-conf.yaml. Is there any way to specify the number of task slots per TaskManager when running a single job on Flink (other than changing the config file)? Also, is there documentation on which dynamic properties are valid with the -yD parameter?
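For illustration only (the -p value and jar name below are hypothetical, not part of the original question), this is the behavior described above: the overall parallelism is spread across the requested TaskManagers.
flink run -m yarn-cluster -yn 3 -ytm 12000 -p 12 my_program.jar
With -yn 3 and -p 12, each of the 3 TaskManagers would end up with 12 / 3 = 4 task slots, which is the division the question refers to.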
Recommended Answer
You can use the settings of yarn-session, documented here, prefixed by y, to submit a Flink job on a YARN cluster. For example, the command
flink run -m yarn-cluster -yn 5 -yjm 768 -ytm 1400 -ys 2 -yqu streamQ my_program.jar
will submit my_program.jar as a Flink application with 5 containers, 768m of memory for the JobManager, and 1400m of memory and 2 task slots for each TaskManager, using NodeManager resources on the predefined YARN queue streamQ. See my answer to this post for other important information.
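If editing the configuration file is acceptable, the same per-TaskManager settings can also be expressed in flink-conf.yaml. A minimal sketch, assuming the Flink 1.3-era key names jobmanager.heap.mb, taskmanager.heap.mb and taskmanager.numberOfTaskSlots:
jobmanager.heap.mb: 768
taskmanager.heap.mb: 1400
taskmanager.numberOfTaskSlots: 2
The y-prefixed command-line options affect only the job being submitted, whereas a change in flink-conf.yaml applies to every job started from that client, which is why -ys is the usual way to vary the slot count per job.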