
Problem Description



I am new to Hadoop. When I run a job, I see the aggregate resource allocation for that job as 251248654 MB-seconds and 24462 vcore-seconds. However, when I look at the details of the cluster, it shows 888 VCores-total and 15.90 TB Memory-total. Can anyone tell me how these are related? What do MB-seconds and vcore-seconds refer to for a job?

Is there any material online to learn about these? I tried searching, but didn't get a proper answer.

Solution
VCores-Total: Indicates the total number of VCores available in the cluster
Memory-Total: Indicates the total memory available in the cluster.

For e.g. I have a single node cluster, where, I have set memory requirements per container to be: 1228 MB (determined by config: yarn.scheduler.minimum-allocation-mb) and vCores per container to 1 vCore (determined by config: yarn.scheduler.minimum-allocation-vcores).

I have set: yarn.nodemanager.resource.memory-mb to 9830 MB. So, there can be totally 8 containers per node (9830 / 1228 = 8).

So, for my cluster:

VCores-Total = 1 (node) * 8 (containers) * 1 (vCore per container) = 8
Memory-Total = 1 (node) * 8 (containers) * 1228 MB (memory per container) = 9824 MB = 9.59375 GB = 9.6 GB
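The arithmetic above can be sketched in a short Python snippet (the configuration values are the ones from this example, not YARN defaults):

```python
# Cluster capacity derived from per-container settings.
# Values are from the example above; in practice they come from yarn-site.xml.
nodes = 1
node_memory_mb = 9830          # yarn.nodemanager.resource.memory-mb
container_memory_mb = 1228     # yarn.scheduler.minimum-allocation-mb
vcores_per_container = 1       # yarn.scheduler.minimum-allocation-vcores

# Containers that fit on one node: 9830 // 1228 = 8
containers_per_node = node_memory_mb // container_memory_mb

vcores_total = nodes * containers_per_node * vcores_per_container    # 8
memory_total_mb = nodes * containers_per_node * container_memory_mb  # 9824
memory_total_gb = memory_total_mb / 1024                             # ~9.6

print(vcores_total, memory_total_mb, round(memory_total_gb, 1))
```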

My cluster metrics page confirmed these totals (the screenshot is not reproduced here).

Now let's see "MB-seconds" and "vcore-seconds". As per the description in the code (ApplicationResourceUsageReport.java):

MB-seconds: The aggregated amount of memory (in megabytes) the application has allocated times the number of seconds the application has been running.

vcore-seconds: The aggregated number of vcores that the application has allocated times the number of seconds the application has been running.

The description is self explanatory (remember the key word: Aggregated).
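As a rough sketch of how these aggregates accumulate (a simplified model, not the actual Hadoop code): each container contributes its allocated memory and vcores multiplied by the seconds it was held.

```python
# Simplified model of aggregate resource usage (not the real YARN code):
# each container contributes (memory_mb * seconds) and (vcores * seconds).
def aggregate_usage(containers):
    """containers: list of (memory_mb, vcores, runtime_seconds) tuples."""
    mb_seconds = sum(mem * secs for mem, _, secs in containers)
    vcore_seconds = sum(vc * secs for _, vc, secs in containers)
    return mb_seconds, vcore_seconds

# Two hypothetical containers of 1228 MB / 1 vcore, running 100 s and 200 s.
mb_s, vc_s = aggregate_usage([(1228, 1, 100), (1228, 1, 200)])
print(mb_s, vc_s)  # 368400 MB-seconds, 300 vcore-seconds
```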

Let me explain this with an example. I ran a DistCp job (which spawned 25 containers), for which I got the following:

Aggregate Resource Allocation : 10361661 MB-seconds, 8424 vcore-seconds

Now, let's do some rough calculation on how much time each container took:

For memory:
10361661 MB-seconds = 10361661 / 25 (containers) / 1228 MB (memory per container) = 337.51 seconds = 5.62 minutes

For CPU
8424 vcore-seconds = 8424 / 25 (containers) / 1 (vCore per container) = 336.96 seconds = 5.616 minutes

This indicates that, on average, each container took about 5.62 minutes to execute.
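The reverse calculation above (which assumes all 25 containers had identical allocations, so it is only a rough average) can be written as:

```python
# Rough average container runtime from a job's aggregate metrics
# (assumes every container had the same allocation, as in this DistCp example).
agg_mb_seconds = 10361661
agg_vcore_seconds = 8424
containers = 25
container_memory_mb = 1228
vcores_per_container = 1

avg_secs_from_memory = agg_mb_seconds / containers / container_memory_mb
avg_secs_from_cpu = agg_vcore_seconds / containers / vcores_per_container

print(round(avg_secs_from_memory, 2))  # 337.51 seconds
print(round(avg_secs_from_cpu, 2))     # 336.96 seconds
```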

Hope this makes it clear. You can execute a job and confirm it yourself.
