This article looks at how to handle SPARK_PUBLIC_DNS and SPARK_LOCAL_IP for a Spark standalone cluster whose workers run in Docker containers; the question and answer below should be a useful reference for anyone facing the same problem.

Problem Description

So far I have run Spark only on Linux machines and VMs (bridged networking), but now I am interested in utilizing more computers as slaves. It would be handy to distribute a Spark slave Docker container onto computers and have them automatically connect themselves to a hard-coded Spark master ip. This sort of works already, but I am having trouble configuring the right SPARK_LOCAL_IP (or the --host parameter for start-slave.sh) on the slave containers.
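
For reference, a hedged sketch of the kind of worker launch involved; the /opt/spark install path, the container ip, and the master ip 10.0.2.2 are placeholders, not values from the post:

    # Start a Spark worker inside the slave container against a hard-coded master.
    # --host is the setting in question; it plays the same role as SPARK_LOCAL_IP.
    /opt/spark/sbin/start-slave.sh --host 172.17.0.45 spark://10.0.2.2:7077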

I think I correctly configured the SPARK_PUBLIC_DNS env variable to match the host machine's network-accessible ip (from the 10.0.x.x address space); at least it is shown on the Spark master web UI and accessible by all machines.
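
For example, with 10.0.1.5 standing in for the host's network-accessible ip (the post only says 10.0.x.x), the variable can be set in conf/spark-env.sh or passed into the container:

    # conf/spark-env.sh: advertise the host's ip instead of the container's
    export SPARK_PUBLIC_DNS=10.0.1.5

    # or pass it in at launch ("spark-slave" is a hypothetical image name)
    docker run -e SPARK_PUBLIC_DNS=10.0.1.5 spark-slave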

I have also set SPARK_WORKER_OPTS and Docker port forwards as instructed at http://sometechshit.blogspot.ru/2015/04/running-spark-standalone-cluster-in.html, but in my case the Spark master is running on another machine and not inside Docker. I am launching Spark jobs from another machine within the network, possibly one that also runs a slave itself.
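
The gist of that approach is pinning the worker's otherwise randomly chosen ports so Docker can forward them; the port numbers and image name below are illustrative assumptions, not values from the post:

    # conf/spark-env.sh: pin the worker's ports
    export SPARK_WORKER_PORT=8888
    export SPARK_WORKER_WEBUI_PORT=8081

    # docker run: forward those same ports from the host into the container
    docker run -d -p 8888:8888 -p 8081:8081 spark-slave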

Things I have tried (sketched as commands after this list):


  1. Don't configure SPARK_LOCAL_IP at all: the slave binds to the container's ip (something like 172.17.0.45) and cannot be reached from the master or the driver; computation still works most of the time, but not always

  2. Bind to 0.0.0.0: the slaves talk to the master and establish some connection, but it dies, another slave shows up and goes away, and they keep looping like this

  3. Bind to the host's ip: startup fails because that ip is not visible inside the container, although it would be reachable by others once port forwarding is configured
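
Expressed as commands (the install path, ips, and master URL are illustrative), the three attempts were roughly:

    # 1. No SPARK_LOCAL_IP: the worker binds to the container's ip, e.g. 172.17.0.45
    /opt/spark/sbin/start-slave.sh spark://10.0.2.2:7077

    # 2. Bind to all interfaces: workers register, then keep dying and re-registering
    /opt/spark/sbin/start-slave.sh --host 0.0.0.0 spark://10.0.2.2:7077

    # 3. Bind to the host's ip: startup fails, that ip is not visible in the container
    /opt/spark/sbin/start-slave.sh --host 10.0.1.5 spark://10.0.2.2:7077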

I wonder why the configured SPARK_PUBLIC_DNS isn't being used when connecting to slaves. I thought SPARK_LOCAL_IP would only affect local binding and would not be revealed to external computers.

At https://databricks.gitbooks.io/databricks-spark-knowledge-base/content/troubleshooting/connectivity_issues.html they instruct to "set SPARK_LOCAL_IP to a cluster-addressable hostname for the driver, master, and worker processes"; is this the only option? I would rather avoid the extra DNS configuration and just use IPs to configure traffic between computers. Or is there an easy way to achieve this?

EDIT:

To summarize the current set-up:


  • The master runs on Linux (a VirtualBox VM on Windows with bridged networking)

  • The driver submits jobs from another Windows machine; this works great

  • The Docker image used to start the slaves is distributed as a saved .tar.gz file, loaded (curl xyz | gunzip | docker load, spelled out below) and started on other machines within the network; it has this problem with the private/public ip configuration
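
Spelled out, that distribution pipeline is roughly the following, with xyz standing in for the download URL (as in the original) and spark-slave a hypothetical image name:

    # On the build machine: save the image as a compressed tarball
    docker save spark-slave | gzip > spark-slave.tar.gz

    # On each slave machine: fetch and load it
    curl xyz | gunzip | docker load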

Solution

I think I found a solution for my use-case (one Spark container / host OS):

  1. Use --net host with docker run => the host's eth0 is visible in the container
  2. Set SPARK_PUBLIC_DNS and SPARK_LOCAL_IP to the host's ip, ignoring docker0's 172.x.x.x address (combined into one command below)
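
Combined into one launch command, this might look as follows; 10.0.1.5 stands for the host's ip, and the image name, Spark path, and master URL are assumptions:

    # --net host: the container shares the host's network stack, so eth0 is visible
    # and both variables can point at the host's ip rather than docker0's 172.x.x.x.
    # Note: start-slave.sh backgrounds the worker, so a real image would need a
    # foreground entrypoint (e.g. spark-class org.apache.spark.deploy.worker.Worker).
    docker run -d --net host \
        -e SPARK_PUBLIC_DNS=10.0.1.5 \
        -e SPARK_LOCAL_IP=10.0.1.5 \
        spark-slave \
        /opt/spark/sbin/start-slave.sh spark://10.0.2.2:7077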

Spark can bind to the host's ip and other machines can communicate with it as well; port forwarding takes care of the rest. No DNS or complex configs were needed. I haven't thoroughly tested this, but so far so good.
