Problem description
I have a docker-compose.yml with the image and configuration below:
version: '3'
services:
  spark-master:
    image: bde2020/spark-master:2.4.4-hadoop2.7
    container_name: spark-master
    ports:
      - "8080:8080"
      - "7077:7077"
    environment:
      - INIT_DAEMON_STEP=setup_spark
  spark-worker-1:
    image: bde2020/spark-worker:2.4.4-hadoop2.7
    container_name: spark-worker-1
    depends_on:
      - spark-master
    ports:
      - "8081:8081"
    environment:
      - "SPARK_MASTER=spark://spark-master:7077"
Here is the docker-compose up log ---> https://jpst.it/1Xc4K
With this file the containers come up and run fine; the Spark worker connects to the Spark master without any issues. The problem is that I then created a drone.yml and added a services section with:
services:
  jce-cassandra:
    image: cassandra:3.0
    ports:
      - "9042:9042"
  jce-elastic:
    image: elasticsearch:5.6.16-alpine
    ports:
      - "9200:9200"
    environment:
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
  janusgraph:
    image: janusgraph/janusgraph:latest
    ports:
      - "8182:8182"
    environment:
      JANUS_PROPS_TEMPLATE: cassandra-es
      janusgraph.storage.backend: cql
      janusgraph.storage.hostname: jce-cassandra
      janusgraph.index.search.backend: elasticsearch
      janusgraph.index.search.hostname: jce-elastic
    depends_on:
      - jce-elastic
      - jce-cassandra
  spark-master:
    image: bde2020/spark-master:2.4.4-hadoop2.7
    container_name: spark-master
    ports:
      - "8080:8080"
      - "7077:7077"
    environment:
      - INIT_DAEMON_STEP=setup_spark
  spark-worker-1:
    image: bde2020/spark-worker:2.4.4-hadoop2.7
    container_name: spark-worker-1
    depends_on:
      - spark-master
    ports:
      - "8081:8081"
    environment:
      - "SPARK_MASTER=spark://spark-master:7077"
But here the Spark worker does not connect to the Spark master and throws exceptions. Here are the exception log details. Can someone please guide me on why I am facing this issue?
Note: I am creating these services in drone.yml for my integration testing.
Accepted answer
Posting as an answer for better formatting. As the comments suggest, add a sleep. Assuming this is the Dockerfile (https://hub.docker.com/r/bde2020/spark-worker/dockerfile), you could sleep before starting the worker by overriding the command:
spark-worker-1:
  image: bde2020/spark-worker:2.4.4-hadoop2.7
  container_name: spark-worker-1
  # wrap in a shell so && is interpreted, then hand off to the image's worker script
  command: bash -c "sleep 10 && /bin/bash /worker.sh"
  depends_on:
    - spark-master
  ports:
    - "8081:8081"
  environment:
    - "SPARK_MASTER=spark://spark-master:7077"
Although sleep 10 is probably excessive; if that works, try trimming it down to sleep 5 or sleep 2.
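Note that depends_on only orders container startup; it does not wait for the master to actually be ready to accept connections, which is why the worker can race ahead. If tuning a fixed sleep feels brittle, another pattern worth trying (a sketch under plain docker-compose semantics; Drone's service blocks may not honor restart) is to let the worker exit and be restarted until the master is reachable:

spark-worker-1:
  image: bde2020/spark-worker:2.4.4-hadoop2.7
  container_name: spark-worker-1
  # if the worker dies because the master is not up yet, start it again
  restart: on-failure
  depends_on:
    - spark-master
  ports:
    - "8081:8081"
  environment:
    - "SPARK_MASTER=spark://spark-master:7077"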