To try to work around performance issues with Amazon EMR, I am using s3distcp to copy files from S3 onto my EMR cluster for local processing. As a first test, I copied one day's worth of data, 2,160 files, from a single directory, using the --groupBy option to collapse them into one (or a few) files.
The job appears to run fine, showing me map/reduce progress all the way to 100%, but at that point the process hangs and never comes back. How can I find out what is going on?
The source files are gzip'd text files stored in S3, roughly 30 KB each. This is a vanilla Amazon EMR cluster, and I am running s3distcp from the master node's shell.
hadoop@ip-xxx:~$ hadoop jar /home/hadoop/lib/emr-s3distcp-1.0.jar --src s3n://xxx/click/20140520 --dest hdfs:////data/click/20140520 --groupBy ".*(20140520).*" --outputCodec lzo
14/05/21 20:06:32 INFO s3distcp.S3DistCp: Running with args: [Ljava.lang.String;@26f3bbad
14/05/21 20:06:35 INFO s3distcp.S3DistCp: Using output path 'hdfs:/tmp/9f423c59-ec3a-465e-8632-ae449d45411a/output'
14/05/21 20:06:35 INFO s3distcp.S3DistCp: GET http://169.254.169.254/latest/meta-data/placement/availability-zone result: us-west-2b
14/05/21 20:06:35 INFO s3distcp.S3DistCp: Created AmazonS3Client with conf KeyId AKIAJ5KT6QSV666K6KHA
14/05/21 20:06:37 INFO s3distcp.FileInfoListing: Opening new file: hdfs:/tmp/9f423c59-ec3a-465e-8632-ae449d45411a/files/1
14/05/21 20:06:38 INFO s3distcp.S3DistCp: Created 1 files to copy 2160 files
14/05/21 20:06:38 INFO mapred.JobClient: Default number of map tasks: null
14/05/21 20:06:38 INFO mapred.JobClient: Setting default number of map tasks based on cluster size to : 72
14/05/21 20:06:38 INFO mapred.JobClient: Default number of reduce tasks: 3
14/05/21 20:06:39 INFO security.ShellBasedUnixGroupsMapping: add hadoop to shell userGroupsCache
14/05/21 20:06:39 INFO mapred.JobClient: Setting group to hadoop
14/05/21 20:06:39 INFO mapred.FileInputFormat: Total input paths to process : 1
14/05/21 20:06:39 INFO mapred.JobClient: Running job: job_201405211343_0031
14/05/21 20:06:40 INFO mapred.JobClient: map 0% reduce 0%
14/05/21 20:06:53 INFO mapred.JobClient: map 1% reduce 0%
14/05/21 20:06:56 INFO mapred.JobClient: map 4% reduce 0%
14/05/21 20:06:59 INFO mapred.JobClient: map 36% reduce 0%
14/05/21 20:07:00 INFO mapred.JobClient: map 44% reduce 0%
14/05/21 20:07:02 INFO mapred.JobClient: map 54% reduce 0%
14/05/21 20:07:05 INFO mapred.JobClient: map 86% reduce 0%
14/05/21 20:07:06 INFO mapred.JobClient: map 94% reduce 0%
14/05/21 20:07:08 INFO mapred.JobClient: map 100% reduce 10%
14/05/21 20:07:11 INFO mapred.JobClient: map 100% reduce 19%
14/05/21 20:07:14 INFO mapred.JobClient: map 100% reduce 27%
14/05/21 20:07:17 INFO mapred.JobClient: map 100% reduce 29%
14/05/21 20:07:20 INFO mapred.JobClient: map 100% reduce 100%
[hangs here]
The job shows up as:
hadoop@xxx:~$ hadoop job -list
1 job currently running
JobId State StartTime UserName Priority SchedulingInfo
job_201405211343_0031 1 1400702799339 hadoop NORMAL NA
And there is nothing in the destination HDFS directory:
hadoop@xxx:~$ hadoop dfs -ls /data/click/
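In case it helps, this is roughly how I have been trying to see where the job is stuck. The job ID and the temporary output path come from the output above; the availability of the hadoop job sub-commands and the log directory location are assumptions that may differ by Hadoop version and EMR AMI:
# Dump the state and counters of the running job (job ID from "hadoop job -list" above)
hadoop@ip-xxx:~$ hadoop job -status job_201405211343_0031
# List reduce attempts that are still running (sub-command may not exist on older Hadoop releases)
hadoop@ip-xxx:~$ hadoop job -list-attempt-ids job_201405211343_0031 reduce running
# Check the intermediate output path that S3DistCp reported in its own log
hadoop@ip-xxx:~$ hadoop dfs -ls hdfs:/tmp/9f423c59-ec3a-465e-8632-ae449d45411a/output
# Browse the Hadoop daemon logs on the master node (path is what I believe EMR uses)
hadoop@ip-xxx:~$ ls /mnt/var/log/hadoop/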
Any ideas?
Best answer
hadoop@ip-xxx:~$ hadoop jar /home/hadoop/lib/emr-s3distcp-1.0.jar --src s3n://xxx/click/20140520/ --dest hdfs:////data/click/20140520/ --groupBy ".*(20140520).*" --outputCodec lzo
I ran into a similar problem. All I had to do was add a trailing slash to the source and destination directories. With that the job completed and printed its statistics, whereas before it was hanging at 100%.
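Once the job finishes, the grouped output should appear under the destination; a quick sanity check (adjust the path to your own date/prefix) would be something like:
hadoop@ip-xxx:~$ hadoop dfs -ls hdfs:///data/click/20140520/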